# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 2032
# self = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1232
# next = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1332
# prev = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1132
@bender why on Earth would you say "I don’t mean to insult"? I don't give a shit and I don't care which source control methods people use. I'm not evangelizing for fossil--I have never used it. I was clarifying a point that appeared to have been missed.
@eaplmx that looks a bit like a rabbit? 👀
@prologic how does one take advantage of this new feature? 🤔
#DeleteTwitter

A fairly prominent Scala software developer, Alexandru Nedelcu, on why he finally decided to leave Twitter:
> I’m a software developer. I don’t know what Twitter’s future is, it might be a bright one, but the problem for me is that Twitter is no longer the place where I can go to learn about programming. Or to find my peers. Twitter is no longer the place you go to talk of your passions, fruitful discussions being few and far between. Twitter is no longer fun, but rather it’s where you go to get your daily fix of unhinged political drama, and then worry that the world is going to shit.

Bleak. I guess having the guy who thinks we should migrate to Mars running the show over there makes Earth sound less pleasant than it is. Among other problems!
@bender My point was that fossil has an issue tracker and forum capabilities built into it. git does not. Third parties make these additional tools, which greatly improve the experience of using git, and one popular one, GitHub, has been purchased by Microsoft. That sucks.

It's silly to wave off such issues just because alternatives exist. Yes, there are alternatives, but it's a giant pain in the ass to have to uproot all your stuff from one tool and switch it to another. Almost surely, in the case of GitHub and now Gitea, you will lose information and people that way. You almost surely won't be able to transfer all the issues, discussions, wikis, etc. wholesale from one of those tools to an alternative. Even if you can, you probably won't be able to convince every single contributor to switch over to the new tool. None of that may matter for small or new hobby projects, but it's an enormous problem for larger, or older, or more well-established projects.

If you've ever been involved in or run a large software project you know what I'm talking about.
I was involved in an endeavor to switch ~20 developers from SVN to git at a company I worked for once, and that process took *years* to complete. We lost information from virtually every project, and we gave up on a few projects because they would not convert easily and were too old to put that much energy into. Yes, the change happened eventually and on the whole the results were great, but it was a big project spanning years to simply switch tools.
@prologic hmm, I started out on RCS and felt like CVS was a huge leap forward. I never used Mercurial. If I did switch version control systems I think I'd try pijul because the workflow sounds so much safer and easier than git. But I have to admit, the fact that the fossil executable is only around 4 Mbytes and contains the source control stuff, issue tracker, forum, chat room, user management, and ability to serve remote developers makes it pretty attractive. You need separate tools or plugins for that stuff with git; Microsoft bought one of them (GitHub), and another (Gitea) looks to be going down that road too.
@bender yeah, it sounds interesting but I can't see myself switching from git anytime soon.
@prologic sure smells funny. First post is....that?
@bender https://www.sqlite.org/whynotgit.html
I feel like this is a best practice pattern for any kind of computer system:

- If the system generates something, the user should be able to have a "local" directory of "patches" of the thing that they can apply to whatever comes out of the generator.

What I have in mind as a motivating example is a code generator where you write some code or configuration in one language and it spits out code in another language. Almost always, you want to customize the output a little bit. But the problem that always arises then is that the code generator will obliterate your changes the next time it is run, so if you ever have to modify the code or configuration that feeds into the generator, you can expect your changes to be undone. However, if you could have local patches that are safe from the generator, you can re-apply those patches to the newly-generated code. Naturally the patches might not work on the modified code, depending on your originating change. But a lot of times they will, or they will work with minor edits.

I think this pattern is useful much more generally, though, and might help with the problems that typically arise when using a generator that doesn't quite generate everything you might want it to or the way you want it to.
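For instance, here's a rough sketch of the pattern in Python (the `my-codegen` command and the `generated/` and `patches/` paths are made-up placeholders, not any real tool):

```python
import subprocess
from pathlib import Path

# Hypothetical layout: the generator writes into generated/, and local
# customizations live as ordinary diff files under patches/.
GENERATED_DIR = Path("generated")
PATCHES_DIR = Path("patches")

def regenerate_and_patch():
    # 1. Run the code generator (placeholder command).
    subprocess.run(["my-codegen", "--out", str(GENERATED_DIR)], check=True)

    # 2. Re-apply every local patch on top of the fresh output.
    for patch_file in sorted(PATCHES_DIR.glob("*.patch")):
        result = subprocess.run(
            ["patch", "-p1", "-d", str(GENERATED_DIR), "-i", str(patch_file.resolve())],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            # A failed patch usually means the generated code changed underneath it;
            # report it so it can be fixed up by hand instead of silently lost.
            print(f"{patch_file.name} no longer applies:\n{result.stdout}{result.stderr}")

if __name__ == "__main__":
    regenerate_and_patch()
```

The important part is just that the patches live outside anything the generator owns, so regeneration can never silently eat your customizations.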
@darch same!
@prologic ah yes that makes sense. Oh well one of these days 🤞
@prologic wow you're ramping up the visibility of yarn! When I get a chance to sit at my computer I'll check this out!
@prologic nice!
@prologic 12pm UTC? 🤔 I have the baby at this time on Saturdays, but my wife has him at this time on Sundays and I'm always up (7am local time for me). If there's a chance of shifting the meeting to 12pm UTC on Sundays I can finally join one!
@xuu meh, I personally don't like following what some corporation is adding to a spec. Microsoft pioneered embrace, extend, extinguish and they have many copycats. I always prefer sticking to the spec or the spirit of the spec.
@xuu that doesn't seem to fit the spirit of the spec, at least by my read (I could be wrong obv). The example on Wikipedia's webfinger page,
{
    "subject": "acct:bob@example.com",
    "aliases": [
        "https://www.example.com/~bob/"
    ],
    "properties": {
        "http://example.com/ns/role": "employee"
    },
    "links": [
        {
            "rel": "http://webfinger.example/rel/profile-page",
            "href": "https://www.example.com/~bob/"
        },
        {
            "rel": "http://webfinger.example/rel/businesscard",
            "href": "https://www.example.com/~bob/bob.vcf"
        }
    ]
}


and then the comparison with how mastodon uses webfinger,

{
    "subject": "acct:Mastodon@mastodon.social",
    "aliases": [
        "https://mastodon.social/@Mastodon",
        "https://mastodon.social/users/Mastodon"
    ],
    "links": [
        {
            "rel": "http://webfinger.net/rel/profile-page",
            "type": "text/html",
            "href": "https://mastodon.social/@Mastodon"
        },
        {
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://mastodon.social/users/Mastodon"
        },
        {
            "rel": "http://ostatus.org/schema/1.0/subscribe",
            "template": "https://mastodon.social/authorize_interaction?uri={uri}"
        }
    ]
}


suggests to me you want to leave the subject/acct bit as is (don't add prefixes) and put extra information you care to include in the links section, where you're free to define the rel URIs however you see fit. The notion here is that webfinger is offering a mapping from an account name to additional information about that account, so if anything you'd use a "subject": "acct:SALTY ACCOUNT_REPRESENTATION" line in the JSON to achieve what you're saying if you don't want to do that via links.
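To make that concrete, here's roughly the shape I have in mind, written as a little Python script that just prints the JSON (the salty-specific rel URIs, the inbox URL, and the key value are all invented for illustration; nothing here is specified anywhere):

```python
import json

# Hypothetical webfinger response: the subject stays a plain acct: URI,
# and the salty-specific bits hang off custom rels in "links".
doc = {
    "subject": "acct:xuu@example.com",
    "links": [
        {
            "rel": "https://salty.im/rel/inbox",      # made-up rel URI
            "href": "https://example.com/inbox/xuu",  # made-up endpoint
        },
        {
            "rel": "https://salty.im/rel/key",        # made-up rel URI
            "href": "kex1exampleexampleexample",      # placeholder public key
        },
    ],
}

print(json.dumps(doc, indent=4))
```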
@prologic oh shit it's Friday the 13th
@axodys ugh that stinks, sorry. Maybe nitter will go offline in time.
@axodys hmmm, nitter.net is still working for me?
@prologic Yeah captchas really suck. I kinda like the slide to submit idea though:



I saw this on a page about how to discourage bots without annoying people.
@justamoment @prologic hmm, does yarn have captcha on register? That'd probably do the trick in my case.
@prologic @darch I like this idea.
@prologic hmm, that might be a good admin feature. I don't think it's super high priority right now. Keeping an eye on a handful of new users and deleting the ones that don't post isn't a lot of work. But if there were several a day, it'd help a lot.
Buccipod's getting a lot of action lately! Let's see if any of these signups are real humans who post human stuff!
@prologic I wonder if we'll get a prologiczit
@prologic I don't think it's the structure of the pool that's the problem. I think it's the fact that so many automated processes rely on it that I forget one or two when I need to perform maintenance. I have it all documented and monitored, but I ignore all that and dive into "must fix nowwww" mode when the array has a problem, and it bites me in the ass every time.
@prologic why do so many have names that end in "zit" though??? 😆
@bender I will if they are dwarves of Spamia!
@support oh no, not more *zits*
@prologic speak of the devil, I just had to do this today and as always it's a 😱

I have a pool with 7 disks arranged in 3 mirrors for content plus one for logs, following this person's advice. One of the disks in one of the mirrors started throwing errors yesterday night, apparently. It made a real mess because I sync backups to that array at midnight, I have a media server with music and movies running off it, I have an app that automatically takes snapshots and prunes old snapshots that runs regularly, etc etc etc. All that stuff was in various states of hung/failed/conflicted/angry because the array was much slower than usual. ZFS is great for remaining functional even in a degraded state, but it can get slowwwwww.

I went through the procedure here as usual, except it looks like I forgot to stop a process that was using the array and it vomited all sorts of checksum errors and then I/O was suspended. This is what always stresses me out about this process, I forget something and for a brief moment I feel like I've fucked up the whole array.

Anyway, it's resilvering now and `zpool status` reports the blessed `errors: No known data errors` so I thinkkkk I'm OK.
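For my own future reference, the procedure boils down to something like this (sketched in Python just to keep the steps in one place; the pool and device names are placeholders for whatever `zpool status` actually reports on your system):

```python
import subprocess

POOL = "tank"        # placeholder pool name
BAD_DISK = "ada3"    # placeholder: the device throwing errors
NEW_DISK = "ada9"    # placeholder: the replacement device

def run(*args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 0. Quiesce anything using the pool first (backups, media server, snapshot
#    jobs) -- forgetting this step is exactly what bit me this time.

# 1. See which vdev is unhappy.
run("zpool", "status", "-v", POOL)

# 2. Take the failing disk offline and swap in the replacement.
run("zpool", "offline", POOL, BAD_DISK)
run("zpool", "replace", POOL, BAD_DISK, NEW_DISK)

# 3. Watch the resilver until it finishes with "No known data errors".
run("zpool", "status", POOL)
```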
I imagine something like this might exist already, but I'd like to see a visualization style for formal languages (such as computer languages and configuration languages) that starts with representative concrete examples and builds up the full grammar of the language from there.

I'm saying this after decades of reading language descriptions that start with a huge blob of formal grammar in BNF or some other CFG format, then exposition about what the blob means, and then only much much later, some full examples often without the details of how the examples would be parsed by the grammar that was just introduced.

Sure, given enough practice you can learn to read this, sort of. But it's a gigantic and needless pain in the ass, for no real gain. So of course what you (I?) learn to do after you've read a few of these is to skip past all the grammar stuff and look for the examples, which you then try to parse in your head while consulting the grammar stuff whenever you're confused.

It'd be much more straightforward to present a bunch of examples first, in such a way that you can see quickly and easily how each component of the example would be parsed. Have the full grammar and description in an appendix that's easy to consult if needed. I'm pretty sure both beginners and experts would benefit from this way of writing such things.
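Something like this toy sketch is what I have in mind (the mini-language, its grammar, and the example input are all made up, purely to show the example-first idea):

```python
import re

# Example first: a concrete input, annotated with the production each piece matches.
#
#   "limit=25; offset=100"
#    limit=25       -> pair   := key "=" value
#    ;              -> separator
#    offset=100     -> pair   := key "=" value
#    (whole line)   -> config := pair (";" pair)*
example = "limit=25; offset=100"

# Only then the grammar itself, here transcribed directly into code.
def parse_config(text: str) -> dict:
    config = {}
    for pair in text.split(";"):                          # pair (";" pair)*
        match = re.fullmatch(r"\s*(\w+)=(\d+)\s*", pair)  # key "=" value
        if match is None:
            raise ValueError(f"not a key=value pair: {pair!r}")
        key, value = match.groups()
        config[key] = int(value)
    return config

print(parse_config(example))  # {'limit': 25, 'offset': 100}
```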
@prologic there was a Unison conference last year and they put their talks online here: https://www.youtube.com/playlist?list=PLQ0IlHfOk1GgbXSZAjOOls9PnrO4Dpsbb . Also worth a watch to get a better feel for the language and runtime capabilities.
@prologic I don't know! Maybe there's a way to get in touch with the organizers? I just stumbled on this so I thought I'd send it along.
eek, can't edit posts in Goryon 😭
Hey @prologic would you want to represent yarn at DecentSocial?

> DecentSocial is an online unconference for the builders of the decentralized social web (e.g. ActivityPub, Mastodon, Goblin, Scuttlebutt, P2Panda, Earthstar).

> When: Feb 11th (UTC)
@prologic yeah, very irritating on so many levels. I burn through a lot of PTO making up gaps in childcare. What a world.
@prologic One of those links is a video. Watch that when you get the chance. The stuff they can do is pretty amazing. It could totally change for the better how software development is done.
@darch oh my god
@darch is the text crooked?!?!
The Unison™ Cloud Platform | Write code. Hit run. The cloud computes.

Unison is such a nice language and runtime in my opinion (if you're into pure functional languages in the ML family, anyway). The codebase manager with content-addressable code is a cool innovation. This seems like one of those languages that, once mature, would be very hard to give up once you got used to using it. I should give a disclaimer that I met and worked a bit with one of the creators of Unison in graduate school many years ago.

The Unison Cloud platform, which lets you run code on a managed cloud with nearly zero configuration and very little fanfare in the code itself, is in public beta. The link above has the details and a video about it. Worth a look if you're into that sort of thing.
@bender I listened to a podcast interview with the woman who now owns Metafilter and she's not a billionaire asshole nor does she appear to aspire to be, so there's that.

Also, I read their community guidelines and liked what they said.

Also also, they pay human beings to moderate the content, which is very good. 👍

I have no idea how much I'll use it. Only time will tell. But it seemed worth checking out.
We just had to cancel backup childcare because the person the service was going to send was refusing to take a COVID test and refusing to use the N95 masks we ask people to wear (we provide both the tests and the masks). Where I am located, COVID transmission is bad right now and there's no excuse for not being careful. Plus there's no way we're putting our baby and people we care about at risk of COVID just because someone has issues with masks and testing. I got some serious anti-vaxxer vibes from this person, so no way, cancelled.

But that means we're currently without childcare for tomorrow. Again. The world (US?) has devolved so much that realistically you need two layers of backup nowadays to reach the same level of assurance you used to get with one layer. This is the third time in a year that we've been in this position where our primary caregiver couldn't make it, then the backup caregiver fell through and we needed to find *another* caregiver.

There's a lot of truth to the saying that it takes a village to raise a child.
Wow I managed to get my preferred username on Metafilter, which I'd never joined before.
If you've ever done that, you know it can be a delicate process of slowly introducing chemicals, bacteria, plants, fish, and devices into the water at just the right times to kickstart a "virtuous cycle" wherein the wastes produced by the fish are consumed by the bacteria and plants and recycled into forms that are not dangerous to the fish (and possibly helpful to them). It's very easy to mess this up, and it can take months to cycle a large tank properly. Once cycled it's mostly self-sustaining, though still requires some tweaks now and then. At that point you can introduce more delicate and cooler fish.

You're basically manually creating a hospitable ecosystem for fish on an accelerated timeline.

I like to think of software development like this, because complex software--or more precisely, the complex purposes/functions the software serves--is a fairly delicate beastie that requires a carefully-constructed development system in which to flourish. It's very easy to screw this up, and software "dies" when you do. It can take many months or even years to cultivate such a system. "Management" is often the enemy of this process, with its artificially-forced deadlines and inattention to the realities of a healthy software process.
@eaplmx More than a third of world’s population have never used internet, says UN | Internet | The Guardian

Technology use, and having one's life overrun by it, is an extremely privileged state of being.
Random thought: developing a software project that's meant to continue existing for awhile (e.g. larger than a hobby project) is like cycling a fish tank.
@darch I almost dropped my phone why is the text crooked??
@bender same. If/when I can buy an app for a flat fee I do. I don't mind paying for good software but I don't want to rent it for the rest of my life.
@prologic this stuff is so fucked.
@prologic gotcha 👌
I like to use the gajim XMPP app on my desktop, but after the last update to v1.6.0 it stopped launching (segfault!). In the process of trying to debug it with the help of some people in the gajim support channel, I discovered that gajim runs just fine if I launch it within gdb (the debugger), but not if I launch it as usual.

A bona fide Heisenbug! I haven't run across one of these in years!
I managed to get Portal with RTX to run on my computer even though I have a several-year-old NVIDIA GPU. I get about 30 fps with reasonably nice-looking graphics settings, which isn't bad given that this is basically a tech demo to show off NVIDIA's brand new high-end GPUs. Full realtime path tracing looks amazing! Given the amount of computation required, it's mind-boggling this can be done at 30 fps--that means 30 times per second the GPU can compute a 3444x1440 frame and fire that at the monitor, where each frame requires tracing the paths of simulated beams of light for each one of those pixels (modulo a bunch of optimizations and tricks of course).
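Rough numbers, just to spell out why that feels so mind-boggling (the samples-per-pixel and bounce counts below are my guesses, not anything NVIDIA publishes for this demo):

```python
# Back-of-the-envelope math for 3444x1440 at 30 fps.
width, height, fps = 3444, 1440, 30

pixels_per_frame = width * height            # ~4.96 million
pixels_per_second = pixels_per_frame * fps   # ~149 million

# Path tracing shoots at least one ray per pixel per frame, and each path
# bounces a few times; assume (guessing) 1 sample per pixel and ~3 bounces.
samples_per_pixel = 1
bounces_per_path = 3
ray_segments_per_second = pixels_per_second * samples_per_pixel * bounces_per_path

print(f"{pixels_per_second:,} pixels/s, ~{ray_segments_per_second:,} ray segments/s")
# -> 148,780,800 pixels/s, ~446,342,400 ray segments/s
```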
@justamoment Hmm yes, that's a good point. I think if I gave out my number like that I'd pay more attention to my phone too.
On a site with many users, like twitter or mastodon, where follower counts can become high, I think it's important to know whether something you say is likely to reach only 10 people, 100 people, or 1,000 people or more. There are things I'd say when only a handful of people are likely to hear them that I would not say if I knew 1,000s might, which I think is probably true of many people.
@prologic mastodon makes that configurable, which I think is a decent approach. I actually like to watch my follower count because it can be a good signal for weird stuff happening (especially if it goes up or down by more than it "should" in a short period of time). Saying "don't include this because people will chase the metric" is.....well.......not the most nuanced reason.
It seems there's a fair amount of specialized hardware being made to accelerate "deep learning" these days. I wonder what else can be done with that hardware. I haven't yet had the time or inclination to read up on what exactly these things do.
FAR: Changing Tides is such a lovely game.
@carsten I never answer the phone unless I know who is calling and we have a scheduled call. 99/100 times an unscheduled caller is not someone I want to speak to. Either they are spam, or they are companies I have accounts with trying to sell me new services, or they are political calls. It's so bad I don't watch my phone for calls, and I don't even know where it is much of the time. I use kdeconnect to monitor texts on my computer, which is where I am sitting most of the time.

If someone I know wants to talk on the phone, we arrange by text, email, Slack, whatever works, pick a time, and then one of us calls the other. I have them in my address book so I see their name when the call comes in. I am expecting their call, so I actually locate my phone and keep it nearby.

If there is something "wrong" with me because of that, so be it. I'd rather be wrong than deal with so much wasted time and aggravation every day!
OK, we're back!
@prologic wooo that is a lot of code
@prologic I've done this many times myself (been using ZFS on my main storage cluster since 2013 or so) and I still stress about it every time 😨 Even though I've never had anything bad happen during a disk replacement and I have backups, it still worries me.
@prologic This seems to have worked--@win0err emailed me saying that my pod is no longer fetching their feed every five minutes. Thanks for your help!
@prologic no worries! It got sorted, which is what matters.
@prologic sure just point me at the right spot in the code. I actually have some free time tomorrow so it's good timing!
@prologic not to be a jerk, but exponential backoff is a one-liner in most of the Scala stream processing libraries of note. 🙈
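For the idea itself (nothing to do with how yarnd actually implements feed fetching; this is just the generic shape, sketched in Python rather than Scala):

```python
import random
import time

def fetch_with_backoff(fetch, retries=5, base_delay=1.0, max_delay=300.0):
    """Retry fetch() with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as exc:
            # Delay doubles each attempt, capped, with jitter so a crowd of
            # clients doesn't all retry at the same instant.
            delay = min(max_delay, base_delay * (2 ** attempt))
            delay *= random.uniform(0.5, 1.5)
            print(f"fetch failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
    raise RuntimeError("giving up after repeated failures")

# usage (hypothetical): fetch_with_backoff(lambda: download_feed(feed_url))
```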
@win0err glad to help! Trying to be a good netizen over here.
@prologic in my personal life I'm almost completely cloud free. My organization uses a bunch though and I have no control over that.
Hope is a practice. It's not a feeling that comes to you randomly the way rain does, when events just happen to go well. Every day, you have to spend a little time thinking about what good you want to help bring into the world and, if you have the time and energy, do a little thing to make that good thought real. Like going to the gym, or studying a new language. You need to practice to make it real.

If you don't do this, you're stuck with what powerful people want of you, thinking and feeling little else from what they direct you to think and feel. Pessimistic, only able to see a future that someone else is constructing and that only suits them and their own goals, not yours (and god help you if what they're constructing actively wants to eliminate you and people like you). Optimistic only to the extent that you might get a little of what trickles down.

I heard a quote once that said "we are just the shit that rich people grow their money in" and I think there's truth to it, but only if we accept that condition and do nothing against it. There is power in the raw assertion of "no", simply refusing to accept whatever is being told to you is "just how it is". You may not have the physical power or resources to change the conditions you're in--very few of us do--but you have power over your own mind and thoughts, and you do not need to acquiesce and accept. That's a choice you make for yourself.
@ocdtrekkie yeah....hopefully recent events undo a lot of the hype.
@eaplmx I prefer to let my values speak loudly and clearly. Then no one is confused, including me. I think it's a good practice when you live in an age when so many are spineless and mealy-mouthed toadies to the powerful, or compliant, or lazy, or.....

On that score, it is clearly and plainly wrong to experiment on psychologically distressed people who are seeking help, unless they are fully aware that that's what's happening and agree to it.
@eaplmx people thought smoking would be around forever. Till it wasn't. Change happens quickly when the conditions allow it. Right now we have a handful of extremely rich and powerful people trying to stop all progress because it suits their power, and that's why things seem "inevitable". But that is an illusion, and I reject it. It's certainly not "realistic" except in the most unkind interpretation of that word.
Malicious PyPi packages create CloudFlare Tunnels to bypass firewalls

🤦‍♂️
Serious breaches at LastPass, Slack, and CircleCI really make you wonder why anyone uses or trusts cloud services.
@eaplmx it will not be used more and more if we stop it. This is not gravity we're talking about. They are human choices, ultimately influenced by policies that can be changed.

You sound very fatalistic.
@eaplmx there's nothing tricky about leading a person in distress into believing they are speaking to a human being when they are not. That is lying, it is unethical, and it is inhuman.
This fucking GPT-3 mania needs to end. What a goddamn nightmare.
"mostly adolescents" is what his web site says about who they target. There are very strict ethics rules around experimenting on children.
This guy should be put in prison for this, frankly. How I Lied To People Who Need Mental Health Support And Performed An Unethical Experiment On Them
@prologic gotcha. I did the reset so we'll see!
@prologic I just need a list of instructions I can follow please--I feel like this is getting way more complicated/convoluted than it needs to be.

If I go to poderator settings and refresh cache, will that force my pod to start respecting the refresh he has set? I recently restarted my pod after an upgrade (yesterday I think). Wouldn't that have reset the cache?