# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 2172
# self = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=2072
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1972
@prologic I use Redirector, the first one @sorenpeter mentioned.

* https://addons.mozilla.org/en-US/firefox/addon/redirector/
* https://chromewebstore.google.com/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd
@eldersnake A *huge* effort. Andreas Kling is the lead of the SerenityOS project and he makes great videos on his YouTube channel. It's mostly been monthly updates lately on SerenityOS and Ladybird but he also has a lot of programming videos where you get to see his process, fixing a bug or adding a feature from start to finish. I highly recommend his channel.
@prologic There is JavaScript, but not everything is implemented (properly). They're writing everything including the JavaScript engine from scratch.
It worked! I can't reply to a message (this was posted from the conversation view) and the hamburger menu when the screen is narrow doesn't work, but it's getting much closer.
If you're reading this, it is now possible to post on twtxt.net using Ladybird!
@jsreed5 I had a public network block my personal Wireguard connections on port 51820 but my VPN service using Wireguard on port 1637 wasn't blocked. I don't know what they think they're accomplishing. It was at a hotel, where people might feasibly need to connect to a VPN for work.
To everyone reading this, please make sure the elderly people in your life know to be very skeptical of unsolicited messages from companies, banks, government institutions, and pop-ups that say their computer is infected.

I would recommend getting them the hell off of Windows as well if you can, installing uBlock Origin in their browser, and disabling all browser notifications. Linux Mint is a great distribution for non-technical people. Just tell them to only install software from the Software Manager application and to think of it like the app store on their phone.
@bender These sorts of scams are a huge problem, and gift cards are an easy way to move money around anonymously. There are a few different common types of scams, but they usually involve someone logging into the victim's computer using a remote desktop utility like TeamViewer and asking them for money under some false pretense. If the victim won't pay, the scammer will sometimes lock down the computer so they can't use it.

Usually, it's nothing a reinstall won't fix, but if they can change the password/recovery options of the Microsoft account and the disk is encrypted (which is the default if you sign in to a Microsoft account on Windows 11), it can be impossible to get their data back without the help of Microsoft support, who will treat you as if *you're* the one trying to steal the account. It's important to remember that the people running these types of scams don't have much deep technical knowledge (if they did, they could get a real job), so I've never heard of that happening, but it is a serious risk.
It's been known for some time that AI actually stands for "A lot of Indians".
@prologic No
@muayboranacademy Huh, a twtxt feed hosted on Google Drive.
A careless `rm -rf` just got me, big time. I realized what had happened and stopped it in less than a second, but it had already deleted ~3000 files (70 GiB) that I didn't want to delete. Luckily, I had backups in Restic.

Fun fact: This is the first time I've had to restore more than a file or two from any of my Restic repositories.
@bender I see you host your own relay. Which implementation are you using, and how did it go setting it up?
@bender Maybe I'll get back into it at some point. I think it would be a little excessive to have a standard twtxt, a rich twtxt, *and* a Nostr feed, not to mention a regular blog and a separate "notes" section on my website.
@bender I don't have one. When I was looking into Nostr, I couldn't find a client I liked so I put it on the back burner. Which one are you using?
@prologic No pain here. There's no important data on them, and the first 1/4 of the drives work reliably enough that there weren't any issues before I had to shelf it. This is just for fun. I don't even think I'd consider it a war game.
@mckinley It booted. I was going to do more but I had actual work to do so I shelved it. Maybe I'll come back to it another time. These drives are in really bad shape, though. They hold up udev by 30-60 seconds on every boot, even when booting the Arch install ISO, covering the console with lots of SATA errors and timeouts I don't really understand.

Badblocks via `mkfs.ext4 -cc` was taking too long on the full 1+1 TB array, so I made new 250 GB partitions. Neither drive had bad blocks in that range, so it was just a waste of time. Maybe if I come back to it I'll do the full array and have the EFI system partition in RAID 1 just for fun. I didn't know that worked with software RAID.

> The key part is to use --metadata 1.0 in order to keep the RAID metadata at the end of the partition, otherwise the firmware will not be able to access it.

I had the ESP on a USB stick for simplicity's sake and booted from that.
@prologic I can't really commit to that. Don't plan anything around me.
@shreyan Same here. I work relatively late so I'm never up that early.
@prologic Nice! Save some marshmallows for me.
@prologic Any of the above
#QOTD: If you could redesign a fundamental internet protocol from scratch, which one would you choose and how would you improve it?
@rrraksamam I'm looking forward to my all-SSD Btrfs RAID5 NAS. I think it'll be a while, though. I just paid $6.92/TB for a couple of used 12TB HDDs.
@prologic They're shutting down after 7 years. It was a great place to buy Monero with cash by mail. https://localmonero.co/nojs/blog/announcements/winding-down
@aelaraji Nice. Compiling problematic software is my #1 use of containers on my PC. I use a handful of them on my server.
@lyse Same here. Where does it not work, @movq?
@movq People just don't ask these questions. It's really a serious privacy issue, and I don't see it brought up very often. Not even in privacy-minded circles. If you're using a proprietary operating system on any Internet-connected device, you need to assume that the vendor can see everything you do on it, and maybe even what you do on other devices as well.
Actually, it looks like notifications using Google's service *can* be encrypted end-to-end. I don't know if this is used much in practice or if you can tell if the notifications on *your* device are encrypted. There seems to be some conflicting information out there.

Even if the content is encrypted, though, you're still giving quite a bit of metadata to Google by using their notification service.
It looks like ntfy.sh can work either through the OS's notification service or by maintaining its own connection to the server in the background. For privacy, you definitely want to use "Instant Delivery" and self-host the server.

https://docs.ntfy.sh/faq/#how-much-battery-does-the-android-app-use
https://docs.ntfy.sh/faq/#what-is-instant-delivery
@movq I haven't done any app development, but I know notifications on phones are indeed dependent on cloud services run by the OS vendor which talk to servers run by the app vendor on your behalf. This is supposedly better on battery life, but it conveniently lets your OS vendor read all your notifications.

Mobile XMPP clients usually implement notifications using XEP-0357 (Push Notifications), and it goes like this:


```
Your XMPP server -> Client vendor's notification server -> Client OS notification server -> User's device
```



It's not end-to-end encrypted so servers will usually just send a dummy message through (You received a message from juliet@capulet.lit!) so you have to open the app to see the (hopefully) encrypted message.
It's a similar flow on both iOS and Android and I assume Matrix clients work the same way.
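For the curious, the dummy summary a server publishes to the push service looks roughly like this. This is a sketch adapted from memory of the Push Notifications XEP's examples; the node ID and JIDs are placeholders, not real values:

```xml
<iq type='set' to='push.example.com' from='capulet.lit'>
  <pubsub xmlns='http://jabber.org/protocol/pubsub'>
    <!-- node ID is assigned by the push service when the client enables push -->
    <publish node='yxs32uqsflafdk3iuqo'>
      <item>
        <notification xmlns='urn:xmpp:push:0'>
          <x xmlns='jabber:x:data' type='submit'>
            <field var='FORM_TYPE'><value>urn:xmpp:push:summary</value></field>
            <field var='message-count'><value>1</value></field>
            <field var='last-message-sender'><value>juliet@capulet.lit</value></field>
          </x>
        </notification>
      </item>
    </publish>
  </pubsub>
</iq>
```

Note that the actual message body is not included, which is why the app has to wake up and fetch the real (hopefully encrypted) message itself.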
@prologic I know, right? It's a very elegant solution to the problem using standard command line utilities. It was too hard to find. I went through 3 or 4 Stack Exchange threads from my Web search before I found somebody linking to this answer. People were misunderstanding the question and suggesting all kinds of crazy methods including weird, proprietary, GUI Windows software.
How To Efficiently Copy Files To Multiple Destinations: https://mckinley.cc/notes/20240508-copy-multiple-destinations.xhtml
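The core trick is reading the source once and fanning the stream out with tee. Here's a generic sketch with standard utilities (tar, tee, mkfifo); it is not necessarily the exact command from the note, and the paths are throwaway temp directories for illustration:

```shell
set -eu

# Scratch directories standing in for a real source and two destinations
src=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
echo hello > "$src/file.txt"

# Read the source ONCE: tar serializes it, tee duplicates the stream into
# two FIFOs, and a separate extracting tar drains each FIFO.
fifo1=$(mktemp -u); fifo2=$(mktemp -u)
mkfifo "$fifo1" "$fifo2"
tar -xf - -C "$dst1" < "$fifo1" &
tar -xf - -C "$dst2" < "$fifo2" &
tar -cf - -C "$src" . | tee "$fifo1" > "$fifo2"
wait
rm -f "$fifo1" "$fifo2"
```

More destinations just mean more FIFOs and more extracting tars; the source is still only read from disk a single time.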
@prologic I can't recommend it enough.
@movq


```
$ units -t '500 gigabytes per 9 hours' 'megabytes per second'
15.432099
```



That's a very unfortunate speed in the year 2024.
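For anyone without `units` installed, the same figure falls out of plain awk arithmetic (500 GB = 500,000 MB, 9 hours = 32,400 seconds):

```shell
# 500 gigabytes over 9 hours, expressed in megabytes per second
awk 'BEGIN { printf "%.6f\n", 500000 / (9 * 3600) }'
# prints 15.432099
```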
@movq That's no fun at all. I don't like to throw away working hardware either, but I wouldn't wait 7 hours (CPU-bound!) for my manual backup to complete if it could be done faster on a 10 year old laptop with AES-NI. How much data did you add?
Speaking of which @prologic, have you heard from @ocdtrekkie lately? He's active on Mastodon but I haven't seen him around here in a long time.
@prologic I agree with @movq. Good documentation is better than an interactive setup process. My difficulties (#isyb2aq) were because I was just doing it for testing and I wanted it running as quickly as possible. If I was running it in a production capacity, I would read through the documentation.

If you're trying to make non-technical people set up their own Yarn pod, that's probably (unfortunately) impossible. Management software like Sandstorm makes it "as easy as installing apps on your phone" (direct quote from sandstorm.org) and most people still pay Google to store their photos.

I remember you were trying to do paid hosting for Yarn pods in the past. That could work, but as I'm sure you know it's difficult to convince people to use this over X or Facebook, let alone host their own pod. I think it's going to stay a small community of fairly technical people for the foreseeable future.
I did it again... #cm7e3ya #s4nbfta

I edited it because I started the line with 500., which the Markdown parser took as the start of an ordered list and made it number 1.
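If it helps anyone hitting the same thing: CommonMark lets you backslash-escape the period so a leading number isn't parsed as an ordered-list marker. The line below is just my earlier post rewritten with the escape:

```
500\. I never changed it, so that's the default of either Bash or my distro.
```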
@movq I do wonder that sometimes, but I try to take notes if I'm doing something complicated. Just a few lines in a text file with some context plus the command I used. ffmpeg.txt comes in very handy.
It's 500. I never changed it, so that's the default of either Bash or my distro. It's fine for me.
@bender That's what I suspected. I compared the text, including the alt text for the image. I guess I didn't read it carefully enough.

No worries @aelaraji, it happens to the best of us.
Why are there two threads for the same post? #2hhvp2a #kz5qjza
@aelaraji I'm definitely putting that in the list. I like tmux but I just can't wrap my head around the controls. This looks more like a tiling window manager.
@aelaraji Is that a terminal multiplexer? If so, which one? I suspect it says at the top but I can't quite read the text.
@bender Fair point... :)
@prologic Planning it ahead of time is all well and good if you have the money to buy 6 or 8 hard drives at once. I really don't, and I want to mirror the whole thing offsite anyway. Mergerfs will let me do it now, and I'll buy a drive each for SnapRAID in short order.
QOTD: Have you ever suffered significant data loss? If so, what went wrong?
@bender Ha, we both looked it up at once. You win.
@bender Synology uses single-volume Btrfs on software RAID, which seems to be pretty solid in my research but that's less flexible than ZFS. https://kb.synology.com/en-us/DSM/tutorial/What_was_the_RAID_implementation_for_Btrfs_File_System_on_SynologyNAS
@bender Exactly. It's just not an option with warnings like that all over the place. Some people have had success, but I'm not risking it. https://lore.kernel.org/linux-btrfs/20200627032414.GX10769@hungrycats.org/
@prologic ZFS is fine, but it's out-of-tree and extremely inflexible. If Btrfs RAID5/6 were reliable, it would be fantastic: add and remove drives at will, mix different sizes. I hear it's mostly okay as long as you mirror the metadata (RAID1), scrub frequently, and don't hammer it with too many random reads and writes. However, there are serious performance penalties when running scrubs on the full array, and random reads and writes are the entire purpose of a filesystem.

Bcachefs has similar features (but not all of them, like sending/receiving) and it doesn't have the giant scary warnings in the documentation. I hear it's kind of slow and it was only merged into the kernel in version 6.7. I wouldn't really trust it with my data.

I bought a couple more hard drives recently and I'm trying to figure out how I'm going to allocate them before badblocks completes. I have a few days to decide. :)
@bender There's stagit, which generates static HTML files.
@prologic I remember running yarnd for testing on a couple of different occasions and both times I found all the required command line options to be annoying. If I remember correctly, running it with missing options would only tell you the first one that was missing and you'd have to keep running it and adding that option before it would work.

This was a couple of years ago, so I don't know if anything's changed since then. It's really not a big problem, because it would be run with some kind of preset command line (systemd service, container entrypoint) in a production environment.
@bender I avoid install scripts like the plague. This isn't Windows and they're usually poorly written. I think it's better to prioritize native packages (or at least AUR, MPR, etc) and container images.
@prologic That's good advice. I don't open any ports to the Internet if I can possibly avoid it. Everything is on Wireguard, even stuff that doesn't really need to be. It's super easy to set up on other people's computers, too. Even on Windows.
@prologic Both are very nice in my opinion. I don't think you could make a mistake with either, at least when it comes to looks.
@prologic I think this would be solved in the short to mid-term by fixing the mute function. Or, maybe, adding a "Hide this user from Discover" button.
@prologic Picnic CSS is my favorite one on first glance.
@prologic Are they changing unique IDs? I hate when people do that. If I ever do that with any of my feeds, feel free to mock me relentlessly.
@bender Makes sense. We definitely need the ability to mute feeds from the Discover feed.
@movq I remember your solution. It's very simple, I like it.

Yes, my backup target is my home server. I have a hard drive dedicated to Restic repositories. It's still not a real backup as I don't have anything offsite but it's better than my previous solution. I had two very old hard drives I kept plugged in to my desktop PC and I would (on very rare occasion) plug in another hard drive and copy all the files over to it. Luckily, I've never suffered any significant data loss and I would rather not start now. Once I have automated backups on each of my machines, the next project is getting those backups offsite.
@prologic I think one-way feeds are okay and we shouldn't discourage them so strongly. On the other hand, I think it's the duty of a poderator to filter out feeds that are just noise from the Discover feed. I definitely consider a truckload of one-way posts mostly in another language to be noise. Did you get rid of Gopher Chat too? I'd call that noise, for sure.
@bender Standard twtxt is a microblog in its purest form. A blog, but smaller. It's just a list of posts to read, and that's an echochamber in the same way my regular blog is an echochamber. I don't think there's anything wrong with that.

@prologic I support the delisting of ciberlandia.pt in the Discover feed due to the sheer volume of posts from there and the fact that most of them are in Portuguese with this being a predominantly English-language pod.
@prologic Why do we need to avoid posting to the void? That's pretty much what twtxt was made for. I don't like the "Legacy feed" terminology, either. I support the delisting of ciberlandia.pt but I think this change is heading in a bad direction.

I like @sorenpeter's suggestion. It gives the users the information and lets them make their own decision instead of putting a big scary warning in their face. That's what Microsoft does, and we shouldn't be Microsoft.
@prologic How do you manage multiple remotes? Do you just run restic backup for each one?
I wish there was a good GUI for Restic so I could have non-technical people using the same thing I do.
QOTD: How do you back up your files?

I asked this one almost a year ago and I started using Restic shortly after that. When I started, I was only backing up my home folder to the repository over NFS. Now, I'm backing up the entire root filesystem to a repository using the REST backend so I can run Restic as root without breaking the permissions.

I'm working on automating it now and I'm trying to come up with something using pinentry but my proof-of-concept is getting pretty obtuse. It will be spread out in a shell script, of course, but still.


```
systemd-inhibit --what=handle-lid-switch restic --password-command='su -c "printf '"'"'GETPIN\\n\\'"'"' | WAYLAND_DISPLAY=wayland-1 pinentry-qt5 | grep ^D | sed '"'"'s/^D //'"'"'" mckinley' --repository-file /root/restic-repo backup --exclude-file /root/restic-excludes --exclude-caches --one-file-system /
```



I'm curious to see how everyone's backup solutions have changed since last year.
@aelaraji I've never had a use for Syncthing but I hope I get one at some point so I can see how it works. Do three-way merges work on Keepass database files?
I use KeePassXC because I really only use one device. I imagine it would be challenging to rsync the database around if I needed my passwords on more machines. It's probably fine if you're deliberate enough, but I don't think it would take long before I'd lose a password by editing an outdated version of the database and overwriting the main copy.

I like the simple architecture of Pass, and it would indeed lend itself well to a Git repository, but I don't like that service names are visible on the filesystem. pass-tomb might mitigate this somewhat but it seems messy and I don't know if it would work with Git without compromising the security of the tomb.

What's so good about Bitwarden? Everyone seems to love it. I like that it can be self-hosted. I certainly wouldn't want a third party in control of my password database.
@prologic This seems like it would drive a wedge between Yarn.social and the people on regular old twtxt.
@prologic I use LocalMonero (onion) to buy Monero with cash sent by mail. You can sell on there if you want to convert back to fiat. People also like Bisq, which is peer-to-peer software for buying and selling cryptocurrency.

To accept Monero, all you need is a wallet program. I recommend Feather Wallet. Create your wallet in there, then you'll copy the wallet files into monero-wallet-rpc for use with MoneroPay, see docker-compose.yaml.
@prologic Is it really banned? I thought the regulators just pressured the centralized exchanges to delist privacy coins without actually banning them outright.
@prologic I concur. This little community of ours is here because of you, and I'm very grateful for that. :)
@movq It's very useful. I always start my music player in a tmux session so I can SSH in, attach it, and control the music from another computer. It's also handy for letting long-running tasks on a remote machine continue in the background even if the SSH connection is broken.
@prologic Monero has stayed a little more stable than Bitcoin, but it's still a cryptocurrency and it's still going to fluctuate quite a bit. It also uses a proof-of-work algorithm, so it still consumes quite a bit of electricity. I think the value of being able to send any amount of money, any time of the day, to anyone on the planet in 20 minutes (appears in 2 minutes, spendable in 20) **completely privately** with near-zero transaction fees exceeds the drawbacks.

Unfortunately, the characteristics that make it useful as a global currency for day-to-day transactions also make it useful for people doing illicit things. Many exchanges, fearing regulatory action, won't accept Monero for the same reason they won't accept Bitcoin from a mixer.

Monero shouldn't be banned just because people use it for bad things. It's just a tool, and it can be used for good or evil. It's the same reasoning countries use when they ban or restrict Tor.
@prologic I'm in if you accept XMR
Actually, kyun.host might offer container hosting at some point.

> On-demand Linux containers.
> Run almost anything, without having to touch the command line.
> Coming Soon

https://kyun.host/services
@prologic That sounds great. The only other container-level hosting service I've heard of is PikaPods which seems much more managed than cas.run would be. It has customizable tier-based pricing and the minimum specs are 1/4 of a CPU core, 256 MB of memory, and "about 100 MB" of storage for $1/mo which seems awfully steep compared to a low-cost VPS. I don't know if PikaPods offers an IPv4 reverse proxy or not.
Monero uses cryptography to make transactions anonymous and the coins completely fungible. With most cryptocurrencies including Bitcoin, the transactions associated with an address are public and you can trace those coins all the way back to their origin. This means that not all coins are the same. For example, some exchanges won't accept Bitcoin that comes from a mixer because they assume you're doing something untoward.

With Monero, it's not possible to trace any transactions with just an address. People can't see what you're spending your money on or where your coins came from. Transaction fees using Monero are also very small. It's less than the equivalent of 1 cent in USD.

Minuscule transaction fees and anonymity make it the best choice in my opinion for buying goods and services online. Monero is much more like "digital cash" than Bitcoin, which I think is better described as "digital gold".
@prologic I might have mentioned this already but you might want to look into MoneroPay for payment processing when you get to that point with cas.run. It's a completely self-hosted backend service for receiving and tracking Monero payments and it's written in Go.
@movq You could always keep it running in a detached tmux session and attach it when you see the spike. Processes that were recently using the network stay in the list for 10 or 15 seconds after they're finished, so you don't have to catch it in the act.