# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
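# Example (illustrative, combining the options above):
#     https://watcher.sour.is/api/plain/twt?uri=https://twtxt.net/user/mckinley/twtxt.txt&limit=20
# 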
# twt range = 1 2172
# self = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1572
# next = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1672
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=1472
@prologic I believe you mentioned me here because of my twt from earlier (#pfmeyva) and I wanted to clarify my position.

> your fears/worries about the “growth” may suddenly just hit us hard

I'm not afraid of the network growing, I'm actually very excited to see it grow. My concern was with keeping *my* real-life and online identities separate.
@prologic I think this is great. I'm excited to see the network grow because I believe in twtxt as an alternative to Twitter and the rest of them.

My concern from that thread was about mixing public and private identities because the network is still quite small.
@prologic Yes, they said they intend to do so, but it doesn't matter what they say.

Proprietary software claiming to “protect your privacy” cannot and should not be trusted.
@prologic The mobile browsers are both free software, but the Mac OS browser is currently proprietary.

> We plan to open source our Mac app after the beta period, like we’ve done for our iOS & Android app, and many of our built-in privacy protections are already open sourced.

https://spreadprivacy.com/introducing-duckduckgo-for-mac/
@carsten

> why not? What is wrong about that browser?

It's proprietary and DuckDuckGo has had a sketchy past.
@darch There's an atom feed: https://mckinley.cc/notes/atom.xml

@prologic, I announced it in the beginning on my main feed but I haven't been announcing each individual post. I think I will from now on.
@prologic Twtxt is anti-social social media.
I don't want to bring people here, at least those I know in real life, because I try to separate my real identity from my online identity.

This will change when the network grows bigger and there's a larger anonymity set, for lack of a better term.

Like @lyse said, this is an extremely selfish reason, but it is my reason.
Are you guys aware of the notes section of my website? Should I announce new notes here like I do with blog posts?
@justamoment What about a Mumble server?
@eaplmx That's awesome! Is it just a page generator like mine or does it have its own Web server?

Coincidentally, my time table generator was the first useful thing I wrote in C.
Another great chat with @prologic and @ocdtrekkie tonight.

Some things we talked about:

* Time zones and DST
* Mastodon and scalability
* E-mail and decentralization
* Twitter and Elon
* New twtxt feeds popping up since the bird was freed

Also, @prologic said he's not interested in ActivityPub integration for Yarn.social*
@prologic That works for me, I can't make it right now.
@markwylde If I recall, the twt retention on Yarn is time-based and it can be changed by the operator of the pod.

As for @prologic's feed, it's using the Archive Feeds extension to cut down on file size.
@carsten It's just a little C program and all the offsets are hard-coded, nothing fancy.
For next week, what about Saturday 21:00 or 22:00 UTC? @prologic and @darch, could that work? You two would have the earliest and latest local times, respectively.

It would be midday here in the US, but I have the day off on Saturday. Not sure about @ocdtrekkie.
I feel like it's even harder to find a good time for everyone now that daylight savings is over. What are we doing today?
@eaplmx Thank you, but it's really nothing special. Just a C program (formerly a shell script) and all the offsets are hard-coded. It does the job, though.
@prologic You thought that force push to yarnsocial/yarn would go unnoticed, didn't you? :)
@movq As far as I can tell, the duplicated effort is lessened by using an intermediate library like wlroots.

> Pluggable, composable, unopinionated modules for building a Wayland compositor; or about 60,000 lines of code you were going to write anyway.

I agree; the lack of hackability in Wayland is very unfortunate.
@prologic 00:00 UTC is a little early for me. 02:00 up to about 07:00 would work. US and Europe are off of daylight savings now, so I updated my time table.
@lyse

> Unfortunately, the reasoning behind rel="self" remains a mystery.

I don't know where it would be useful, either. I have one in each of my Atom feeds for the same reason. It's a Chesterton's Fence situation for me. It's not doing any harm, and the W3C says it should be there, so I put it there.
@lyse Sorry... :)

I'll use proper syntax from now on.
@lyse

> I always want my URL also to be my ID, so I have to duplicate that – unnecessarily in my opinion.

Interesting. I understand what you're saying, but I find the duplicated functionality of the RSS <guid> to be confusing. I think it would make more sense if the roles were reversed: use the <link> as the ID if there's no <guid> and never use the <guid> as a permalink. Bonus points if a <guid> is required if a <link> is not present.

Actually, when doing research for this post, I stumbled upon this blog post: How to make a good ID in Atom

I have no idea how I ended up at a blog post from 2004 that hasn't been online since 2011, but I did. Regardless, he has a point. I had never heard of tag URIs before. I think I'll start using them.
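For reference, a tag URI just combines a domain (or email address) you controlled on a given date with a locally unique string, so an Atom entry ID could look like this (hypothetical value):

tag:mckinley.cc,2022-11-09:/blog/20221109.html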
@marado As far as I can tell, the spec only provides relative link resolution through the xml:base attribute.

> Any element defined by this specification MAY have an xml:base attribute...When xml:base is used in an Atom Document, it [establishes] the base URI (or IRI) for resolving any relative references found within the effective scope of the xml:base attribute.

This is all it has to say on rel="self":

> The value "self" signifies that the IRI in the value of the href attribute identifies a resource equivalent to the containing element.
@prologic Also:

> A new sidebar design in System Settings — instantly familiar to iPhone and iPad users — makes it easier than ever to navigate settings and configure your Mac.

Let me fix that.

> We moved all the settings because we're trying to make our desktop operating system as terrible as our mobile operating system. Good luck trying to find anything, cattle!

Alright, I promise I'm done now.
@prologic

I skimmed through that page, and one thing stood out to me above everything else.

> Simply bring iPhone close to your Mac and it automatically switches to iPhone as the camera input. And it works wirelessly, so there’s nothing to plug in.

By default? That seems like extremely undesirable behavior.
@akoizumi What is that?
@prologic I'd like to write about twtxt at some point but I'm not very familiar with Mastodon and ActivityPub. Maybe you should write that one :)
@prologic Why not just have a Raspberry Pi or something with an external hard drive holding all your Linux ISOs, and connect that directly to the display? Then, it could run Kodi or Jellyfin on that display. It should also be able to start up a Wi-Fi network for streaming on other devices.

Plex might be able to do this, but I'm not sure how much they insist on handling authentication on their servers.
@prologic All I can say is that it's starting to get really difficult to read *every twt* in the Discover feed.
Atom vs. RSS: https://mckinley.cc/blog/20221109.html

cc @movq @lyse @nmke-de

It only took me 5 days :)
@marado Well said!
@prologic Here's the list and the relevant lines in .bashrc are just


alias theo='shuf -n 1 ~/Documents/theo.txt'
echo "<theo> $(theo)"


I didn't make the list myself and I can't remember where I found it.


Who do you work for?  Governments?
I also have a shell alias.


[mckinley@t430 ~]$ theo
Your emails only contain opinions.
@prologic Of course it is, it's Theo. I have a list of snarky responses of his from the mailing list and every time I open a terminal it prints a random one.


<theo> Come on guys.  Don't have me OK this.
[mckinley@t430 ~]$
@nmke-de I've been wanting to write about this in a formal manner. I'll try to get a post out about it today on mckinley.cc.

Spoiler: Atom is better.
@carsten Proprietary software claiming to "protect your privacy" cannot and should not be trusted.
@prologic You should probably get that checked out...
I was the king of Wii Sports Resort table tennis and I'm good at air hockey. I've never tried real table tennis, but I think I could be pretty good. :)
@prologic No problem, good luck today man.
@lyse I already launch MPV directly from Newsboat. If you pass a URL at the command line, it will stream it with yt-dlp.

After I update my feeds, I do a lot of manual filtering, marking videos I don't want to watch as read. This would stop the cache system from downloading or storing videos I don't want to watch.

> This program, that downloads all required videos found in Newsboat’s SQLite database and removes them once marked read, that would be a cronjob? No user interaction required, did I get this right?

It could definitely be a cron job, but I think I'd rather have it as a hotkey in my window manager. That way, I could run a video cache update after I'm done marking unwanted videos as read. Other than that, there would be no interaction required.
@lyse That was a very interesting read. It's fun to compare my setup to others.

I'll bet it's nice to have an offline copy of your videos. I've been thinking about a program that interfaces with Newsboat's database directly, getting the URLs of all unread articles with a specific tag.

When it runs, it would queue new videos for download with yt-dlp and delete any videos in the cache directory that I've marked read in Newsboat. From there, I'd just need a wrapper script to look for the video in the local cache before letting MPV stream it.

I was working on a prototype of this system a few weeks ago, but my implementation was just too hacky and I got sidetracked.

Also, what program are you using for the syntax highlighting in the article? I've been thinking of doing that for my site.
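A minimal sketch of that cache updater, assuming Newsboat's cache.db keeps an rss_item table with url, feedurl, and unread columns (worth checking against your Newsboat version); the cache directory and the YouTube feed-URL match are made up for illustration:


#!/bin/sh
# Sketch only: schema, paths, and feed matching are assumptions.
DB="$HOME/.local/share/newsboat/cache.db"
CACHE="$HOME/Videos/newsboat-cache"
mkdir -p "$CACHE"

# Queue every unread video from YouTube feeds for download.
sqlite3 "$DB" "SELECT url FROM rss_item WHERE unread = 1 AND feedurl LIKE '%youtube.com%';" |
while read -r url; do
    # </dev/null keeps yt-dlp/ffmpeg from eating the loop's stdin.
    yt-dlp --download-archive "$CACHE/archive.txt" -o "$CACHE/%(id)s.%(ext)s" "$url" </dev/null
done

# Remove cached videos whose items have since been marked read.
for f in "$CACHE"/*; do
    [ -f "$f" ] || continue
    [ "$f" = "$CACHE/archive.txt" ] && continue
    id=$(basename "${f%.*}")
    read_items=$(sqlite3 "$DB" "SELECT COUNT(*) FROM rss_item WHERE unread = 0 AND url LIKE '%$id%';")
    [ "$read_items" -gt 0 ] && rm -- "$f"
done

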
@eaplmx Dark, unless I'm reading a long article or some technical documentation. It's easier for me to read dark-on-light.

If it's anything really long like a book, I'd prefer a printed copy.
@iolfree As soon as they let the sink in, you could go to https://twitter.com/ and browse without being signed in. Previously, it was just a login page.

* 2022-10-25: https://archive.ph/q7b4J
* 2022-10-29: https://archive.ph/RoFt2

I think Nitter is fine. It has a ton of public instances and a relatively active development community. The number of public instances shows that there's a lot of demand.
@abucci Fascinating stuff. What is this simulation primarily used for?
@lyse Oh, nice! I'll check it out later, looks like an interesting read.
@lyse You're right, I copied the last note for the boilerplate and I forgot to change the date. It's fixed now.

The Last-Modified header probably accounts for the time it took in between setting the timestamp in the Atom feed and pushing the changes to the Web server.

@carsten, what's wrong with the RSS feed?
@prologic Sounds extremely frustrating. Is there any weird corporate spyware on there?
@eaplmx Yes, there's an organization-wide feed at https://git.mills.io/yarnsocial.rss. The Gitea RSS integration is totally broken, but it at least lets me know when there are updates.

There's a repository feed at https://git.mills.io/yarnsocial/yarn.rss but it's even *more* broken.
@cobra It's crazy, isn't it?
@eaplmx I keep seeing your name pop up in the RSS feed. Good work, man.

@abucci Interesting. Can you tell us more?
This can be mitigated under normal circumstances by assigning branches to the dangling commits before they're removed by Git's garbage collection.

However, this sort of malicious forced push can still cause a lot of damage, some of which can be very difficult to repair. It's better for archival purposes to make a full backup and then pull in the updates. A human can sort it out from there.
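A minimal sketch of that mitigation, assuming the force push has already been fetched and gc has not run yet; the rescue/* branch naming is made up for illustration:


# Old commits stay in the local object store until "git gc" prunes them;
# give each unreachable commit a branch so it remains reachable.
git fsck --unreachable --no-reflogs 2>/dev/null |
awk '/^unreachable commit/ { print $3 }' |
while read -r sha; do
    git branch "rescue/$sha" "$sha"
done

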
It's a beautiful fall afternoon and I have the day off. What are you all working on today? I've been working on a script that pulls in updates for a number of Git repositories at once in order to keep an updated local archive of them.

Today, I'm making it resilient against the maintainer force-pushing an empty branch in an attempt to foil archives. There's still some more work to do, but I just ran a successful test.

The complete history of the repository is backed up in the bundle before the evil maintainer's force push is brought in.

Output of my Git script when detecting a malicious forced push
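A sketch of that bundle-then-fetch step (not the actual script; the bundle path and naming are illustrative):


# Before bringing in upstream changes, snapshot everything currently in
# the local mirror into a bundle; if the fetch turns out to be a hostile
# force push, the old history can be restored from that bundle.
stamp=$(date +%Y%m%d%H%M%S)
git bundle create "../$(basename "$PWD")-$stamp.bundle" --all
git fetch --prune origin

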
@eaplmx Thanks for letting me know. It's easy to forget about topping up your account when you're paying 1 cent per day. :)

It's back up now.
@abucci Welcome to the Walled Garden.
@prologic It's not something I feel strongly about at all, I was just using it as an example. I like Gron a lot, actually.
Also, because it's so annoying to manage dependencies with C and C++, there are often flags you can set to disable functionality related to a dependency if you don't need it.

Gron has no such option. Apparently there is no reason why you *wouldn't* want a text processing program to make network requests.
As a user of programs, I groan when I see a program written in anything but C or C++. In just about every other language, it's too easy to manage dependencies, and two problems arise.

1. Microdependencies
2. Feature creep because you can do *x* in 3 lines of code by adding this giant dependency. (Why does gron need HTTP download support?)
I'm sure there are a lot of old accounts you could delete that have never made any contributions, but that information isn't trivial to get from the API endpoints to which I have access.
@prologic There's no script, it was mostly a manual process. I used jq, gron, grep, and awk to present the information in a reasonable way, then manually checked any accounts that looked suspicious. I looked at user descriptions, user URLs, and repositories.

It wasn't difficult to go through the data by hand after it was filtered a bit.

There are 195 registered users, only a handful of which have specified a description or URL.

There are 203 non-fork repositories, but only 27 of them are owned by entities other than prologic, yarnsocial, and saltyim. That prologic guy alone accounts for 152 of them.
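Roughly the kind of pipeline involved (a sketch, not the exact commands used); it assumes Gitea's /api/v1/users/search endpoint and that the returned user objects carry website and description fields, both of which may vary between Gitea versions:


curl -s 'https://git.mills.io/api/v1/users/search?limit=50&page=1' |
    jq -r '.data[] | [.login, (.website // ""), (.description // "")] | @tsv' |
    awk -F'\t' '$2 != "" || $3 != ""'

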
Hey @prologic, I wanted to learn a bit of jq so I went hunting for spam accounts on git.mills.io using data from the API. Here are the results. I thought I'd find more than 11.
@prologic I tried out your mirror utility. It's a great start, but I ran into some issues.

1. It's creating the directory tree as it should, but the assets are incorrectly placed in the same directory as the document
2. The paths in the document should be rewritten to be relative instead of absolute
3. It respects robots.txt and there is no way to turn it off (I had to delete the file on my machine to make the tool work)
4. <link rel="canonical"> isn't a page requisite, but the tool downloads the file at the specified URL anyway. (It doesn't respect robots.txt when doing this)

https://ttm.sh/0Pb.txt demonstrates the problems with the directory tree.
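For comparison, GNU Wget's long-standing flags for roughly the behaviour points 1-3 describe (the URL is a placeholder):


wget --mirror --page-requisites --convert-links --adjust-extension \
     -e robots=off https://example.com/

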
@eaplmx There's always Usenet :)
On an unrelated note, I just thought of a great idea for a twtxt bot. :)
@ocdtrekkie I am part of the 81%.
@prologic


$ curl https://raw.githubusercontent.com/answerdev/answer/main/go.mod | grep -c '^	'
84
$ curl https://raw.githubusercontent.com/answerdev/answer/main/ui/package.json | gron | grep -c '[Dd]ependencies[\[\.]'
76


No thanks...
I also use doas btw
@prologic

> "Delete and Redirect"

That's a great idea.
@abucci This is a very important ongoing discussion that must be had. I'm glad we all agree more than we disagree.
@prologic

> I honestly think the best way to handle this as we grow/scale with more pods in the Yarn.social network is to just build up a strong positive community and just have a “zero tolerance” attitude towards abuse and just nuke offending feeds/accounts without question.

I personally disagree with this moderation policy, but it's your right to enforce your rules as you see fit.

Perhaps there should be some kind of grace period, where anyone can download the contents of a "deleted" feed for *n* days before it gets removed entirely. That way, the user has the opportunity to download his feed and move it somewhere else and his followers have the opportunity to save an archive of the feed if they so choose.
@prologic It seems very straightforward to do this automatically. When I delete my own post, how is that currently propagated to other pods?

We agree that the abuse of admin powers on his own pod is not a big concern because the software should protect the right of the users to migrate to a different pod.
@prologic To be clear, you agree with me, but you say my understanding is correct? The understanding that the delete API is about policing the activities of users on other pods?
@abucci I agree. It should be as easy as possible to migrate a feed between pods.

A user of a pod certainly needs to trust the operator to some extent. Abuse of admin powers by the operator of your own pod isn't a big concern in theory because it can be countered by moving to a different pod or self-hosting your feed.

However, admin abuse is a real concern when the admin of a pod you're not using gets to police the content of your posts.
@abucci

> Imagine if a pod operator decides a twt should be deleted, then this set off delete calls for that twt to all peered pods, which in turn propagate delete calls.

Fine, as long as the post is on *his own pod*. I don't think we need any kind of moderation on Pod A by the admin of Pod B. If a function like that is going to exist, it should at least be opt-in.
@prologic If I were to run a pod, and I'd like to spin one up at some point, the abuse policy of twtxt.net (or any other pod) would be completely irrelevant. My users would be bound by the abuse policy of my pod, whether or not my abuse policy matches yours.

Users on any pod should be free to mute any feed or conversation they dislike, and pod admins should be free to "mute" entire pods if they so choose.

I may be misunderstanding you here, but the motive behind this delete API seems to be to police the activities of users on other pods.

If a pod admin decides to delete a post on his pod as you have, that deletion should eventually be propagated throughout the network. It's the same if a user chooses to delete his own post. However, you should not have any say over the deletion of a post on twt.nfld.uk. That's @jlj's decision to make.

I think @tkanos and I are in agreement here, but I don't want to speak for him.
@prologic Is this about moderation or situations in which a user chooses to delete his or her own post?
https://github.com/denilsonsa/gimp-palettes/blob/master/palettes/Pantone.gpl
https://web.archive.org/web/20110318135429/http://www.sandaleo.com:80/pantone.asp