# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 60921
# self = https://watcher.sour.is?uri=https://twtxt.net/user/prologic/twtxt.txt&offset=60921
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/prologic/twtxt.txt&offset=60821
I disabled the compression of logs on my edge, which I'm hoping will fix the "instability" I see every now and again where my edge network just "falls off the face of the earth". Some folks don't _really_ appreciate / understand this, but Disk I/O can kill your application(s) no matter what. I/O Wait is a real thing.
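
For anyone who wants to see this on their own boxes, the quick look is the I/O wait column (a couple of bog-standard commands, nothing fancy):

# look at the "wa" column -- that's CPU time stuck waiting on disk
$ vmstat 1 5
# per-device view with utilisation and await times
$ iostat -x 5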
@xuu Haha 🤣 I already have "conversations" with my junior engineers on "how to best use" and "how to avoid" 😂
@arne Yeah SSE + HTMX is basically all you need really. The whole complicated/complex JavaScript ecosystem is overkill.
@bender No I did 🤣
@bender Is dealing with spam fun though? DDoS attacks? DoS attacks? Scans for all kinds of stupid shit™? Malware? Advertising? Tracking? Spying? ..
@movq I wouldn't consider this a "dark web", no. It'd just be a new web on top of an already existing "physical" infrastructure, where the web that grew out of that is total garbage.
🤔 💭 🧐 What if, what if we built our own self-hosted / small-web / community-built/run Internet on top of the Internet using Wireguard as the underlying tech? What if we ran our own Root DNS servers? What if we set a zero tolerance policy on bots, spammers and other kinds of abuse that should never have existed in the first place? Hmmmm
@movq Oh dear 😅 We're starting to see this "garbage software" too over here 👈
@bender So far so good 😊 I'll let you know how things go though!
I keep getting this email occasionally:

> Your iCloud storage is almost full

Now for various reasons, I don't want my children to be using iCloud to store data, files, photos or any of the sort. They're free to use iMessages, and other Apple services like the App Store, etc, but not storage.

So I've set about blocking iCloud Storage API(s) via AdGuard Home tonight, as well as ensuring that my local network (_client users_) cannot bypass DNS policies and get out via other sneaky ways, because some applications will just use other DNS servers, or DoH or DoT.
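
Roughly the sort of thing involved on the router side (a sketch only, with hypothetical addresses: AdGuard Home at 10.0.0.1, LAN bridge br-lan):

# redirect any client trying to use an outside DNS server back to AdGuard Home
$ iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 -j DNAT --to-destination 10.0.0.1:53
$ iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 -j DNAT --to-destination 10.0.0.1:53
# drop DNS-over-TLS outright; DoH is just HTTPS, so that one gets handled by blocklisting known DoH endpoints in AdGuard Home
$ iptables -A FORWARD -i br-lan -p tcp --dport 853 -j DROP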
@important_dev_news Fuxk me decision makers are fuxking stupid sometimes 🤣
The people that design these bills and laws are unhinged.
@movq Good glad to hear it 😄
@movq Where do you stand on this nonsense? 🧐😆🤣
@klaxzy Fuxk yeah 🙌
@important_dev_news Thank fuxk 🤣
@lyse Cool! 😎 You _might_ be interested in my own learnings and toying around with building my own container engine / tooling (_whatever you wanna call it_) box. I had to learn a bunch of this stuff too 😅 Control Groups, Namespaces, Process Isolation, etc.
@bender See the problem is you don't live in a "busy" enough place 😂 There are roaches everywhere here! 🤣 LOL snakes too! Plovers, Magpies, Crows, Spiders, even Deer for fuck's sake 😂
@bender We have _quite a few_ that are basically part of our friendly neighborhood. They know we won't chase them away, scare them, etc. In fact some of us find little cockroaches to feed them, toss 'em up in the air and watch them swoop in and grab the little suckers 🤣
@bender Dunno 🤷
@lyse Here's my magpie 🤣
This ☝️
@movq damn! those are some fine looking chickens 😆
@zvava feeds are fetched _at least_ every 5m (_if they've changed_)
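
(For the curious, "if they've changed" just means a conditional GET -- a rough sketch of the idea, not yarnd's actual fetcher:)

# ask the server to only send the body if the feed changed since we last saw it
$ curl -s -o /dev/null -w '%{http_code}\n' \
    -H 'If-Modified-Since: Thu, 25 Sep 2025 02:00:00 GMT' \
    https://twtxt.net/user/prologic/twtxt.txt
# 304 means "not modified", 200 means there's new content to fetch and parse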
And my new migrated blog is up woohoo 🥳 https://prologic.blog/
@lyse Bahahahaha 🤣😆
@movq Same 👌
@itsericwoodward Cool! 😎
@bender Yes! What you're seeing in the demo is just demoing the routes file and redirects, etc. Pathing, mostly.
I _think_ I'm just about ready to go live with my new blog (_migrated from MicroPub_). I just finished migrating all of the content over, fixing up metadata, cleaning up, migrating media, optimizing media.

The new blog for prologic.blog soon to be powered by zs using the zs-blog-template is coming along very nicely 👌 It was _actually_ pretty easy to do the migration/conversion in the end. The results are not too shabby either.

Before:

- ~50MB repo
- ~267 files

After:

- ~20MB repo
- ~88 files
@movq Yeah I was gonna say 😅 The problem isn't that bad 🤣 But still we should fix this soon™ 🔜
@movq You were seeing that many hash collisions for you to notice this? 😱
@bender I've made several improvements today, tightened up the line height and density of the text plus a few other nice things too! I _think_ I'm ready to start migrating my blog over to this 😅
@bender I agree! I reckon the line height could be a bit smaller 👌
Pretty happy with my zs-blog-template starter kit for creating and maintaining your own blog using zs 👌 Demo of what the starter kit looks like here -- Basic features include:

- Clean layout & typography
- Chroma code highlighting (aligned to your site palette)
- Accessible copy-code button
- “On this page” collapsible TOC
- RSS, sitemap, robots
- Archives, tags, tag cloud
- Draft support (hidden from lists/feeds)
- Open Graph (OG) & Twitter card meta (default image + per-post overrides)
- Ready-to-use 404 page

There's also support for custom routes (_redirects, rewrites, etc_) for canonical URLs or redirecting old URLs, as well as the new zs external command capability that now lets you do things like:


$ zs newpost


to help kick-start the creation of a new post with all the right "stuff"™ ready to go and then pop open your $EDITOR 🤞
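
A helper like that can be tiny -- a hypothetical sketch of what a .zs/newpost script could look like (not the actual script shipped in the template):

#!/bin/sh
# hypothetical .zs/newpost -- scaffold a post and open it in your editor
title="${1:-Untitled}"
slug=$(echo "$title" | tr 'A-Z ' 'a-z-')
post="posts/$slug.md"
printf -- '---\ntitle: %s\ndate: %s\ntags: []\ndraft: true\n---\n\n' \
    "$title" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$post"
exec ${EDITOR:-vi} "$post"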

#awesome #zs
@lyse Very cool! 😎
@movq Is this for your own OS? 🤔
@bender Shh yes 🤣 this is the problem with politics 😆 By that definition, I'm not conservative 🤣
@bender Yes but I guess what I'm saying is; "so what about it?" Aren't most places in the world these days "multicultural" to some degree or another? 🤔
@bender Well see that's just what the freak'n tests say about me haha 🤣

> immigration and multiculturalism

What about it? I grew up in a multicultural country.
@movq I'm glad it makes sense to you 😅 I will never understand it. All I know is that I'm a conservative socialist and there's a lot of "stupid shit"™ happening in the world (_including my own country_). I still blame extreme Capitalism.
Okay @bender I _think_ I've made enough improvements now...

https://zsblog.mills.io/

🤞
@bender 🤣
I hope no-one here is a "nutter" 🤣
@movq See here's the thing... I just don't fucking get this whole "left" vs. "right" shit™ anymore. None of it makes any sense whatsoever. When my wife tries to explain it to me it's completely the opposite of what you just said just now 😱 -- So from here on, I'm just going to keep things simple: "nutters" and "normal" 🤣
@bender I feel you buddy 🤗 At one point we had quite a vibrant community. Phil was great, jlj too, and Adi was, well, just Adi 😅
@bender Yup! Fixing that now! 👌 Also the Tags page and the size of the tags is intentional: as more posts are tagged with the same tag, those tags render at a larger size in a kind of "tag cloud" -- at least that's the intention.
@bender Ahh yes I see what you mean, no indication of when the post was made, right? That should ideally be displayed on the page somewhere. Would you expect it in the URL as well? Not having /posts/yyyy/mm/dd/.... was _actually_ intentional. But yeah I should figure out where to put some additional metadata on the page.
@bender hopeful of the same 🤞
I will try to improve the CSS 🙏
> the single posts have no date (intended?)

What do you mean by this? 🤔
@movq Kill it with fire 🔥
https://zsblog.mills.io/ for anyone interested. I _think_ I still have some small tweaking to do before I use this for realz.
@alexonit Yeah I think we're overstating the UNIX principles a bit here 🤣 I get what you're trying to say though @zvava 😅 If I could go back in time and do it all over again, I would have gotten the Hash length correct and I _would_ have used SHA-256 instead. But someone way smarter than me designed the Twt Hash spec, we adopted it and well here we are today, it works™ 😅
@alexonit Yes well I'm pretty big on self-hosting. I've even tried to start a small business/company around it (_but that's another story for another day!_) -- Meanwhile I would encourage you to have a look at the work we've done in Salty.im 👌
@alexonit Well we have to really use the same spec or threading doesn't really work in a truly decentralized manner 😉
@zvava I actually did lately and I wasn't the only one 🤣
@zvava That's what I'm leaning towards yeah 🤞
@zvava Haha 🤣
Please don't hate me today; I'm a bit grumpy and have too many reasons to be upset:

- 2 counts of pushing and trying to get the simplest things done at work (_that for some reason are made more difficult than they should be_)
- This whole Chat Control bullshit
- And some other personal things going on that have been ongoing for 72 days and counting 🤬
And I need to make something absolutely clear as well here. Twtxt was completely and utterly dead back in [Aug 2020](https://yarn.social/about.html) when I came across the spec and its simplicity and realised the lost opportunity. Since then we've continued to grow a small but thriving community. The extensions we've built over time have stood the test of time for the past ~5 years. We need not break things too badly, because what we have today, designed years ago, _actually_ works quite well™ (_despite some flaws_).
Put another way, what you are proposing/pushing for requires hundreds of lines of code to change across a half dozen or so clients and lots of breaking changes, not to mention unknowns.

What I want us to do is make only a half dozen or so lines of code changes to our clients and minimize the breaking changes and unknowns.
@zvava Going to have to hard disagree here I'm sorry. a) no-one reads the raw/plain twtxt.txt files; the only time you do is to debug something, or have a sticky beak at the comments, which most clients will strip out and ignore. And b) I'm sorry you've completely lost me! I'm old enough to pre-date Linux becoming popular, so I'm not sure what UNIX principles you think are being broken or violated by having a Twt Subject whose content is a cryptographic content-addressable hash of the "thing"™ you're replying to, forming a chain of other replies (a thread).

I'm sorry, but the simplest thing to do is to make the smallest number of changes to the Spec as possible and all agree on a "Magic Date" after which our clients use the modified function(s).
@bender Well honestly, this is just it. My strong position on this is quite simple:

> Do the simplest thing that could work.

It's one of the age old UNIX philosophies.

Therefore, the simplest thing™ to do here is to just increase the hash length, mark a magic™ date/time as @lyse has indicated and call it a day. We'll then be fine for a few hundred years, at which point there'll be no-one left alive to give a shit™ anyway 🤣
@alexonit My problem is I don't see a world where we don't employ some form of cryptography to use as keys for threads in databases and other such things honestly. I'm not going to use url#timestamp as keys.
Oh man, if the EU _actually_ rolled out this horrid idea called ChatControl that _actually_ threatens the security and privacy of secure e2e encrypted messaging like Signal™, fuck me, I'm out 🤦‍♂️ I'll just rage quit the IT industry and become a luddite. I'm out.
@bender Yes I did about a week or so ago. It took me a lot of effort to get the content even rendered in the first place. LOL I had to basically export my blog as HTML (_can you believe that?!_) -- The Hugo export just didn't work at all 🤣
I just created a zs blogging template which I'm going to use for https://prologic.blog and I _might_ start writing long-form again soon™ 🔜 So far the "blogging" template/engine (_if you will_) is quite simple. It essentially comprises an index.md, a prehook and a few utilities:


$ git ls-files
.gitignore
.zs/config.yml
.zs/editthispage
.zs/include
.zs/layout.html
.zs/list
.zs/months
.zs/now
.zs/onthispage
.zs/posthook
.zs/postsbymonth
.zs/prehook
.zs/scripts
.zs/styles
.zs/tagcloud
.zs/taglist
.zs/years
archives/.empty
assets/css/site.css
assets/js/main.js
index.md
posts/hello-zs-blog.md
posts/on-tagging.md
posts/second-post.md
tags/.empty
@movq Yes it's kind of terrible 😞 -- Let's not do this 🤣
@bender Really? 🤔
This is possibly the only other threading model I can come up with for Twtxt that I think I can get behind.
Example:

Alice starts thread #42:


2025-09-25T12:00:00Z (tno:42) Launching storage design review.


Bob replies:


2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.


Carol replies to Bob:


2025-09-25T12:08:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) Token bucket sounds good.
TNO Threading (draft):
Each origin feed numbers new threads (tno:N). Replies carry both (tno:N) and (ofeed:<origin-url>). Thread identity = (ofeed, tno).

- Roots: (tno:N) (implicit ofeed=self).
- Replies: (tno:N) (ofeed:<url>).
- Clients: increment tno locally for new threads, copy tags on reply.
- Subjects optional, not required.

...
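
(A rough sketch of how a client might pull the thread identity (ofeed, tno) back out of a reply line under this draft:)

$ line='2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.'
$ echo "$line" | sed -n 's/.*(tno:\([0-9]*\)).*/\1/p'       # -> 42
$ echo "$line" | sed -n 's/.*(ofeed:\([^)]*\)).*/\1/p'      # -> https://alice.example/twtxt.txt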
@itsericwoodward I'm glad to hear it 🤣
Of course we still have to fix the hashing algorithm and length.
I would personally rather see something like this:


2025-09-25T22:41:19+10:00	Hello World
2025-09-25T22:41:19+10:00	(#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00) Hey!


Preserving both content-based addressing as well as location-based addressing and text fragment linking.
I was trying to say (_badly_):

> That's kind of my position on this. If we are going to make significant changes in the threading model, let’s keep content based addressing, but also improve the user experience. Answering your question, yes I think we can do some combination of both.
@alexonit Holy fuck! 🤣 I just realized how bad my typing was in my reply before 🤣 🤦‍♂️ So sorry about that haha 😆 I blame the stupid iPhone on-screen keyboard ⌨️
@alexonit Yhays kind of love you!! Stance and position on this. If we are going to make chicken changes in the threading model, let's keep content based addressing, but also improve the use of experience. So in fact, in order to answer your question, I think yes, we can do some kind of combination of both.
@lyse I don't think there's any point in continuing the discussion of Location vs. Content based addressing.

I want us to preserve Content based addressing.

Let's improve the user experience and fix the hash collision problems.
Here is just a small list of things™ that I'm aware will break, some quite badly, others in minor ways:

1. Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won’t.
1. Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several “parents” and split the thread.
1. Verification & spam-resistance: content addressing lets you dedupe and verify you’re pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
1. Offline/cached reading: without the original URL being reachable, readers can’t resolve anchors; with hashes they can match against local caches/archives.
1. Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.
We've been discussing the idea of changing the threading model from Content-based Addressing to Location-based addressing for years now. The problem is quite complex, but I feel I have to keep reminding y'all of the potential perils of changing this and the pros/cons of each model:

> With content-addressed threading, a reply points at something that’s intrinsically identified (hash of author/feed URI + timestamp + content). That ID never changes as long as the content doesn’t. Switching to location-based anchors makes the reply target extrinsic—it now depends on where the post currently lives. In a pull-based, decentralised network, locations drift. The moment they do, thread identity fragments.
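
(To make the "intrinsically identified" bit concrete -- an illustrative sketch only, not the actual Twt Hash spec, using SHA-256 as suggested earlier over feed URI + timestamp + content:)

# the ID is derived from the twt itself, so it stays stable no matter where the feed ends up living
$ printf '%s\n%s\n%s' \
    'https://example.com/twtxt.txt' \
    '2025-09-25T12:00:00Z' \
    'Hello World' | sha256sum | cut -c1-12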
@kat Mine shows 1/1 of 14 Twts 😆 I think this is a bug 🤯
@alexonit I took it down mostly because of continued abuse and spam. I intend to fix it and improve the drive and its sister at some point 🤞
@alexonit Love this 😍
@alexonit Yeah same 🤣 There's also this @news-minimalist feed that surfaces the most important shit™ anyway (_when/if that happens_).
@bender Seriously I have zero clue 🤣 I don't read or watch any news so I have no idea 🤦‍♂️
Did something bad happen in the world today? 🧐
Hello 👋 I'm back!
@bender Soon soon 🤣
@bender I wish 🤣 Nah, work on-site thingy 😆
I'm out of town, folks, and away until tomorrow (have been all week).
@thecanine I'd like that too, it just can't come from me, because native mobile dev just isn't my thing 😢
@zvava And yes yarnd does have a well documented API and two clients (CLI and unmaintained Flutter App)
@zvava We can do that 👌
@zvava The first version of what is now yarnd was built over a weekend 😀
@zvava Here you go: https://git.mills.io/yarnsocial/twtxt.dev/pulls/28