> Your iCloud storage is almost full
Now for various reasons, I don't want my children using iCloud to store data, files, photos, or anything of the sort. They're free to use iMessage and other Apple services like the App Store, etc, but not storage.
So I've set about blocking the iCloud Storage API(s) via AdGuard Home tonight, as well as ensuring that my local network (_client users_) cannot bypass DNS policies and sneak out some other way, because some applications will just use other DNS servers, or DoH or DoT.
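For anyone wanting to do something similar, this is roughly the shape of it. A sketch only: the iCloud content domains, the `br0` LAN interface, and the `192.168.1.53` AdGuard Home address are all assumptions you'd adjust for your own network.

```sh
# Assumed iCloud storage/content domains, added as AdGuard Home custom
# filtering rules (verify the exact endpoints against your DNS query log):
#
#   ||icloud-content.com^
#   ||content.icloud.com^
#
# Then, on the router, force every client's DNS through AdGuard Home
# (assumed to live at 192.168.1.53, LAN interface br0):
iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 ! -d 192.168.1.53 \
  -j DNAT --to-destination 192.168.1.53
iptables -t nat -A PREROUTING -i br0 -p tcp --dport 53 ! -d 192.168.1.53 \
  -j DNAT --to-destination 192.168.1.53

# Drop DoT (853) so clients fall back to plain DNS we control; DoH rides
# on 443, so it needs blocklists of known DoH hostnames instead.
iptables -A FORWARD -i br0 -p tcp --dport 853 -j REJECT
```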


The new blog for prologic.blog, soon to be powered by zs using the zs-blog-template, is coming along very nicely 👌 It was _actually_ pretty easy to do the migration/conversion in the end. The results are not too shabby either.
Before:
- ~50MB repo
- ~267 files
After:
- ~20MB repo
- ~88 files

- Clean layout & typography
- Chroma code highlighting (aligned to your site palette)
- Accessible copy-code button
- “On this page” collapsible TOC
- RSS, sitemap, robots
- Archives, tags, tag cloud
- Draft support (hidden from lists/feeds)
- Open Graph (OG) & Twitter card meta (default image + per-post overrides)
- Ready-to-use 404 page
As well as custom routes (_redirects, rewrites, etc_) to support canonical URLs or redirecting old URLs, there's also the new `zs` external command capability itself, which now lets you do things like:

```sh
$ zs newpost
```

to help kick-start the creation of a new post with all the right "stuff"™ ready to go and then pop open your `$EDITOR` 🤞 #awesome #zs
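If you're curious what such an external command looks like: zs can run executables in `.zs/` as subcommands, so `newpost` is just a small script. A sketch, assuming front-matter fields and slug handling that aren't necessarily what the template actually ships:

```sh
#!/bin/sh
# .zs/newpost — kick-start a new draft post (sketch; front matter assumed)
title="$*"
[ -n "$title" ] || { echo "usage: zs newpost <title>" >&2; exit 1; }

# derive a URL-friendly slug from the title
slug=$(echo "$title" | tr 'A-Z' 'a-z' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
post="posts/${slug}.md"
mkdir -p posts

cat > "$post" <<EOF
---
title: $title
date: $(date -u +%Y-%m-%dT%H:%M:%SZ)
tags: []
draft: true
---
EOF

exec "${EDITOR:-vi}" "$post"
```

So `zs newpost Hello World` would drop a draft at `posts/hello-world.md` and open it in your editor.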
> immigration and multiculturalism
What about it? I grew up in a multicultural country.
https://zsblog.mills.io/
🤞
`/posts/yyyy/mm/dd/....` was _actually_ intentional. But yeah, I should figure out where to put some additional metadata on the page.
What do you mean by this? 🤔
- 2 counts of pushing and trying to get the simplest things done at work (_that for some reason are made more difficult than they should be_)
- This whole Chat Control bullshit
- And some other personal things that have been ongoing for 72 days and counting 🤬
What I want us to do is make only a half dozen or so lines of code changes to our clients and minimize the breaking changes and unknowns.
A `(Subject)` whose content is a cryptographic content-addressable hash of the "thing"™ you're replying to, forming a chain of other replies (a thread). I'm sorry, but the simplest thing to do is to make as few changes to the Spec as possible and all agree on a "Magic Date" from which our clients use the modified function(s).
> Do the simplest thing that could work.
It's one of the age-old UNIX philosophies.
Therefore, the simplest thing™ to do here is to just increase the hash length, mark a magic™ date/time as @lyse has indicated and call it a day. We'll then be fine for a few hundred years, at which point there'll be no-one left alive to give a shit™ anyway 🤣
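If we did go that way, the client-side switch is tiny. A sketch only: the cutoff date and both lengths below are placeholders, not agreed values:

```sh
#!/usr/bin/env bash
# sketch: choose twt hash length by creation time (all values placeholders)
MAGIC="2026-01-01T00:00:00Z"   # hypothetical agreed "Magic Date"
twt_ts="$1"                    # RFC3339 timestamp, assumed normalised to UTC

# lexicographic comparison is safe for same-format UTC timestamps
if [[ "$twt_ts" < "$MAGIC" ]]; then
  hash_len=7    # legacy length
else
  hash_len=12   # extended length (placeholder)
fi
echo "using ${hash_len}-char hashes"
```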
`url#timestamp` as keys.
An `index.md`, a `prehook`, and a few utilities:
```
$ git ls-files
.gitignore
.zs/config.yml
.zs/editthispage
.zs/include
.zs/layout.html
.zs/list
.zs/months
.zs/now
.zs/onthispage
.zs/posthook
.zs/postsbymonth
.zs/prehook
.zs/scripts
.zs/styles
.zs/tagcloud
.zs/taglist
.zs/years
archives/.empty
assets/css/site.css
assets/js/main.js
index.md
posts/hello-zs-blog.md
posts/on-tagging.md
posts/second-post.md
tags/.empty
```
Alice starts thread #42:

```
2025-09-25T12:00:00Z (tno:42) Launching storage design review.
```

Bob replies:

```
2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.
```

Carol replies to Bob:

```
2025-09-25T12:08:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) Token bucket sounds good.
```

Each origin feed numbers new threads `(tno:N)`. Replies carry both `(tno:N)` and `(ofeed:<origin-url>)`. Thread identity = `(ofeed, tno)`.

- Roots: `(tno:N)` (implicit `ofeed=self`).
- Replies: `(tno:N) (ofeed:<url>)`.
- Clients: increment `tno` locally for new threads, copy tags on reply.
- Subjects optional, not required.
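Resolving the thread key client-side would only take a line or two. A sketch, with illustrative parsing rather than anything spec'd:

```sh
#!/bin/sh
# sketch: derive the (ofeed, tno) thread key from a twt line
line='2025-09-25T12:05:00Z (tno:42) (ofeed:https://alice.example/twtxt.txt) I think compaction stalls under load.'

tno=$(echo "$line" | grep -oE '\(tno:[0-9]+\)' | tr -dc '0-9')
ofeed=$(echo "$line" | grep -oE '\(ofeed:[^)]+\)' | sed 's/(ofeed://; s/)//')

# roots omit ofeed: it is implicitly the feed the twt was read from
echo "thread key: ${ofeed:-<self>}#${tno}"
# → thread key: https://alice.example/twtxt.txt#42
```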
```
2025-09-25T22:41:19+10:00 Hello World
2025-09-25T22:41:19+10:00 (#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00) Hey!
```
This preserves content-based addressing as well as location-based addressing with text fragment linking.
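In client terms, a reply subject like the one above can be resolved hash-first, with the location as a fallback. A sketch; the local cache layout is an assumption:

```sh
#!/bin/sh
# sketch: resolve a reply target hash-first, falling back to its location
subject='(#kexv5vq https://example.com/twtxt.html#:~:text=2025-09-25T22:41:19%2B10:00)'

hash=$(echo "$subject" | grep -oE '#[a-z0-9]+' | head -1 | tr -d '#')
url=$(echo "$subject" | grep -oE 'https?://[^) ]+')

cache="$HOME/.cache/twts/$hash"   # assumed local cache of twts by hash
if [ -f "$cache" ]; then
  cat "$cache"                    # content addressing: verified local copy
else
  curl -s "${url%%#*}"            # location addressing: fetch page, drop fragment
fi
```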
> That's kind of my position on this. If we are going to make significant changes in the threading model, let’s keep content based addressing, but also improve the user experience. Answering your question, yes I think we can do some combination of both.
I want us to preserve content-based addressing.
Let's improve the user experience and fix the hash collision problems.
1. Link rot & migrations: domain changes, path reshuffles, CDN/mirror use, or moving from txt → jsonfeed will orphan replies unless every reader implements perfect 301/410 history, which they won’t.
1. Duplication & forks: mirrors/relays produce multiple valid locations for the same post; readers see several “parents” and split the thread.
1. Verification & spam-resistance: content addressing lets you dedupe and verify you’re pointing at exactly the post you meant (hash matches bytes). Location anchors can be replayed or spoofed more easily unless you add signing and canonicalization.
1. Offline/cached reading: without the original URL being reachable, readers can’t resolve anchors; with hashes they can match against local caches/archives.
1. Ecosystem churn: all existing clients, archives, and tools that assume content-derived IDs need migrations, mapping layers, and fallback logic. Expect long-lived threads to fracture across implementations.
> With content-addressed threading, a reply points at something that’s intrinsically identified (hash of author/feed URI + timestamp + content). That ID never changes as long as the content doesn’t. Switching to location-based anchors makes the reply target extrinsic—it now depends on where the post currently lives. In a pull-based, decentralised network, locations drift. The moment they do, thread identity fragments.
`yarnd` does have a well-documented API and two clients (a CLI and an unmaintained Flutter app).

`yarnd` was built over a weekend 😀