# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 196277
# self = https://watcher.sour.is?offset=172099
# next = https://watcher.sour.is?offset=172199
# prev = https://watcher.sour.is?offset=171999
(#2024-09-24T12:34:31Z) WebMentions would work if we agreed to implement it correctly. I never figured out how yarnd's WebMentions work, so I decided to make my own, which I'm the only one using...

I had a look at WebSub, which looks way more complex than WebMentions and seems to need a lot more overhead. We don't need near-realtime. We just need a way to notify someone that somebody they don't know about mentioned or replied to their post.
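For reference, a WebMention notification itself is tiny: it is just an HTTP POST of source and target form parameters to the receiver's advertised endpoint. A hedged sketch with placeholder URLs (the /webmention path is an assumption; a real receiver advertises its endpoint via a Link header or a link rel="webmention" element on the target page):

```
# Hypothetical: tell example.org that a twt in my feed mentions/replies to one of theirs.
curl -i \
  -d source="https://example.com/twtxt.txt" \
  -d target="https://example.org/twtxt.txt" \
  https://example.org/webmention
```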
Finally! After hours, I figured out my problems.

1. The clever Go code to filter out completely read conversations got in the way once the filtering moved into SQL. Yeah, I also did not think that this could ever conflict. But it did. Initializing the completeConversationRead flag to true now got in my way and caused a conversation to be removed. Simply deleting all the code around that flag solved it.

2. Generation of missing conversation roots in SQL simply used the oldest (smallest) timestamp from any direct reply in the tree. To find the missing roots, I grouped by subject and then aggregated using min(created_at). Now that I've optimized this to only take unread messages into consideration in the first place, I do not necessarily see the smallest child anymore (when it's already read), so the timestamp then moves forward to the next oldest unread reply. As I do not care too much about an accurate timestamp for something made up, I just adjusted my test case accordingly. Good enough for me. :-)

It's an interesting experiment with SQLite so far. I certainly did learn a few things along the way. Mission accomplished.
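A minimal sketch of the aggregation described in point 2, assuming a hypothetical twts table with subject, created_at and is_read columns (the real schema isn't shown in the twt):

```
sqlite3 twts.db <<'SQL'
-- Oldest *unread* reply per subject: used as the synthetic timestamp
-- when a conversation root is missing and has to be made up.
SELECT subject, MIN(created_at) AS root_created_at
FROM twts
WHERE is_read = 0
GROUP BY subject;
SQL
```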
@lyse aha! Just like Bash would do. I figure -- is way too broad to start an autocomplete. Got to feed it a bit more! :-D
@lyse Haha 😝
@prologic Ta! Somehow, my unit tests break, though. Running the same query manually appears to produce a plausible result. I do not understand it.
@david As far as I understand it, auto-completion *is* working, that's the issue. :-D Instead of spamming the terminal with bucketloads of possibilities, zsh's auto-complete is nice enough to ask whether to proceed or not.
@david Weird, I always thought that rsync automatically resumes the up- or download when aborted. But the manual indicates otherwise: you need --partial (-P is --partial --progress).
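For anyone following along, a quick illustration of the flags being discussed; the host and paths are placeholders:

```
# -a archive, -v verbose, -z compress; -P is shorthand for --partial --progress,
# so an aborted transfer keeps the partial file and can be resumed on the next run.
rsync -avzP user@example.com:/srv/data/ ~/data/
```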
[47°09′05″S, 126°43′16″W] Carrier too weak
@prologic I reckon I could just hash the subject internally to get a shorter version.
lol, these flags look like a Russian name
@lyse that -P is a life saver when running rsync over spotty connections. In my very illiterate opinion, it should always be a default.
@lyse Now increase the indexes on the Twt Subject from 7 bytes to 64 bytes 😈
@lyse Congrats 🙌
Hmm, this question has a leading "Yes" in favor so far, with 13 votes:

> Should we formally support edit and deletion requests?


Thanks y'all for voting (_it's all anonymous so I have no idea who's voted for what!_)

If you haven't already had your say, please do so here: http://polljunkie.com/poll/xdgjib/twtxt-v2 -- This is my feeble attempt at trying to ascertain the voice of the greater community with ideas for a Twtxt v2 specification (_which I'm hoping will just be an improved specification of what we have largely already built to date, with some small but important improvements 🤞_)
Three feeds (prologic, movq and mine) and my database is already 1.3 MiB in size. Hmm. I actually got the read filter working. More on that later after polishing it.
Starting a couple of new projects (_geez where do I find the time?!_):

HomeTunnel:
> HomeTunnel is a self-hosted solution that combines secure tunneling, proxying, and automation to create your own private cloud. Utilizing Wireguard for VPN, Caddy for reverse proxying, and Traefik for service routing, HomeTunnel allows you to securely expose your home network services (such as Gitea, Poste.io, etc.) to the Internet. With seamless automation and on-demand TLS, HomeTunnel gives you the power to manage your own cloud-like environment with the control and privacy of self-hosting.

CraneOps:
> craneops is an open-source operator framework, written in Go, that allows self-hosters to automate the deployment and management of infrastructure and applications. Inspired by Kubernetes operators, CraneOps uses declarative YAML Custom Resource Definitions (CRDs) to manage Docker Swarm deployments on Proxmox VE clusters.
@aelaraji I think all replies are missing the fact that your auto-completion isn't working. LOL. Or did I misunderstand?
@aelaraji @mckinley rsync -avzr with an optional --progress is what I always use. Ah, I could use the shorter -P, thanks @movq.
I think that's one of the worst aspects of the proposed idea of location-based addressing or identity: Alice reads Twt A and Bob reads Twt A at the same location, but Alice and Bob _could_ in fact have read entirely different content. It is no longer possible to have consistency in a decentralised way that works properly.

One could argue this is fine, because we're so small and nothing matters, but it's a property I rely on fairly heavily in yarnd, a property that, if lost, would have a significant impact on how yarnd works, I think. 🤔
Unless I'm missing something here 🤔 But a <url> <timestamp> does not, for me, identify an individual Twt; it only identifies its location, which may or may not have changed since I last saw a version of it hmmm 🧐
Also I'm not even sure I can validly cache, let alone index, feeds anymore if we do this, because if the structure of a Twt is such that I can no longer trust that an individual Twt's content hasn't been changed at the source, what's the point of caching or indexing individual twts at all? This makes the implementations of yarnd and yarns (_the search engine, crawlers and indexer_) kind of hard to reason about.
Also, you're right, I guess. But still, that also requires the author not to change the timestamp too. Hmmm
@movq I don't think there's any misunderstanding at all. I just treat every line in a feed as an individual entity. These are stored on their own.
@movq So I obviously happen to agree with you as well. That being said, one of my goals was also to bring the simplicity of Twtxt to the Web and to the general "lay person" (_of sorts_). So I eventually found myself building yarnd. Has it been successful? Well, sort of, somewhat (_but that doesn't matter, I like that it's small and niche anyway_).

I agree that the goal of simplicity is a good goal to strive for, which is why I'm actually suggesting we change the Twt identifiers to be a simple SHA256 hash, something that everyone understands and has readily available tools for. I really don't think we should be doing any of this by hand, to be honest. But part of the beauty of the Twt Subject and Twt Hash(es) in the first place is that replying by hand is much, much easier, because you only have a short 7 or 11 character thing to copy/paste into your reply. Switching to something like <url> <timestamp> with a space in it is going to become a lot harder to copy/paste, because you can't "double click" (_or is it triple click for some?_) to copy it to your clipboard/buffer now 🤣

Anyway, I digress... On the whole edit thing, I'm actually fine if we don't support it at all and don't build a protocol around that. I have zero issues with dropping that as an idea. Why? Because I actually think that clients should be auto-detecting edits anyway. They already can; I've PoC'd this myself and I _think_ it can be done. I haven't (yet), and one of the reasons I've not spent much effort on it is that it isn't something that comes up frequently anyway.

Who cares if a thread breaks every now 'n again anyway?
@prologic When I first started using twtxt, I was fascinated by the fact that it’s just a simple text file. This is already undermined *a lot* today by us using multiline replies and Markdown and what not. Still, I would love to retain this property of it being just a file that needs very few external tools to maintain. (Jenny is quite bloated, one might argue. One of the reasons for even *starting* the jenny project was the calculation of hashes – I was using a smaller, simpler toolchest before.)

If we were to use (replyto:…), I could just copy and paste the required info into my text editor. With echo … | sha256sum | base64 (+ the truncation step), I have to open a new terminal, make sure the tab gets copied verbatim, make sure that there’s no trailing whitespace in the content, little details like that. It *is* more effort.

This probably isn’t the best argument for (replyto:…), but it is *an* argument.

Would people do all this manually? I don’t know. Probably not. But part of the fascination with twtxt is that you *could* do it.

I’m speaking from a point of extreme minimalism here, and all this isn’t strictly only related to (reply:…). It just reflects my general view on twtxt. The more additional things we build on top, the less interesting twtxt becomes (for me). My goal would be to find solutions that require *less*. Like, don’t solve edits breaking threads by *adding* another protocol, but by rethinking the whole thing, finding the root cause, and maybe coming up with something that doesn’t need another building block on top.

This is all I have to say for now. 😃 I’m gonna let things cool off for a while.
@doesnm Like maybe you need to check something, debug a client, or whatever 😅
@prologic

> Location-based addressing is vulnerable to the content changing. If the content changes the "location" is no longer valid. This is a problem if you build systems that rely on this.

What you’re mentioning is *the primary reason*, imho, *for* location-based addressing. You’re referencing a certain entry in a feed by its timestamp and the author is free to edit it. This solves the problem of broken threads after edits. And editing “raw” twtxt files is a very natural thing to do in the twtxt world (just not in *Yarn*’s world). It’s one of the core aspects and main selling points: You just have a file that you can edit with vi or whatever, done.

If you think changing content is a *vulnerability* of location-based addressing, then I get the feeling that there’s some kind of big misunderstanding going on here. 🤔 Either on your end or on mine/ours. 🤔
Sorry, but I don't understand b. New feed author? But why?
@aelaraji rsync -zaXAP is what I use all the time. But that’s all – for the rest, I have to consult the manual. 😅
@lyse Yeah, it’s different for everyone. 😅 I, for one, am not particularly interested (yet) in the underlying hardware. Logic gates and stuff like that, that’s not my kind of thing. Maybe in the future, but there’s still more than enough to explore in the world of software. 😃
Don't forget about the Yarn.social online meetup coming up this Saturday! 😅 See #jjbnvgq for details! -- Hope to see y'all there 💪
👋 Don't forget to take the Twtxt v2 poll 🙏 if you haven't done so already (_sorry about the confusing question at the end!_)
@doesnm I don't even advocate for reading Twtxt in its raw form in the first place, which is why I'm in favor of continuing to use content-based addressing (hashes) and incrementally improving what we already have. IMO the only reasons to read a Twtxt file in its raw form are a) you're a developer, b) you're a new feed author, or c) you're debugging a client issue.
Agreed. But reading twtxt in raw form sounds... I can't do this
And finally, the legibility of feeds when viewing them in their raw form is worsened as you go from a Twt Subject of (#abcdefg12345) to something like (https://twtxt.net/user/prologic/twtxt.txt 2024-09-22T07:51:16Z).
There is also a ~5x increase in memory utilization cost for any implementations or implementors that use or wish to use in-memory storage (yarnd does, for example), and equally a ~5x increase in on-disk storage as well. This is based on the Twt Hash going from 13 bytes (content-based addressing) to 63 bytes (on average, for location-based addressing). There is also roughly a ~20-150% increase in the size of individual feeds that needs to be taken into consideration (_in the average case_).
With location-based addressing there is no way to verify that a single Twt _actually_ came from that feed without actually fetching the feed and checking. That has the effect of always having to rely on fetching the feed and storing a copy of the feeds you fetch (_which is okay_), but you're forced to do this. You cannot really share individual Twts anymore like yarnd does (_as peering_), because there is no "integrity" to a Twt identified by its <url> <timestamp>. The identity is meaningless and is only valid as long as you can trust the location and that the location hasn't changed its content since.
Location-based addressing is vulnerable to the content changing. If the content changes the "location" is no longer valid. This is a problem if you build systems that rely on this.
So really your argument is just that switching to location-based addressing "just makes sense". Why? Without concrete pros/cons of each approach this isn't really a strong argument, I'm afraid. In fact, I probably need to just sit down and detail the properties of both approaches and the pros/cons of each.

I also don't really buy the argument of simplicity either, personally, because I don't see it as technically much more difficult to take echo -e "<url>\t<timestamp>\t<content>" | sha256sum | base64 as the Twt Subject than to concatenate the <url> <timestamp> -- The "effort" is the same. If we're going to argue that SHA256 or cryptographic hashes are "too complicated", then I'm not really sure how to support that argument.
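Purely for illustration, here is roughly what that pipeline could look like on the command line. The feed URL, timestamp and content are placeholders, and the exact separators and truncation length (11 characters here) are assumptions rather than a settled spec:

```
# Hypothetical content-based Twt Subject: hash url + timestamp + content,
# strip sha256sum's trailing "  -", base64 the hex digest and truncate.
url="https://example.com/twtxt.txt"
ts="2024-09-22T07:51:16Z"
content="Hello world"
echo -e "${url}\t${ts}\t${content}" | sha256sum | awk '{print $1}' | base64 | head -c 11
```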
@sorenpeter Points 2 & 3 aren't really applicable here in the discussion of the threading model, I'm afraid. WebMentions is completely orthogonal to the discussion. Further, no-one that uses Twtxt really uses WebMentions; whilst yarnd supports the use of WebMentions, it's very rarely used in practice (_if ever_) -- In fact I should just drop the feature entirely.

The use of WebSub OTOH is far more useful and is used by every single yarnd pod everywhere (_not that there are that many around these days_) to subscribe to feed updates in near real-time _without_ having to poll constantly.
Some more arguments for a location-based threading model over a content-based one:

1. The format: (#<DATE URL>) or (@<DATE URL>) both make sense: # as a prefix is for a hashtag, like we already have with the (#twthash), and @ as a prefix denotes that this is a mention of a specific post in a feed, and not just the feed in general. Using either can make implementation easier, since most clients already have this kind of filtering (see the sketch after this list).

2. Having something like (#<DATE URL>) will also make mentions via webmentions for twtxt easier to implement, since there is no need to look up the #twthash. This will also make it possible to build third-party twt-mention services.

3. Supporting twt/webmentions will also increase discoverability as a way to know about both replies and feed mentions from feeds that you don't follow.
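For concreteness, a sketch of appending a reply that uses the proposed location-based subject to one's own feed; the URLs, timestamps and file path are made up:

```
# Append a reply whose subject points at the twt posted at the given
# timestamp in the other feed (proposed (#<DATE URL>) form).
printf '%s\t(#%s %s) %s\n' \
  "2024-09-24T12:00:00Z" \
  "2024-09-22T07:51:16Z" "https://example.com/twtxt.txt" \
  "Good point!" >> ~/twtxt.txt
```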
@doesnm Welcome back 😅
Finally, the pubnix is alive! What am I missing? I'm only reading the twtxt.net timeline because twtxt-v2.sh is slow at displaying the timeline...
[47°09′52″S, 126°43′28″W] Bad satellite signal -- switching to analog communication
Pinellas County Running: 4.06 miles, 00:09:11 average pace, 00:37:21 duration

#running
[47°09′03″S, 126°43′42″W] Storm recedes -- back to normal work
[47°09′47″S, 126°43′53″W] Wind speed: 42kph
🧮 USERS:1 FEEDS:2 TWTS:1102 ARCHIVED:79309 CACHE:2611 FOLLOWERS:17 FOLLOWING:14
Been trying to get acquainted with rsync(1), but whenever I Tab for completion and get this:

> λ ~/ rsync --
> zsh: do you wish to see all 484 possibilities (162 lines)?

I'm like: Nope! A scp -rpCq ... or whatever option salad will do just fine. 😅