# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 196266
# self = https://watcher.sour.is?offset=171256
# next = https://watcher.sour.is?offset=171356
# prev = https://watcher.sour.is?offset=171156
@prologic how about hashing a combination of nick/timestamp, or url/timestamp only, and not the twtxt content? On edit those will not change, so no breaking of threads. I know, I know, just adding noise here. :-P
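A minimal Python sketch of the difference (the blake2b/base32/7-character encoding is borrowed loosely from the Twt Hash spec, and the URL and timestamps are made up): a hash over url + timestamp alone survives edits, while one that includes the content does not.

```python
import base64
import hashlib

def twt_hash(feed_url, timestamp, content=None):
    # Pre-image: url + timestamp (+ content, in the current scheme).
    parts = [feed_url, timestamp]
    if content is not None:
        parts.append(content)
    digest = hashlib.blake2b("\n".join(parts).encode("utf-8"), digest_size=32).digest()
    # base32, lowercase, last 7 chars -- roughly how the Twt Hash spec encodes
    return base64.b32encode(digest).decode("ascii").rstrip("=").lower()[-7:]

url, ts = "https://example.com/twtxt.txt", "2024-09-18T12:00:00Z"  # hypothetical
before = twt_hash(url, ts, "original text")
after = twt_hash(url, ts, "edited text")
stable = twt_hash(url, ts)  # content left out of the pre-image
assert before != after              # an edit breaks the content-addressed hash
assert stable == twt_hash(url, ts)  # url+timestamp-only hash survives edits
```

The trade-off, of course, is that a content-free hash no longer pins a reply to a specific version of a twt.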
Speaking of AI tech (_sorry!_); Just came across this really cool tool built by some engineers at Google™ (_currently completely free to use without any signup_) called NotebookLM 👌 Looks really good for summarizing and talking to documents 📃
@eldersnake there has to be less reliance on a single point of failure. It is not so much about creating jobs in the US (which come with it, anyway), but about the ability to produce what's needed at home too. What's the trade off? Is it going to be a little bit more expensive to manufacture, perhaps?
@eldersnake Yeah I'm looking forward to that myself 🤣 It'll be great to see technology grow to a level of maturity and efficiency where you can run the tools on your own PC or device and use them for what, so far, I've found them to be somewhat decent at: auto-complete, search and Q&A.
@prologic That's definitely a little less depressing, when thinking of it that way 🤣 Be interesting when the hype dies down.
I'm not the biggest Apple fan around, but that is pretty awesome.
[47°09′55″S, 126°43′53″W] Dosimeter fixed
i found this site by searching "thing-fish"
@sorenpeter I really don't think we can ignore the last ~3 years and a bit of this threading model working quite well for us as a community across a very diverse set of clients and platforms. We cannot just drop something that "mostly works just fine" for the sake of "simplicity". We have to weigh up all the options. There are very real benefits to using content addressing here that IMO shouldn't be disregarded so lightly, and that provide a lot of implicit value that users of various clients just don't get to see. I'd recommend reading up on the ideas behind content addressing before simply dismissing the Twt Hash spec entirely; it wasn't even written or formalised by me, but I understand how it works quite well 😅 The guy that wrote the spec was (is?) way smarter than I was back then, probably still is now 🤣
@quark It does not. That is why I'm advocating for not using hashes for threads, but a simpler link-back scheme.
@falsifian Right I see. Yeah maybe we want to avoid that 🤣 I do kind of tend to agree with @xuu in another thread that there isn't actually anything wrong with our use of Blake2 at all really, but we may want to consider all our options.
@xuu I don't think this is a lextwt problem tbh. Just the Markdown parser that yarnd currently uses. twtxt2html uses Goldmark and appears to behave better 🤣
@xuu A long while back, I experimented with using similarity algorithms to detect if two Twts were similar enough to be considered an "Edit".
Right I see what you mean @xuu -- Can you maybe come up with a fully fleshed out proposal for this? 🤔 This will help solve the problem of hash collisions that result from the Twt/hash space growing larger over time, without us having to change anything about the way we construct hashes in the first place. We just assume spec-compliant clients will dynamically handle this as the space grows.
@xuu I _think_ we never progressed this idea further because we weren't sure how to tell if a hash collision would occur in the first place right? In other words, how does Client A know to expand a hash vs. Client B in a 100% decentralised way? 🤔
Plus these so-called "LLM"(s) have a pretty good grasp of the "shape" of language, so they _appear_ to be quite intelligent or produce intelligible responses (_when they're actually quite stupid really_).
@eldersnake You don't get left behind at all 🤣 It's hyped up so much, it's not even funny anymore. Basically at this point (_so far at least_) I've concluded that all this GenAI / LLM stuff is just a fancy auto-complete and indexing + search reinvented 🤣
[47°09′24″S, 126°43′14″W] Resetting dosimeter
Getting a little sick of AI this, AI that. Yes I'll be left behind while everyone else jumps on the latest thing, but I'm not sure I care.
[47°09′38″S, 126°43′46″W] Dosimeter malfunction
Oh, looks like it's 4 chars: git show 64bf
@prologic where was that idea?
i feel like we should isolate a subset of markdown that makes sense and build it into lextwt. it already has support for links and images. maybe basic formatting: bold, italic. possibly block quote and bullet lists. no tables or footnotes
the stem matching is the same as how Git does its abbreviated hashes. i think you can stem it down to 2 or 3 sha bytes.

if a client sees someone in a yarn using a byte-longer hash it can lengthen to match, since it can assume that maybe the other client has a collision it doesn't know about.
@prologic the basic idea was to stem the hash.. so you have a hash abcdef0123456789... any substring of that hash after the first 6 will match. so abcdef, abcdef012, abcdef0123456 all match the same. in the case of a collision i think we decided on matching the newest, since we archive off older threads anyway. the third rule was about growing the minimum hash size after some threshold of collisions is detected.
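A rough sketch of those three rules in Python (the hashes and timestamps are made up; `min_len` stands in for the minimum hash size that would grow over time):

```python
def resolve(stem, known, min_len=6):
    """Match a possibly-shortened twt hash against known twts, git-style.

    `known` maps full hash -> created_at timestamp (hypothetical data).
    """
    if len(stem) < min_len:
        return None  # shorter than the current minimum: not a valid reference
    matches = [h for h in known if h.startswith(stem)]
    if not matches:
        return None
    # on collision, match the newest twt (older threads get archived anyway)
    return max(matches, key=lambda h: known[h])

known = {
    "abcdef0123456789": "2024-09-01T00:00:00Z",
    "abcdefff00000000": "2024-09-18T00:00:00Z",
}
assert resolve("abcdef012", known) == "abcdef0123456789"  # unambiguous stem
assert resolve("abcdef", known) == "abcdefff00000000"     # collision: newest wins
assert resolve("abcde", known) is None                    # below the minimum length
```

Growing `min_len` once some collision threshold is crossed would then be the client-side knob mentioned in the third rule.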
@prologic Wikipedia claims sha1 is vulnerable to a "chosen-prefix attack", which I gather means I can write any two twts I like, and then cause them to have the exact same sha1 hash by appending something. I guess a twt ending in random junk might look suspicious, but perhaps the junk could be worked into an image URL like a screenshot. If that's not possible now, maybe it will be later.

git only uses sha1 because they're stuck with it: migrating is very hard. There was an effort to move git to sha256 but I don't know its status. I think there is progress being made with Game Of Trees, a git clone that uses the same on-disk format.

I can't imagine any benefit to using sha1, except that maybe some very old software might support sha1 but not sha256.
@bender This is the different Markdown parsers being used. Goldmark vs. gomarkdown. We need to switch to Goldmark 😅
@prologic yes, like they show here: https://ferengi.one/#uebsf7a
@quark i'm guessing the quotas text should've been emphasized?
@slashdot NahahahahHa 🤣 So glad I don't use LinkedIn 🤦‍♂️
@falsifian No you don't, sorry. But I tend to agree with you, and I think if we continue to use hashes we should keep the remainder in mind as we choose truncation values of N
@falsifian Mostly because Git uses it 🤣 Known attacks that would affect our use? 🤔
@xuu I don't recall where that discussion ended up being though?
@bender wut da fuq?! 🤣
@xuu you mean my original idea of basically just automatically detecting Twt edits from the client side?
@xuu this is where you would need to prove that the edit or delete request actually came from that feed author. Hence why integrity is much more important here.
@falsifian without supporting deletes properly though, you're running into GDPR issues and the right to be forgotten. 🤣 we've had pretty lengthy discussions about this years ago as well, but we never came to a conclusion we're all happy with.
🧮 USERS:1 FEEDS:2 TWTS:1097 ARCHIVED:79000 CACHE:2495 FOLLOWERS:17 FOLLOWING:14
@movq it would work, you are right, however, it has drawbacks, and I think in the long term would create a new set of problems that we would also then have to solve.
@david Hah 🤣
@prologic :-D Thanks! Things can come in cycles, right? This is simply another one. Another cycle, more personal than the other "alter egos".
@david We'll get there soon™ 🔜
@david Hah Welcome back! 😅
@aelaraji hey, hey! You are my very first reply! 👋🏻 Cheers!
@david "Hello back" from the other corner of the world! 🫡
Incredibly upset---more than you could imagine---because I already made the first mistake, and corrected it (but twtxt.net got it in its cache, ugh!) :'-( . Can't wait for editing to become a reality!
Alright. My first mentions---which were picked not so randomly, LOL---are @prologic, @lyse, and @movq. I am also posting my first image, which you see below. That's my neighbourhood, on a "winter" day. Hopefully @prologic will add my domain to his allowed list, so that the image (and any further ones) renders.

David's neighbourhood showing a stone sky.
Alright, announce_me set to true. Now, who do I pick to be my first mention? Decisions, decisions. Next twtxt will have my first mention(s). :-)
I have configured my twtxt.txt as simply as possible. I have set up a publish_command on jenny. Hopefully all works fine, and I am good to go. Next will be setting announce_me to true. Here we go!
Everything starts at a "hello world". At least around these parts; the nerdy parts.
@sorenpeter hmm, how does your client handle "a little editing"? I am sure threads would break just as well. 😉
@prologic, there is a parser bug on parent. Specifically on this portion:

"*If twtxt/Yarn was to grow bigger, then this would become a concern again. *But even Mastodon allows editing*, so how much of a problem can it really be? 😅*"
@movq going a little sideways on this, "*If twtxt/Yarn was to grow bigger, then this would become a concern again. *But even Mastodon allows editing*, so how much of a problem can it really be? 😅*", wouldn't preparing for potential (even if very, very, veeeeery remote) growth be a good thing? Mastodon signs all messages, keeps a history of edits, and it doesn't break threads. It isn't a problem there. 😉 It is here.

I think keeping hashes is a must. If anything, for that "feels good" feeling.
@movq Agreed that hashes have a benefit. I came up with a similar example when I twted about an 11-character hash collision. Perhaps hashes could be made optional somehow. Like, you could use the "replyto" idea and then additionally put a hash somewhere if you want to lock in which version of the twt you are replying to.
There is nothing wrong with how we currently run a diff to see what has been removed. If I build a Merkle tree of all the twt hashes in a feed, I can use that to verify whether a twt should be in a feed or not, and gossip that to my peers.
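A minimal sketch of that idea (SHA-256 for the tree nodes is an assumption, and the twt hashes are made up): fold a feed's twt hashes into a single root, so peers gossiping roots can detect that a feed's history diverged.

```python
import hashlib

def merkle_root(leaves):
    """Fold a list of twt hashes (strings) into a single Merkle root."""
    if not leaves:
        return hashlib.sha256(b"").hexdigest()
    level = [hashlib.sha256(l.encode()).hexdigest() for l in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [
            hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
            for i in range(0, len(level), 2)
        ]
    return level[0]

feed = ["64bfa1e", "p44j3q5", "abcdefg"]  # hypothetical twt hashes
root = merkle_root(feed)
# Removing (or silently editing) any twt changes the root:
assert merkle_root(feed[:-1]) != root
```

Proving that a single twt belongs to a feed would additionally need a membership proof (the sibling hashes along its path), but the root alone already catches silent removals.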