# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 15
# self = https://watcher.sour.is/conv/lryyjla
@prologic the real conclusion is, is it going to change, to what, and when? :-P
@quark Bloody good question 🙋 God only knows 🤣
@quark My money is on a SHA1SUM hash encoding to keep things much simpler:


$ printf 'https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊' | sha1sum | head -c 11
87fd9b0ae4e
Isn't the benefit of blake2b that it is a more efficient algorithm than SHA-1 and has the same or similar entropy to SHA-3? I thought we had partially solved this with some type of expanding hash size? Additionally, we could increase bit density by using base36 or base64/url-safe...
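A rough sketch of that density gain, assuming GNU coreutils' b2sum and basenc (plus xxd) are available; the feed URL and timestamp are just the example values from above:

$ printf 'https://twtxt.net/user/prologic/twtxt.txt\n2020-07-18T12:39:52Z\nHello World! 😊' | b2sum -l 256 | cut -d ' ' -f 1 | xxd -r -p | basenc --base64url | head -c 11

Eleven hex characters carry ~44 bits, while eleven base64url characters carry ~66 bits, so the same truncation length buys considerably more collision resistance.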
@xuu I don't recall where that discussion ended up, though?
@prologic The basic idea was to stem the hash: given a hash abcdef0123456789..., any substring of it after the first 6 characters will match, so abcdef, abcdef012, and abcdef0123456 all match the same twt. In the case of a collision, I think we decided on matching the newest, since we archive off older threads anyway. The third rule was about growing the minimum hash size after some threshold of collisions was detected.
The stem matching is the same as how Git abbreviates its object hashes. I think you can stem it down to 2 or 3 hash bytes.

If a client sees someone in a yarn using a longer hash, it can lengthen its own to match, since it can assume the other client may know about a collision that it doesn't.
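A minimal sketch of that stem lookup, assuming the client keeps a local index of full hashes, one per line, in a hypothetical twt-hashes.txt:

$ grep -c '^abcdef' twt-hashes.txt

A count of 0 means an unknown twt, 1 a unique match, and more than 1 a collision, in which case the newest twt wins per the rule above.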
Oh, looks like it's 4 chars: git show 64bf
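That lines up with Git's documented floor of 4 characters for abbreviated object names; rev-parse will also lengthen an abbreviation that turns out to be ambiguous:

$ git rev-parse --short=4 HEAD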
Right, I see what you mean @xuu -- Can you maybe come up with a fully fleshed out proposal for this? 🤔 This will help solve the problem of hash collisions that result from the Twt/hash space growing larger over time, without us having to change anything about the way we construct hashes in the first place. We just assume spec-compliant clients will dynamically handle this as the space grows.