# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 15
# self = https://watcher.sour.is/conv/uf2joiq
@movq @prologic Indeed, I missed them. Hmm, maybe your rotation strategy could be adjusted slightly, so that you keep, let's say, the ten most recent items in your main feed and only rotate the rest. That would help crappy clients that don't adhere to the spec. Such as tt.
@lyse Yeah sadly, yarnd isn't implementing feed archival crawling yet... So I also missed a few Twts there from @movq -- The _trouble_ is, it is far simpler to just `mv feed feed.1` atomically instead of doing anything more complicated. So I _think_ we should just implement the archival spec fully rather than rely on harder-to-do things that put more burden on feed authors? 🤔
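(For reference, a minimal sketch of how a rotated main feed might point at its archive under the archive-feeds extension, assuming the prev field carries the hash of the last twt in the archived file followed by its file name, resolved relative to the main feed URL; the URL, file name, and hash below are hypothetical:)

```
# url = https://example.com/twtxt.txt
# prev = abcdefg twtxt-2024-01.txt
```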
hmmmm we're also going to have to be _really_ super careful when implementing the spec fully and teaching tt and yarnd to crawl archived feeds. We have to remember to use the original Twter.URL for computing hashes and not the archived feed's url we're scraping at that moment, or all the hashes will be wrong. 😂 -- We did address this in the spec, right? 🤔
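(A minimal sketch of that pitfall in Go, assuming the twt hash is the last 7 characters of the lowercase, unpadded base32 encoding of the blake2b-256 digest of `<url>\n<timestamp>\n<content>`; the essential point is that the url fed into the hash is always the main feed's url field, never the archive file's URL. All values below are hypothetical:)

```go
package main

import (
	"encoding/base32"
	"fmt"
	"strings"

	"golang.org/x/crypto/blake2b"
)

// hashTwt computes a twt hash from the *main* feed URL, the twt's
// RFC 3339 timestamp, and its content. Passing the archived feed's
// URL here instead would silently produce wrong hashes.
func hashTwt(mainFeedURL, timestamp, content string) string {
	payload := strings.Join([]string{mainFeedURL, timestamp, content}, "\n")
	sum := blake2b.Sum256([]byte(payload))
	enc := base32.StdEncoding.WithPadding(base32.NoPadding)
	h := strings.ToLower(enc.EncodeToString(sum[:]))
	return h[len(h)-7:] // keep the last 7 characters
}

func main() {
	fmt.Println(hashTwt(
		"https://example.com/twtxt.txt", // main feed url, NOT .../twtxt-2024-01.txt
		"2024-01-02T03:04:05Z",
		"Hello twtiverse!",
	))
}
```

(A crawler walking prev links would carry the main feed's url along and pass it to a function like this for every archived twt it hashes.)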
@prologic Wahahahahahahah. I’d rather go back to doing Advent of Code. Real life sucks. In AoC, all problems are clearly stated and there’s a definite solution! 🤣 But yeah, we did: “For all feeds (main and archived), the url fields of the main feed shall be used for twt hashing. (There can be multiple url fields in the main feed, see the page metadata extension on how to select the correct one.)”
@prologic But in all seriousness, I agree, we should implement this properly. (Workarounds have their own issues. If I simply introduced a small overlap between rotations, who guarantees that other clients check my feed often enough … How large should the overlap be … and so on.)
@movq Exactly 👌