# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 6
# self = https://watcher.sour.is/conv/opw257a
My thoughts about pagination (paging)

Following the discussion about pagination (paging), I think it's the right thing to do.

Fetching the same content again and again with only a marginal portion of actually new twts is unbearable and does not scale in any way. It's not only a waste of bandwidth; with an increasing number of fetchers it will also become a problem for pods to serve all the requests.

Because it's so easy to implement and simple to understand, splitting the twtxt file into parts with next and prev pointers seems like a really good solution.
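A split feed could then carry those pointers in its comment header. A minimal sketch of what a page of an archived feed might look like (the exact field names `prev`/`next` and the `<hash> <filename>` value format are draft-level assumptions, not a settled spec):

```
# url = https://example.com/twtxt.txt
# prev = opw257a twtxt-2021-07.txt
# next = twtxt.txt
2021-08-01T10:00:00Z	First twt of this page.
```

A client that already has `twtxt-2021-07.txt` can stop following the `prev` chain there, so only genuinely new pages are fetched.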

As in RFC5005, there should also be a meta header pointing to the **main** URL, e.g. current or baseurl or something like that. This way hashes can be calculated correctly even for archived twts.
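To see why the main URL matters for hashing: to my understanding, twt hashes are derived from the feed URL together with the twt's timestamp and content, so an archive fetched from a different URL would produce different hashes unless the canonical URL from the header is used. A sketch, assuming the blake2b/base32 scheme of the twt hash extension (digest base32-encoded, lowercased, padding stripped, last 7 characters kept):

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, content: str) -> str:
    """Sketch of a twt hash: blake2b over "url\ntimestamp\ncontent",
    base32-encoded (lowercase, no padding), keeping the last 7 chars."""
    payload = "\n".join([feed_url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return encoded[-7:]

# Hashing against the archive's URL instead of the canonical one
# would break conversation threading: the hashes no longer match.
main = twt_hash("https://example.com/twtxt.txt",
                "2021-08-01T10:00:00Z", "Hello twtiverse!")
archived = twt_hash("https://example.com/twtxt-2021-07.txt",
                    "2021-08-01T10:00:00Z", "Hello twtiverse!")
```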
Yes, I'm also pretty confident that this is a good extension. If I have the time and energy (maybe I won't), I'll open a PR with a formal draft for this on the weekend. 👌
@stackeffect Agreed 👌 Nicely put too 😁