# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 1
# self = https://watcher.sour.is/conv/dd6b4ja
My thoughts about pagination (paging)

Following the discussion about pagination (paging), I think that's the right thing to do.

Fetching the same content again and again with only a marginal portion of actually new twts is unbearable and does not scale in any way. It's not only a waste of bandwidth; with an increasing number of fetchers it will also become a problem for pods to serve all the requests.

Because it's so easy to implement and simple to understand, splitting the twtxt file into parts with `next` and `prev` pointers seems like a really great solution.

As in RFC 5005, there should also be a meta header pointing to the **main** URL, e.g. `current` or `baseurl` or something like that. This way hashes can be calculated correctly even for archived twts.
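To illustrate how a fetcher could use such pointers, here is a minimal sketch in Python. The metadata syntax (`# prev = <url>` comment lines) and the example URLs are assumptions for illustration only, not the actual spec; a real client would fetch over HTTP instead of reading from an in-memory dict.

```python
import re

def parse_meta(twtxt: str) -> dict:
    """Extract '# key = value' metadata comments from a twtxt file.
    The exact key names (prev, next, current) are assumed here."""
    meta = {}
    for line in twtxt.splitlines():
        m = re.match(r"#\s*(\w+)\s*=\s*(.+)", line)
        if m:
            meta[m.group(1)] = m.group(2).strip()
    return meta

def collect_archive(start_url: str, fetch) -> list:
    """Follow 'prev' pointers from the current file back through all
    archived parts, returning the list of visited URLs in order."""
    visited, url = [], start_url
    while url and url not in visited:  # guard against pointer loops
        visited.append(url)
        url = parse_meta(fetch(url)).get("prev")
    return visited

# Hypothetical in-memory "pod" standing in for real HTTP fetches.
pod = {
    "https://example.com/twtxt.txt":
        "# prev = https://example.com/twtxt-1.txt\n"
        "2024-01-02T00:00:00Z\thello again",
    "https://example.com/twtxt-1.txt":
        "2024-01-01T00:00:00Z\tfirst twt",
}

print(collect_archive("https://example.com/twtxt.txt", pod.__getitem__))
```

A fetcher that already has the archived parts cached only needs to re-download the head file, which is exactly the bandwidth win described above.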