# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 16
# self = https://watcher.sour.is/conv/c7dnr7q
My thoughts about range requests

In addition to pagination, range requests should also be used to reduce traffic.

I understand that there are corner cases making this a complicated matter.

I would like to see a meta header saying that the given twtxt file is append-only with increasing timestamps, so that a simple strategy can detect valid content fetched per range request.
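For illustration, such a promise could ride along with the existing twtxt metadata comments. The `append-only` field name below is hypothetical; no such field is specified anywhere yet:

```
# nick        = stackeffect
# url         = https://example.org/twtxt.txt
# append-only = true
```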

1. read the meta section via a range request
2. read the last fetched twt at its expected offset (as known from the last fetch)
3. if the fetched content starts with the expected twt, process the rest of the data
4. if the fetched content doesn't start with the expected twt, discard everything and fall back to fetching the whole twtxt file

Pagination (e.g. archiving old content in a different file) will lead to point 4.
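A minimal sketch of steps 2-4 in Python, assuming the feed is served over HTTP with Range support; the function names and the stored (offset, last twt) state are illustrative, not part of any spec:

```python
import urllib.request

def fetch_from(url, offset):
    # Fetch the feed from byte `offset` to the end via an HTTP Range request.
    req = urllib.request.Request(url, headers={"Range": f"bytes={offset}-"})
    with urllib.request.urlopen(req) as resp:
        if resp.status != 206:  # server ignored the Range header
            return None
        return resp.read().decode("utf-8")

def update(url, last_offset, last_twt):
    # Step 2: re-read the last fetched twt at its known byte offset.
    tail = fetch_from(url, last_offset)
    if tail is not None and tail.startswith(last_twt):
        # Step 3: it matches, so everything after it is new data.
        return tail[len(last_twt):]
    # Step 4: mismatch or no Range support: discard and fetch the whole feed.
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```

A feed that archives old twts into a different file shifts the byte offsets, so the comparison in step 2 fails and we land in step 4, exactly as described above.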

Of course, pods especially should support range requests, correct @prologic?
@stackeffect I think your proposal is actually pretty good. Especially step 2 – that should add a lot of robustness. I hadn’t thought about doing this. 🤔

Still, I’m not sure if I’d implement that (in my client). It adds quite a bit of complexity and I’d like to keep things simple(r). Granted, I probably have a bit of an “extreme” view here: Complexity is the devil. 😈 I’m not dismissing this idea in general, I’m just speaking for my client.

(Also, I’d first like to see the pagination thingy implemented. I think we can gain *a lot* if we get all the “main” feeds down to a few kilobyte, instead of megabyte. And actually, pagination is just a different form of “range requests” …)
@stackeffect Correct. Also I like your description of the algorithm, exactly how I would do it 👌
@movq Agreed re pagination first as “low hanging fruit” 👌
Looks like @stackeffect is much better at writing specs than I am, maybe he should take over. :-D Steps 2-4 are exactly what I meant a couple of days ago. I quite like the idea of some kind of append header.
@movq

Don't miss step 0 (I should have made this a separate point): a meta header promising that twts are only appended, with strictly monotonically increasing timestamps.

> (Also, I’d first like to see the pagination thingy implemented.)

In jenny I would like to see a "don't process previously fetched twts" AKA "allow the user to archive/delete old twts" feature implemented ;-)
@stackeffect I wouldn’t delete them but simply archive them. yarnd uses an archival format that is very similar to the way Git repositories work.
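As a rough sketch of what "similar to the way Git repositories work" could mean: twts are content-addressable via their hash (like the conversation hash c7dnr7q above), so an archive can fan out into directories keyed by that hash, much like Git's objects store. The layout below is a guess for illustration, not yarnd's actual on-disk format; the hash recipe follows Yarn's twt hash scheme as far as I know:

```python
import base64, hashlib, os

def twt_hash(feed_url, created, text):
    # blake2b over "<feed url>\n<timestamp>\n<text>", base32-encoded,
    # lowercased, last 7 characters: hashes like c7dnr7q or vjjdara.
    payload = f"{feed_url}\n{created}\n{text}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:]

def archive_twt(root, feed_url, created, text):
    # Fan out by hash prefix, similar to Git's .git/objects directory.
    h = twt_hash(feed_url, created, text)
    path = os.path.join(root, h[:2], h)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"{created}\t{text}\n")
    return path
```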
@lyse
Sorry, I should have mentioned your twt #vjjdara where you already described the same idea.