https://www.sescsp.org.br/programacao/grupo-de-estudos-em-python-primeiros-passos/
#SescSP #SãoPaulo
`banner` would) for clients having no knowledge of it.
`display_name` will be redundant, and add to the "busy" factor. That is, the opposite of simplicity.
`display_name` is worthwhile, since `nick` is functionally a display name
`display_name`: To show a human-readable alternative for a nick; it falls back to `nick` if not defined.
`banner`: Using the same format as `avatar`, but the image expected is wider, inspired by other socials around.
`#<https://example.com/tw.txt#yyyy-mm-ddThh:mm:ssZ>` is foolproof)
`@<...>` being mentions
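For illustration only, a rough sketch of how the fields discussed above could sit in a feed's comment header, assuming the usual `# key = value` metadata convention; `display_name` and `banner` are just the proposals from this thread, and the nick, URLs, and the final twt line (with an `@<...>` mention) are made up:

# nick = zvava
# url = https://example.com/tw.txt
# avatar = https://example.com/avatar.png
# display_name = Zvava
# banner = https://example.com/banner.png
2025-10-01T10:28:00Z	Hey @<friend https://example.org/tw.txt>, testing the new banner!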
18:16 <aelaraji> quark 🙏 much appreciated but it won't be necessary, since there isn't much to miss out on in most of where I hang out, so I could just disconnect and spare everyone else the noise
18:17 *** aelaraji (aelaraji@776014f5a3edd32f1ed19658b7b85c8c655945b0feacaedd92fe60e61a3c0ae2) has quit (/ME goes "yeeeeet..!")
18:18 <quark> No noise for me.
18:18 <quark> It’s all good.
18:18 <quark> What would IRC be without on/offs?
18:19 <quark> Preeeety boring!
18:19 <quark> Ah, he was gone.
18:19 <quark> Well, I will twtxt this to him. LOL.
`url` to be used for hashing. No matter if it points to a different feed or whatever. Just unsubscribe from malicious feeds and you're done.
Since the `url` is used for hashing, it must never change. Otherwise, it will break threading, as you already noticed. If your feed moves and you wanna keep the old messages in the same new feed, you still have to point to the old `url` location and keep that forever. But you can add more `url`s. As I said several times in the past, in hindsight, using the first `url` was a big mistake. It would have been much better if the last encountered `url` were used for hashing onwards. This way, feed moves would be relatively straightforward. However, that ship has sailed. Luckily, feeds typically don't relocate.
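A minimal sketch of that first-url rule, assuming the `# url = ...` metadata comment convention (the function names here are made up):

// Collect every `# url = ...` metadata value from a raw feed.
export function collectFeedUrls(feedText) {
  const urls = [];
  for (const line of feedText.split('\n')) {
    const match = line.match(/^#\s*url\s*=\s*(\S+)/);
    if (match) urls.push(match[1]);
  }
  return urls;
}
// Per the above, the *first* url is the one hashes are based on, so it must
// stay stable even if more url fields are added later; fall back to the
// URL the feed was fetched from when it declares none.
export function hashingUrl(feedText, fetchedFrom) {
  return collectFeedUrls(feedText)[0] ?? fetchedFrom;
}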
`(#abcdefghijkl https://example.com/tw.txt#:~:text=2025-10-01T10:28:00Z)`, because it can be simply hacked into clients currently on hash v1 and provides an off-ramp to location-based addressing (though I still think the format should be changed to something like `#<abc... http://example.com/...>` so it's cleaner once we finally drop hashes)
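A sketch of building that dual-format subject, taking the quoted example literally (hash, feed URL, and timestamp are caller-supplied; nothing here is a settled spec):

// Keep the old hash for hash-v1 clients and append a text-fragment locator
// pointing at the twt's timestamp for location-based clients.
export function replySubject(hash, feedUrl, timestamp) {
  return `(#${hash} ${feedUrl}#:~:text=${timestamp})`;
}
// replySubject('abcdefghijkl', 'https://example.com/tw.txt', '2025-10-01T10:28:00Z')
// -> '(#abcdefghijkl https://example.com/tw.txt#:~:text=2025-10-01T10:28:00Z)'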
`url` (with the url as a fallback), the key could even be a public key so it can be used verifiably in crypto functions
`[#THREAD_ID] Hello world` and replies with `(#REPLY_ID) Ahoy`) so the content can change without affecting the thread reference, and anyone can use their own schemes freely
`cors-anywhere` via docker in a minute and it would work the same.
Are `url` metadata fields unequivocally treated as the canon feed url when calculating hashes, or are they ignored if they're not *at least* proper urls? Do you just tolerate it if they're impersonating someone else's feed, or pointing to something that isn't even a feed at all?
If the `url` metadata field changes, should it be logged with a time so we can still calculate hashes for old posts? Or should it never be updated? (In the case of a pod, where the end user has no choice in how such events are treated.) Or do we redirect all the old hashes to the new ones (probably this, since it would be helpful for edits too)?
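If the answer ends up being the redirect option, a tiny sketch of what a pod-side redirect table could look like (purely hypothetical, names made up):

// Map of superseded hash -> replacement hash, filled in when a feed moves
// or a twt is edited.
const hashRedirects = new Map();
// Follow the chain (assuming no cycles) until we reach a hash that was never
// superseded, so old conversation subjects keep resolving.
export function canonicalHash(hash) {
  while (hashRedirects.has(hash)) hash = hashRedirects.get(hash);
  return hash;
}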
Use the `response.url` value to fetch it again for updates without having to do extra calls (you can store it verbatim or as a flag to be able to change the proxy later).
export async function fetchWithProxy(url, proxy = null) {
  // Try the direct fetch first; if it fails (e.g. a network or CORS error)
  // and a proxy is configured, retry through the proxy.
  return await fetch(url).catch(err => {
    if (!proxy) throw err;
    return fetch(`${proxy}${encodeURIComponent(url)}`);
  });
}
// Using it with
const res = await fetchWithProxy('https://twtxt.net/user/zvava/twtxt.txt', 'https://corsproxy.io/?');
// Get the working url (direct or through proxy)
const fetchingURL = res.url;
// Get the twtxt feed content (or handle errors)
const text = await res.text();
`https://my-proxy?$TWTXT_URL`, since it allows you to define any proxy with more freedom, without a prefix format.
`cors-anywhere` or build their own (with twtxt it should just be a GET).
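A sketch of how that placeholder style could be expanded per request (whether the feed URL needs percent-encoding depends on the proxy, so that part is an assumption):

// Substitute the feed URL into a user-supplied proxy template.
export function expandProxyTemplate(template, feedUrl) {
  return template.replace('$TWTXT_URL', encodeURIComponent(feedUrl));
}
// expandProxyTemplate('https://my-proxy?$TWTXT_URL', 'https://twtxt.net/user/zvava/twtxt.txt')
// -> 'https://my-proxy?https%3A%2F%2Ftwtxt.net%2Fuser%2Fzvava%2Ftwtxt.txt'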
`Access-Control-Allow-Origin` header, so I just jumped into building a backend instead. Did you find a way around this limitation? :o
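For what it's worth, that backend workaround can be tiny; a rough sketch (assuming Node 18+ for the built-in fetch, everything else made up) that fetches the feed server-side and re-serves it with the missing CORS header:

import http from 'node:http';

http.createServer(async (req, res) => {
  // Feed URL arrives percent-encoded in the query string, e.g. /?https%3A%2F%2Fexample.com%2Ftw.txt
  const target = decodeURIComponent(new URL(req.url, 'http://localhost').search.slice(1));
  try {
    const upstream = await fetch(target);
    res.writeHead(upstream.status, {
      'Content-Type': 'text/plain; charset=utf-8',
      'Access-Control-Allow-Origin': '*', // the header the feed's host doesn't send
    });
    res.end(await upstream.text());
  } catch (err) {
    res.writeHead(502, { 'Access-Control-Allow-Origin': '*' });
    res.end(String(err));
  }
}).listen(8080);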
`User-Agent` header appears to be fixed. \o/
There's an `OPTIONS` request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it's rejected with a 405 Method Not Allowed.
Your client could also send `If-Modified-Since` or `If-None-Match` request headers. This way, if the feed hasn't changed, the web server can reply with a 304 Not Modified and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you're aware of it, that's all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
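A sketch of that conditional-request idea, assuming the client keeps the validators from its last fetch around (the shape of the cache object is made up):

// Re-fetch a feed only if it changed: send back the validators from the last
// response so the server can answer 304 Not Modified with no body.
export async function fetchIfChanged(url, cached = null) {
  const headers = {};
  if (cached?.etag) headers['If-None-Match'] = cached.etag;
  if (cached?.lastModified) headers['If-Modified-Since'] = cached.lastModified;
  const res = await fetch(url, { headers });
  if (res.status === 304) return cached; // unchanged, keep using the old copy
  return {
    etag: res.headers.get('ETag'),
    lastModified: res.headers.get('Last-Modified'),
    text: await res.text(),
  };
}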