33 years ago, when the Portuguese Internet was still a child, the first Portuguese #Talker was born: a virtual community that, 33 years later, is still active.
Congratulations, #SelvaVirtual!
https://selva.grogue.org
Is the url metadata field unequivocally treated as the canonical feed url when calculating hashes, or is it ignored if it's not *at least* a proper url? Do you just tolerate it if it's impersonating someone else's feed, or pointing to something that isn't even a feed at all? If the url metadata field changes, should the change be logged with a timestamp so we can still calculate hashes for old posts? Or should it never be updated? (In the case of a pod, where the end user has no choice in how such events are treated.) Or do we redirect all the old hashes to the new ones? (Probably this, since it would be helpful for edits too.)
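For context, the reason the url field matters so much here: under the Twt Hash extension (as I understand it) the feed url is part of the hash payload, so changing it silently invalidates every old subject. A minimal sketch, assuming the blake2b/base32 scheme and using the blakejs package; function and variable names are mine:

import { blake2b } from 'blakejs';

const BASE32 = 'abcdefghijklmnopqrstuvwxyz234567';

// RFC 4648 base32, lowercase, no padding.
function base32encode(bytes) {
  let bits = 0, value = 0, out = '';
  for (const byte of bytes) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += BASE32[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) out += BASE32[(value << (5 - bits)) & 31];
  return out;
}

// Hash payload is "url\ntimestamp\ncontent"; keep the last 7 base32 chars.
function twtHash(feedUrl, timestamp, content) {
  const digest = blake2b(`${feedUrl}\n${timestamp}\n${content}`, undefined, 32);
  return base32encode(digest).slice(-7);
}

// Same twt, different feed urls -> different hashes, so old replies break.
console.log(twtHash('https://example.org/old.txt', '2025-01-01T00:00:00Z', 'hello'));
console.log(twtHash('https://example.org/new.txt', '2025-01-01T00:00:00Z', 'hello'));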
You can then keep the response.url value to fetch the feed again for updates without extra calls (store it verbatim, or as a flag so you can change the proxy later).
// Try fetching the feed directly; if that fails (e.g. a CORS or network
// error, which reject the fetch promise), retry through the given proxy.
export async function fetchWithProxy(url, proxy = null) {
  return await fetch(url).catch(err => {
    if (!proxy) throw err;
    return fetch(`${proxy}${encodeURIComponent(url)}`);
  });
}
// Usage:
const res = await fetchWithProxy('https://twtxt.net/user/zvava/twtxt.txt', 'https://corsproxy.io/?');
// Get the working url (direct or through proxy)
const fetchingURL = res.url;
// Get the twtxt feed content (or handle errors)
const text = await res.text();
A placeholder format like https://my-proxy?$TWTXT_URL allows you to define any proxy more freely, without requiring a fixed prefix format. Users could then point it at cors-anywhere or build their own proxy (with twtxt it should just be a GET).
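A rough sketch of expanding such a placeholder (the $TWTXT_URL token name comes from the post above; buildProxyUrl is a hypothetical helper):

// Substitute the token with the encoded feed url; fall back to the
// prefix style if no token is present.
function buildProxyUrl(template, targetUrl) {
  return template.includes('$TWTXT_URL')
    ? template.replace('$TWTXT_URL', encodeURIComponent(targetUrl))
    : template + encodeURIComponent(targetUrl);
}

// buildProxyUrl('https://my-proxy?$TWTXT_URL', 'https://twtxt.net/user/zvava/twtxt.txt')
// -> 'https://my-proxy?https%3A%2F%2Ftwtxt.net%2Fuser%2Fzvava%2Ftwtxt.txt'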
Collection of purple poppy seeds and two of the pods they came from. The seeds are in a labeled packet with the date, useful for when we decide to plant them
Most feeds don't send the Access-Control-Allow-Origin header, so I just jumped into building a backend instead. Did you find a way around this limitation? :o
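For reference, the "build a backend" route can be tiny: a GET-only relay that adds the missing header. A minimal Node sketch, assuming Node 18+ for the global fetch (the port and query parameter name are arbitrary):

import http from 'node:http';

http.createServer(async (req, res) => {
  const target = new URL(req.url, 'http://localhost').searchParams.get('url');
  if (req.method !== 'GET' || !target) {
    res.writeHead(400);
    res.end('expected GET /?url=...');
    return;
  }
  try {
    const upstream = await fetch(target);
    res.writeHead(upstream.status, {
      'Content-Type': upstream.headers.get('content-type') ?? 'text/plain',
      // The header most twtxt feeds are missing:
      'Access-Control-Allow-Origin': '*',
    });
    res.end(await upstream.text());
  } catch (err) {
    res.writeHead(502);
    res.end(String(err));
  }
}).listen(8080);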
Screenshot of the course page on the Sesc website
The User-Agent header appears to be fixed. \o/

I'm also seeing an OPTIONS request for my feed coming from something that claims to be Firefox, pointing to your feed URL in the query. No clue what this is about. In any case, it's rejected with a 405 Method Not Allowed.

One more thing: consider sending If-Modified-Since or If-None-Match request headers. This way, if the feed hasn't changed, the web server can reply with a 304 Not Modified and no body at all, saving unnecessary traffic. But again, this is really not an issue for me at all. I just wanted to make sure you're aware of it, that's all. It might even already be on your agenda. Or you might decide to never do anything about it, which is also fine for me. :-)
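For what it's worth, the client side of that is small. A sketch of a conditional fetch, assuming you cache the body together with the validators from the previous response (the names are mine):

async function fetchFeedConditionally(url, cached) {
  const headers = {};
  if (cached?.etag) headers['If-None-Match'] = cached.etag;
  if (cached?.lastModified) headers['If-Modified-Since'] = cached.lastModified;

  const res = await fetch(url, { headers });
  if (res.status === 304) return cached; // unchanged: no body transferred

  return {
    body: await res.text(),
    etag: res.headers.get('etag'),
    lastModified: res.headers.get('last-modified'),
  };
}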
I'm using localStorage and server-based file caching.
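A sketch of what the browser half of that could look like, assuming the cache entry shape from the conditional-fetch sketch above (the key prefix is arbitrary):

function loadCachedFeed(url) {
  const raw = localStorage.getItem(`feed:${url}`);
  return raw ? JSON.parse(raw) : null;
}

function storeCachedFeed(url, entry) {
  localStorage.setItem(`feed:${url}`, JSON.stringify(entry));
}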
zs looks pretty cool! I love simple static site generators and look forward to trying it on my next website project. Kudos!