Look, I'm sorry in advance if I'm not around here, but my week of vacation gets in the way 😎😎
If there are opposing opinions we either build a bridge or provide a new parallel road.
Also, I wouldn't call my opinion a "stance", I just wish for a better twtxt thanks to everyone's effort.
The final thing we need to do is decide on a proper format for the location-based version.
My proposal is to keep the "Subject extension" unchanged and include the reference to the mention like this:
// Current hash format: starts with a '#'
(#hash) here's text
(#hash) @<nick url> here's text
// New location format: valid URL-like + '#' + TIMESTAMP (verbatim format of feed source)
(url#timestamp) here's text
(url#timestamp) @<nick url> here's text
I think the timestamp should be referenced verbatim, to prevent broken references caused by multiple variations (especially with the many timezones out there); that would also make it even easier for everyone to implement.
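To make the idea concrete, here's a minimal sketch of building and parsing such a reference (the helper names are mine, not part of any spec):

```javascript
// Build a location subject from a feed URL and the timestamp string,
// copied verbatim from the source feed (no normalization at all).
function buildLocationSubject(feedUrl, rawTimestamp) {
  return `(${feedUrl}#${rawTimestamp})`;
}

// Split a location subject back into its URL and timestamp parts.
// The timestamp is everything after the *last* '#', so URLs that
// themselves contain '#' fragments would need extra care.
function parseLocationSubject(subject) {
  const match = subject.match(/^\((.+)#([^#]+)\)$/);
  if (!match) return null;
  return { url: match[1], timestamp: match[2] };
}
```

Because the timestamp is treated as an opaque string, two clients never have to agree on timezone handling to agree on the reference.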
I'm sure we can get @zvava, @lyse and everyone else to help on this one.
I personally think we should also consider allowing a generic format for building custom references. This would allow creating threads from any custom source (manual, computed, or externally generated), perhaps via a new "Topic extension". Here are some examples.
// New format for custom references: starts with a '!' maybe?
(!custom) here's text
(!custom) @<nick url> here's text
// A possible "Topic" parse as a thread root:
[!custom] start here
[custom] simpler format
This one is just an idea of mine, but I feel it can unleash new ways of using twtxt.
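For illustration, here's a tiny classifier that tells the three subject flavours apart (classifySubject is a hypothetical helper, and the '!' prefix is just the proposal above, not an accepted extension):

```javascript
// Classify a subject as hash-based, custom ('!'), or location-based.
function classifySubject(subject) {
  const inner = subject.slice(1, -1); // strip the '(' and ')'
  if (inner.startsWith('!')) return { kind: 'custom', ref: inner.slice(1) };
  if (inner.startsWith('#')) return { kind: 'hash', ref: inner.slice(1) };
  if (inner.includes('#')) {
    // Location subjects split at the last '#' (see earlier note).
    const i = inner.lastIndexOf('#');
    return { kind: 'location', url: inner.slice(0, i), timestamp: inner.slice(i + 1) };
  }
  return null; // not a recognized subject
}
```

Since each flavour has an unambiguous prefix, a client could support all three side by side without guessing.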
https://merankorii.bandcamp.com/track/the-alchemist
Dates in JavaScript are truly strange creatures.
https://nebula.tv/videos/maryspender-the-dire-straits-story-full-documentary
https://www.youtube.com/live/_g00S-a_0lo
-f bestvideo[height<=?1080]+bestaudio/best
One of the reasons I feel comfortable on Mastodon is that the men here don't seem as ultra-macho as in other places.
The difference is especially noticeable among the young.
Talking to young men in the outside world is brutally stressful. You never know how long it will take them to defend the techbro of the day.
Well, a studio bet on AI to make films a year ago. They've lost a year.
https://petapixel.com/2025/09/23/movie-studio-lionsgate-is-struggling-to-make-ai-generated-films-with-runway/
yt-dlp'ed https://www.youtube.com/watch?v=OZTSIYkuMlU. It's only worth it as an experiment; I don't recommend watching it.
Autumn is beginning, and with it the cold season.
I've already stocked up on masks, honey, lemon, ginger. Chocolate, soups.
Paracetamol, propolis, cough syrup, elderberry lozenges, Vicks VapoRub, and licorice.
Printi, 360imprimir, Rei do Sticker?
(in Brazil)
Screenshot of Rei do Sticker's page on Reclame Aqui
/me flips the bird at life
So, I've been working on 2 main twtxt-related projects.
The first is a small Node/Express application that serves up a twtxt file while allowing its owner to add twts to it (or edit it outright), and I've been testing it on my site since the night I made that post. It's still very much an MVP, and I've been intermittently adding features, improving security, and streamlining the code, with an eye toward releasing it after I finish an MVP of project #2 (the reader).
But that's where I've been struggling. The idea _seems_ simple enough - another Node/Express app (this one with a Vite-powered front-end) that reads a public twtxt file, parses the "follow" list, grabs (and parses) _those_ twtxt files, and then creates a river of twts out of the result. The pieces work fine in isolation (and with dummy data), but I keep running into weird issues when reading real, live twtxt files, so some twts come through while others get lost in the ether. I'll figure it out eventually, but for now, I've been spending far more time than I anticipated just trying to get it to work end-to-end.
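For the curious, here's a rough sketch of the parsing half, with made-up helper names (real twtxt feeds use `# follow = nick url` metadata comments and `timestamp<TAB>text` lines for the twts themselves):

```javascript
// Extract the follow list from a feed's metadata comment lines.
function parseFollows(feedText) {
  const follows = [];
  for (const line of feedText.split('\n')) {
    const m = line.match(/^#\s*follow\s*=\s*(\S+)\s+(\S+)/);
    if (m) follows.push({ nick: m[1], url: m[2] });
  }
  return follows;
}

// Extract the twts: every non-comment line of "timestamp\ttext".
function parseTwts(feedText, author) {
  const twts = [];
  for (const line of feedText.split('\n')) {
    if (line.startsWith('#') || !line.includes('\t')) continue;
    const tab = line.indexOf('\t');
    twts.push({ author, timestamp: line.slice(0, tab), text: line.slice(tab + 1) });
  }
  return twts;
}

// Merge several fetched feeds into one river, newest first.
// Note: string comparison only sorts correctly if all timestamps
// are ISO-8601 in the same offset; real feeds need real date parsing.
function buildRiver(feeds) {
  return feeds
    .flatMap(({ author, text }) => parseTwts(text, author))
    .sort((a, b) => b.timestamp.localeCompare(a.timestamp));
}
```

The timestamp-sorting caveat in the last comment is exactly the kind of thing that makes "real, live" feeds messier than dummy data.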
On top of that, the 2 projects have wound up turning into 4 (so far), as I've been spinning out little libraries to use across both apps (like https://jsr.io/@itsericwoodward/fluent-dom-esm, and a forthcoming twtxt helper library).
In the end, I'm hoping to have project 1 (the editor) into beta by the end of October, and project 2 (the reader) into beta sometime after that, but we'll see.
I hope this has satisfied your curiosity, but if you'd like to know more, please reach out!
Press play to hear from Marina Zurkow & James Schmitz [@hx2A@mastodon.art], the artists behind 'The River is a Circle (Times Square Edition)' - September's #MidnightMoment, a visual "combination of live data and a matrix of researched information about the Hudson River ecology," says Zurkow.
https://www.instagram.com/reel/DO6jbXrEdBG
#CreativeCoding #Processing #Python #py5 #TimesSquare #NYC
The single plum that my plum tree gave us in 2025.
In here it is all about control, and money.
I just got my renewed documents. Their expiration date says something like 01.09.40. Huh? That looks super weird to me, like an error. But no, it’s 2040 … Just 15 years away.
Magazine pages about Processing Night at Garoa Hacker Clube
https://www.catarse.me/variavel (for Brazil)
https://www.indiegogo.com/projects/variavel-magazine (for other countries)
#ProgramaçãoCriativa #CreativeCoding
Reproduction of magazine pages showing some of my drawings and the beginning of the interview.
The thread is defined by two parts:
1. The hash
2. The subject
The client/pod generates the *hash* and indexes it in its database/cache, then simply queries the subjects of other posts to find the related ones, right?
In my own client's current implementation (using hashes), the only calculation is the hash generation; the rest is a verbatim copy of the subject (minus the
# character). If this is the commonly implemented approach, then adding the location-based one is fairly simple.
function setPostIndex(post) {
  // Current hash approach
  const hash = createHash(post.url, post.timestamp, post.content);
  // New location approach
  const location = post.url + '#' + post.timestamp;
  // Unchanged (probably)
  const subject = post.subject;
  // Index them all
  addToIndex(hash, post);
  addToIndex(location, post);
  addToIndex(subject, post);
}
// Both should work if the index contains both versions
getThreadBySubject('#abcdef') => [post1, post2, post3]; // Hash
getThreadBySubject('https://example.com#2025-01-01T12:00:00') => [post1, post2, post3]; // Location
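Here's a runnable toy version of that idea, with a Map standing in for the database/cache (the post shape and helper names are illustrative, not from any client):

```javascript
// Toy thread index: each key (hash, location, or reply subject)
// maps to the list of posts that share it.
const index = new Map();

function addToIndex(key, post) {
  if (!index.has(key)) index.set(key, []);
  index.get(key).push(post);
}

function setPostIndex(post) {
  addToIndex('#' + post.hash, post);                 // hash key
  addToIndex(post.url + '#' + post.timestamp, post); // location key
  if (post.subject) addToIndex(post.subject, post);  // reply subject
}

function getThreadBySubject(subject) {
  return index.get(subject) ?? [];
}
```

A root post is then findable under both its hash and its location key, and replies join the thread through whichever subject flavour they used.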
As I said before, the mention is already location-based
(@<example https://example.com/twtxt.txt>), so I think we should keep that in consideration. Of course, this will lead to a bit of fragmentation (without merging the two), but I think it can make everyone happy.
Otherwise, the only other solution I can think of is a different approach where the value doesn't matter, allowing anything to be used as a reference (hash, location, git commit) for greater flexibility and freedom of implementation (this would probably require a fixed "header" for each post, but it could be treated as a separate extension).
I want us to preserve Content based addressing.
Let's improve the user experience and fix the hash collision problems.
(WTF, asciiworld-sat-track somehow broke, but I have not changed any of the scripts at all. O_o It doesn't find the asciiworld-sat-calc anymore. How in the world!? When I use an absolute path, the .tle is empty and I get a parsing error. Gotta debug this.)
1. I don't see any difference between the two schemes regarding link rot and migration. If the URL changes, both approaches are equally terrible: the feed URL is part of the hashed value in one and part of the reference in the location-based scheme. It doesn't matter.
2. The same is true for duplication and forks. Even today, the "canonical URL" has to be chosen to build the hash. That's exactly the same with location-based addressing. Why would a mirror only duplicate stuff with location- but not content-based addressing? I really fail to see that. Also, who is using mirrors or relays anyway? I don't know of any such software, to be honest.
3. If there is a spam feed, I just unfollow it. Done. Not a concern for me at all. Not the slightest bit. And byte verification is THE source of all broken threads when the start of a conversation is edited. Yes, this can be viewed as a feature, but how many times was it actually a feature rather than an anti-feature in terms of user experience?
4. I don't get your argument. If the feed in question is offline, one can simply look in local caches and see if there is a message at that particular time, just like looking up a hash. Where's the difference? Except that the lookup key is longer or compound or whatever depending on the cache format.
5. Even a new hashing algorithm requires work on clients etc. It's not that you get some backwards-compatibility for free. It just cannot be backwards-compatible in my opinion, no matter which approach we take. That's why I believe some magic time for the switch causes the least amount of trouble. You leave the old world untouched and working.
If these are general concerns, I'm completely with you. But I don't think that they only apply to location-based addressing. That's how I interpreted your message. I could be wrong. Happy to read your explanations. :-)