# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 6164
# self = https://watcher.sour.is?uri=https://lyse.isobeef.org/twtxt.txt&offset=5516
# next = https://watcher.sour.is?uri=https://lyse.isobeef.org/twtxt.txt&offset=5616
# prev = https://watcher.sour.is?uri=https://lyse.isobeef.org/twtxt.txt&offset=5416
@thecanine Woof-woof! If it's already perfect, no need to disimprove. :-)
Awesome, "unable to open database file: out of memory (14)" actually means that the SQLite file cannot be created, because the parent directory does not exist. Bonus points for Open(…) being successful and only executing the first command giving me that error. Meh.
@bender Haha, the easter bunny brought me a Bad Gateway.
@bender Over here, people can put red ribbons on their fruit trees to signal that they are free for everyone to use. That's an effort to minimize the giant food waste. Meadow orchard owners who do not have the time or energy anymore to harvest themselves (I reckon a lot of them are getting on in years nowadays) can ensure that the tasty things do not simply rot away. Also, the town hangs those ribbons on trees on municipal properties.

They introduced these ribbons a few years back. It's a really cool system. The colors of the ribbons vary from town to town. It seems most actually use yellow ribbons. The rules are to be respectful, only take what you really need (common household amounts) and be careful: don't break branches, don't trample down taller grass, watch out for plants and animals, etc. Sometimes, a tree owner only grants access to a few trees. So, you're only allowed to take from the explicitly marked ones. I mean, common sense really, don't be an asshole. :-)

We just pick up what has fallen down. You're also allowed to pick directly from the tree, but the apples on the ground are already fully ripe. Or bad, but you can typically distinguish between the two rather easily. The apples that fall down early are usually full of worms. Later on, it's the ripe ones. Yeah, if a ripe one lands in a patch of spoiled ones, it also goes bad fairly quickly. So, it pays off to visit regularly and check.

Not all apples are equal, though. It's important to check the variety before gathering them. Cider apples are worthless to us. They just taste awful. Typically, these are the tiny ones, but there are also some tiny ones which are actually very delicious. So, a taste test is mandatory.

Then, for apple sauce, we just wash off the occasional dirt on the apples at home. Typically, you can get rid of the worst already by wiping them on the grass when picking. We simply cut them in quarters, bigger apples also in eighths. Bad spots and the cores are removed. To avoid oxidation, we throw them in a bowl of water with citric acid. Once that bowl is full, we transfer them into a big pot. Rinse and repeat.

The pot has some water in it, so the apples do not scorch. Shortly before we finish cutting the apples, the stove is heated. Then, we just let the whole mass heat up. Don't forget to stir every now and then. The longer it simmers, the easier it gets to actually stir the now softer mass. It also sinks down a bit. You can also use a potato masher to help get some sort of a pulp.

When the pulp is fairly soft, it's pressed through a strainer. People here call the food mill "Flotte Lotte" (quick Charlotte) after a brand name. We use the tiniest sieve with 1mm holes. Unfortunately, there's no smaller one. But it gets 99.99% of the junk out: skin, missed seeds, all the coarse stuff. After each load, the food mill has to be cleared of pomace, so it doesn't plug up all the holes or, worse, press the coarse crap through.

For some strange reason we have not figured out, we got quite a bunch of skin pieces in the apple sauce on Wednesday. Somehow they managed to get through. Very strange, this has never happened before. To filter them out, we just passed the whole thing through the Flotte Lotte a second time.

Around 10% sugar by weight is added to help preservation. A pinch of cinnamon and then it's basically ready when mixed up properly.

Fill the apple sauce into jars and make sure to leave enough space for some expansion when they get cooked in a moment. Wipe any spilled sauce from the glass rims, close the lids with a rubber seal and clamp 'em shut. The jars are placed in a big pot or "Einkochautomat" (translates roughly to preserving machine). It's a large pot that is electrically heated and automatically maintains the temperature using a thermostat. The water level has to reach about 2/3 up the top layer of jars (they can be stacked). Any higher is unnecessary and just wastes water. The jars get cooked for half an hour at 90°C. Then, they can be lifted out with a pair of jar tongs. After cooling down, the clamps are removed. If a jar hasn't sealed properly, you notice it right away.

The last thing is to label and store them in the cellar or somewhere.

Eventually, pull on the rubber seal's tab to open a jar, put the apple sauce on a waffle or something else and enjoy the blast of taste in your mouth. :-)

Oh, that text got a wee bit longer than anticipated. 8-)
Made the first apple sauce of the season in around three to four hours of work. Pretty cool, very, very little waste. The jars are currently cooking.
@bender Yes, a proposal alone is certainly not enough, but a good start. Absolutely necessary in my opinion. With everything just up in the air and constantly changing (at least it appears that way to me), I'm lost.

I have the feeling that the hashing part is the most important one that should be sorted first.
@quark I definitely agree with the first part. Not so sure about the second one. Maybe it then turns out miserable, too. :-?
@bender I do hope that it ends up fancy! But maybe it turns out rather crappy. Metal working is definitely beyond my capabilities. I just find it super fascinating.
I'd also appreciate if somebody wrote a proposal. It's very hard to piece everything together across all those many conversations.
@david I plan on building an X-Y table. But with these leadscrew prices, I might as well just buy a whole import table altogether.
Oh boy, I'm looking for trapezoidal (like ACME thread) screws and nuts in left hand form. The rods are already expensive, but nuts feel like a total ripoff. A hex nut for Tr20x2 being 30mm long and 30mm in "diameter" costs me 22 bucks! O_o Just a single one, made of regular steel. A meter of rod is 21€. The more common Tr20x4 hex nut is just 7€ and the rod 17€, but 4mm pitch is a bit much for a leadscrew for semi-precision work I reckon.

Well, maybe I just use metric threads. I will sleep on this.
@prologic That can only work if I happen to have the original one as well. But what are the odds for that? Quite low, I'd say. It's rare that I see a once-working thread go cactus later on. Usually, when I arrive, the police have already broken up the party. Yarnd might be luckier in that it constantly pulls, but I don't.

Anyway, I won't implement that in my client. Sounds too much effort for the tiny gain.
Ta, @movq and @bender! No, that is Wäschenbeuren: https://en.wikipedia.org/wiki/W%C3%A4schenbeuren My town is in the opposite direction.

And yes, it literally took hours to remove 90% of the photos. It's a necessary evil. I'm never looking forward to the sorting process. The longer the hike, the worse the aftermath.

We had 3°C the other night, quite cold. That's the price to pay for the nice temperatures at daytime.
@prologic I'm afraid, I don't understand how the edit detection works so that it does not break threads. All I see is that some hash in a subject is missing.
Thank you very much, @prologic! <3 When leaving the unpleasant towns, one can really enjoy the stunning landscape here. Very refreshing.

Yep, these are some sick mushrooms. No idea what they are, though. Not sure if they're edible more than once or not, but I have a feeling that one should refrain from trying. The ones I photographed here were in a nature reserve. They were a bit bigger than the others we came across on meadows. Still impressive sizes nevertheless.
Yesterday's April weather offered nearly everything: sun, rain, clouds, wind. Luckily, the rain wasn't too bad, we had brought our rain jackets as a precaution and took cover under some trees for 5-10 minutes. From then on, it alternated mostly between sunny and cloudy. Perfect conditions for photography.

The 16°C felt pretty cold with all the wind. Especially at the summit for a late lunch. The clouds covered the sun for almost the entire time and the wind blew hard. Being sweaty from the way up didn't help. The sun returned as soon as we packed up.

On the way home, it drizzled just a little bit, although the clouds were really dark. A nice surprise. All in all, we had a really nice hike. As a bonus, my mate set a new record low for the train ride home, despite all the Oktoberfest crap going on right now.

Colorful leaves on a tree

From my 395 photos, I only kept 40: https://lyse.isobeef.org/waldspaziergang-2024-09-28/ In 18's upper left corner you can see a black beetle similar to what I've seen earlier this week. That earlier one rolled over its side to change directions; this one didn't, though.

The mushroom in 35 and 36 was enormous, easily 20 centimeters in diameter. We came across a few of them along our journey.
@prologic Yeah, we're out around this period, so the odds of me even joining at the end are pretty much zero.

But that shouldn't matter too much, as y'all know my point of view. I'm in the not so popular simplicity camp. ;-)

In any case, I wish you all some great fun and good discussions! :-)
Please don't turn twtxt into corporate mail hell. :-(
@aelaraji @bender Bwahahahaa, brilliant! :'-D
@bender Hahaha, I had to look this idiom up, but you're spot on. :-D
I heard a funny saying today: Democracy is when three foxes and a bunny decide what to have for dinner.
I can't make it as I'm on a hike with a mate.
@prologic How is nick@domain any better than a feed URL? Changing the nick now also breaks threading. That's even worse than the current approach. Also, there might be multiple feeds with the same nick on one domain, e.g. on free hosters.
Phew! I now finally called it a day as well. Our customer wanted me to emergency-start implementing some changes. Got an initial version with unit tests, but the final testing must wait until Monday.
@mckinley I could have sworn that it resumed even a partial file the other week. But maybe that was because the first attempt used scp when the connection broke. And then rsync detected that only the last part of that file was incomplete and transferred the missing bits. So, lucky by accident. In any case, I will always include -P from now on. :-)
Ah, I see! Thanks, @bender.
@david Sounds lovely. :-)

We had rain all day long and my mate and I still went for a walk with our umbrellas. It was a bit wet. But now I can send my drying rack over the tub on its maiden voyage. Should have built a second rod for more capacity.
@david Enjoy the day off and fingers crossed that you survive without damages. Stay safe!
Good writeup, @anth! I agree with most of your points.

3.2 Timestamps: I feel no need to mandate UTC. Timezones are fine with me. But I could also live with this new restriction. I fail to see, though, how this change would make things any easier compared to the original format.

3.4 Multi-Line Twts: What exactly do you think is bad about multi-line twts?

4.1 Hash Generation: I do like the idea of a new uuid metadata field! Any thoughts on two feeds selecting the same UUID for whatever reason? Well, the same could happen today with url.

5.1 Reply to last & 5.2 More work to backtrack: I do not understand anything you're saying. Can you rephrase that?

8.1 Metadata should be collected up front: I generally agree, but if the uuid metadata field were a feed URL and not a real UUID, there should probably be an exception allowing the feed URL to change mid-file after relocation.
I passed a mountain biker with a helmet camera in the forest, saw a four-centimeter-long black beetle that rolled over its side to change directions and finally spotted three deer on the paddock. An hour well spent, I reckon.
Finally! After hours I figured out my problems.

1. The clever Go code to filter out completely read conversations conflicted with the filtering now moved into SQL. Yeah, I also did not think that this could ever clash. But it did. Initializing the completeConversationRead flag to true now got in my way and caused a conversation to be removed. Simply deleting all the code around that flag solved it.

2. Generation of missing conversation roots in SQL simply used the oldest (smallest) timestamp from any direct reply in the tree. To find the missing roots I grouped by subject and then aggregated using min(created_at). Now that I optimized this to only take unread messages into consideration in the first place, I do not necessarily see the smallest child anymore (when it's already read), so the timestamp is then moved forward to the next oldest unread reply. As I do not care too much about an accurate timestamp for something made up, I just adjusted my test case accordingly. Good enough for me. :-)

It's an interesting experiment with SQLite so far. I certainly did learn a few things along the way. Mission accomplished.
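
For anyone curious, the root synthesis boils down to a query roughly like this sketch, assuming a messages(hash, subject, created_at) schema where subject holds the parent twt's hash and '' marks a root (my real query carries more columns):

// Hypothetical sketch, not my exact code: find parents that are
// referenced by replies but absent from the cache, and date each
// synthesized root with the oldest reply the query can still see.
const missingRoots = `
SELECT m.subject         AS hash,       -- the absent parent
       MIN(m.created_at) AS created_at  -- oldest visible reply wins
  FROM messages m
  LEFT JOIN messages p ON p.hash = m.subject
 WHERE m.subject > ''    -- replies only
   AND p.hash IS NULL    -- parent is not in the cache
 GROUP BY m.subject`
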
@prologic Ta! Somehow, my unit tests break, though. Running the same query manually looks like it's producing a plausible result. I do not understand it.
@david As far as I understand it, auto-completion *is* working, that's the issue. :-D Instead of spamming the terminal with bucketloads of possibilities, zsh's auto-complete is nice enough to ask whether to proceed or not.
@david Weird, I always thought that rsync automatically resumes the up- or download when aborted. But the manual indicates otherwise with --partial (-P is --partial --progress).
@prologic I reckon I could just hash the subject internally to get a shorter version.
Three feeds (prologic, movq and mine) and my database is already 1.3 MiB in size. Hmm. I actually got the read filter working. More on that later after polishing it.
@aelaraji @mckinley rsync -avzr with an optional --progress is what I always use. Ah, I could use the shorter -P, thanks @movq.
@movq Interesting, it's always good to know how things work under the hood. But I'm very glad that I do not have to deal with this low-level stuff. :-)
@prologic @movq Luckily, we were only touched by the thunderstorm cell. Even though the sky lit up a bunch and the thunder roared, there were no close thunderbolts. But it rained cats and dogs. The air smelled lovely.
@eapl.me All the best, see you next life around. :-) On Twtxt I only meet my online friends. I'm staying in touch with some of my real life mates on IRC or e-mail. But that's fine. That's just how it goes.

Thanks, @bender. :-)
@aelaraji Hahaha, brilliant! :-D
We're now having a thunderstorm with rain, lightning and thunder and the severe weather map shows all green. I'd expect it to be violet.
Okay, I figured out the cause of the broken output. I had also replaced the first subject = '' for the existing conversation roots with subject > ''. Somehow, my brain must have read it as subject <> ''. That equality check should not have been touched at all. I just updated the archive for anyone who is interested to follow along: https://lyse.isobeef.org/tmp/tt2cache.tar.bz2 (151.1 KiB)
@prologic Yeah, relational databases are definitely not the perfect fit for trees, but I want to give it a shot anyway. :-)

Using EXPLAIN QUERY PLAN, I was able to create two indices to avoid some table scans:

CREATE INDEX parent ON messages (hash, subject);
CREATE INDEX subject_created_at ON messages (subject, created_at);

Also, since strings are sortable, instead of str_col <> '' I now use str_col > '' to allow the use of an index.
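
In case anyone wants to poke at their own queries, here is a hedged Go sketch (assuming database/sql) for printing the plan and checking that a predicate like subject > '' actually hits an index; the interesting part is the detail column, which reads something like "SEARCH messages USING INDEX subject_created_at (subject>?)":

// explain prints SQLite's plan for a query. The column layout of
// EXPLAIN QUERY PLAN differs between SQLite versions, so we scan
// generically instead of assuming column names.
func explain(db *sql.DB, query string) error {
    rows, err := db.Query("EXPLAIN QUERY PLAN " + query)
    if err != nil {
        return err
    }
    defer rows.Close()

    cols, err := rows.Columns()
    if err != nil {
        return err
    }
    vals := make([]any, len(cols))
    ptrs := make([]any, len(cols))
    for i := range vals {
        ptrs[i] = &vals[i]
    }
    for rows.Next() {
        if err := rows.Scan(ptrs...); err != nil {
            return err
        }
        fmt.Println(vals...)
    }
    return rows.Err()
}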

But somehow, my output seems to be broken at the end for some reason, I just noticed. :-? Hmm.

The read status still gives me a headache. I think I either have to filter in the application or create more metadata structures in the database.

I'm wondering if anyone here has already used particular storage engines for tree data.
@prologic I see. I reckon it makes sense to combine 1 and 2, because if we change the hashing anyway, we don't break things twice.
This organigram example got me started: https://www.sqlite.org/lang_with.html#controlling_depth_first_versus_breadth_first_search_of_a_tree_using_order_by

But I feel execution times get worse rather quickly the more data I add. Also, caching helps tremendously: executing it for the first time took over 600ms, from then on I'm down to 40ms.

I think it's particularly bad that parents might be missing. Thus, I cannot use an index, because there is no parent to reference. But my database knowledge is fairly limited, so I have to read up on that.
There you go, @prologic, the SQLite database (with a bit more data now) and the sqlitebrowser project file containing the query: https://lyse.isobeef.org/tmp/tt2cache.tar.bz2 (133.9 KiB)
@falsifian I agree. It's an optional header.
@movq Oha! @bender Happy cooling off!
@prologic Well, mentions are also quite lengthy as they always include the feed URL. I know, that's not a good argument.

I just got a very, very wild idea that I have not put any brain power into, so it might be totally stupid: Since many replies also mention the original feed, maybe a mention and thread identifier could be combined, something like: @<nick url timestamp>. But then we would also need another style if one does not want to mention the original author.

So, scratch that. But I put it out there anyway. Maybe this inspires someone else to come up with something neat.
@prologic Not sure how many actually care about a 140 character limit. I don't. Not at all.
@prologic I'm wondering what exactly you mean by incremental changes, what are the individual ones? What do you have in mind?
@prologic I find it quite hard to rank the facets. Some go hand in hand or depend on the protocol that a feed is offered over. I feel some are only relevant to specific clients. I'm sure people interpret some of them differently.

I'm curious, is it possible to see each individual poll submission?
I'm experimenting with SQLite and trees. It's going well so far with only my own 439-message main feed from a few days ago in the cache. Fetching these 632 rows took 20ms:

SQL query to build up the conversation trees in the cache

Now comes the real tricky part, how do I exclude completely read threads?
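
The query behind that screenshot is built on a recursive CTE; a sketch of its skeleton, assuming a messages(hash, subject, created_at) table where subject holds the parent twt's hash and '' marks a conversation root (the real thing selects more columns):

// Hypothetical skeleton of the tree query, not the exact one I run.
const threadTree = `
WITH RECURSIVE thread(hash, created_at, level) AS (
    SELECT hash, created_at, 0
      FROM messages
     WHERE subject = ''                    -- conversation roots
    UNION ALL
    SELECT m.hash, m.created_at, t.level + 1
      FROM messages m
      JOIN thread t ON m.subject = t.hash  -- direct replies
     ORDER BY 3 DESC, 2                    -- depth-first, oldest sibling first
)
SELECT hash, created_at, level FROM thread`
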
@movq Heaps of mozzies and other stuff that wants to eat you. Yeah, I noticed that as well. But I don't know if it's really more than usual. I might just have forgotten how bad it was in the past by now. :-?

With the wet beginning this year, water-loving insects certainly got a head start.
Voilà: https://git.mills.io/yarnsocial/yarn/pulls/1181
@prologic Correct. The plan is that operators have to manually trust a peer before missing conversation roots are fetched from it. Preview of the horrible UI:

New trust level management in the Peer Management page
@bender Yeah, it was nice. 23°C and a bit of wind. Quite acceptable in my opinion. :-)
@prologic @movq In all reality, even seconds precision would be enough for this new feed announcement bot. It just has to delay or predate its messages. It hopefully does not find new feeds all the time. :-)
@prologic What should happen if the archive chain is detected to be broken? I don't think that including the hash in the prev field really helps us in practice. What if messages in the archive feed themselves got lost? You can't detect this unless you've already known about them. I reckon we can simply use the relative path and call it good. I know, I know, we have this format already today. But in my opinion, the hash does not add value.
@prologic The Content-Type should probably even include the charset=utf-8 as we learned recently. :-) Iff you want to keep the UTF-8 encoding mandatory. It doesn't say anything about it in that document.
@prologic The reply-to can come anywhere in the message text? Most examples even put it at the very end. Why relax that? It currently has to be at the beginning, which I think makes parsing easier. I have to admit, at the end makes reading the raw feed nicer. But multi-line messages with U+2028 ruin the raw feed reading experience very quickly.
@prologic For hash calculation we could maybe rethink the newlines and use tabs instead. This is more in line with the twtxt file format itself. With tabs it also is much closer to the registry format (minus the nick).

What about the timestamp format? Just verbatim as it appears in the feed (what I would recommend) or any other shenanigans with normalization, like +00:00 → Z?

An append style is not required, btw. If one uses prepend style feeds, the new URL simply comes at the beginning of the file, where the old URL is further down.

Clients must use the full-length hash in their storages, but only use the first eleven digits when referencing? This differentiation is a bit odd.
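
To make the tab idea concrete, here is a sketch of the hashing as I understand the current scheme (blake2b-256 over URL, timestamp and text joined by newlines, base32-encoded lowercase without padding, last seven characters), with the separator swappable; feed URL, timestamp and text are made up:

package main

import (
    "encoding/base32"
    "fmt"
    "strings"

    "golang.org/x/crypto/blake2b"
)

// twtHash sketches the hash computation; sep would be "\n" today and
// "\t" under the idea above.
func twtHash(url, timestamp, text, sep string) string {
    payload := strings.Join([]string{url, timestamp, text}, sep)
    sum := blake2b.Sum256([]byte(payload))
    enc := base32.StdEncoding.WithPadding(base32.NoPadding)
    hash := strings.ToLower(enc.EncodeToString(sum[:]))
    return hash[len(hash)-7:] // the short form used in subjects
}

func main() {
    fmt.Println(twtHash("https://example.com/twtxt.txt",
        "2024-09-29T13:30:00+02:00", "Hello twtxt!", "\n"))
    fmt.Println(twtHash("https://example.com/twtxt.txt",
        "2024-09-29T13:30:00+02:00", "Hello twtxt!", "\t"))
}
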
@prologic The multi-line example is broken. I don't see any "pipes".
@prologic I notice that in your document it says reply-to, where in the ReplyTo Extension it's without the hyphen. (But they also use different values after the colon. :-))
Thanks again for typing it up, @movq! I left a few comments there. Currently, I'm in favor of the location-based addressing, that's heaps simpler.
@sorenpeter Excellent point! I agree.
@bender @prologic @aelaraji Everything coming in via Pod Gossiping is only cached temporarily, but never archived. So, it eventually fell off the cache. If my fake feeds were still up, yarnd would have pulled it from me again. I ran into the situation locally as well and then got it back, though.
@movq Awesome, thank you very much! I'll have a look at it tomorrow.
It was beautiful in nature: https://lyse.isobeef.org/waldspaziergang-2024-09-21/

Fresh hay bales on a field
@prologic Let me try:

Invent anything you want, say feed A writes message text B at timestamp C. You simply create the hash D for it and reply to precisely that D as subject in your own feed E with your message text F at timestamp G. This gets hashed to H.

Now then, some client J fetches your feed E. It sees your response from time G with text F where in the subject you reference hash D. Since client J does not know about hash D, it simply asks some peers about it. If it happens to query your yarnd for it, you could happily serve it your invention: "You wanna know about hash D? Oh, that's easy, feed A wrote B at time C."

The client J then verifies it and since everything lines up, it looks legitimate and puts this record in its cache or displays it to the user or whatever. It does not even matter if the client J follows feed A or not. The message text B at C with hash D could have just been deleted or edited in the meantime.

Congrats, you successfully spread rumors. :-D
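
With the twtHash sketch from a few twts up, the whole "verification" collapses to recomputing a pure function over attacker-chosen inputs, which is exactly why the rumor sticks (feed URL, timestamp and text below are all invented):

// spreadRumor mints a perfectly "valid" record for feed A without A
// ever being involved. Serving {A, C, B} for hash d passes every
// recomputation check a client can do.
func spreadRumor() string {
    d := twtHash(
        "https://feed-a.example/twtxt.txt", // feed A, hypothetical
        "2020-01-01T00:00:00Z",             // timestamp C, invented
        "something feed A never wrote",     // text B, invented
        "\n")
    return d
}
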
@prologic This does not hold if the edit happened before I even got the original.
@falsifian Something similar exists over at https://search.twtxt.net/. But a usable search engine would actually be nice (to be fair, yarns has improved a bit). :-) I don't care about feed changes over time. In fact, it would even feel creepy to me. Of course, anyone could still surveil, but I'm not looking forward to these stats.
@movq We could still let the client display a warning if it cannot verify it. But yeah.
@movq Reminds me of this beautiful face recognition failure: https://qz.com/823820/carnegie-mellon-made-a-special-pair-of-glasses-that-lets-you-steal-a-digital-identity :-D
@prologic What exactly?
@prologic Just what @bender did. :-D If he'd additionally serve the fake message from his yarnd twt endpoint, everybody querying that hash from him (or any other yarnd that synced it in the meantime) would believe that I didn't like Australians.

In fact, I really don't. I love 'em! 8-)

We would need to sign each message in a feed, so others could verify that this was actually part of that feed and not made up. But then we end up in the crypto debate for identities again, which I'm not a big fan of. :-)

I just want to highlight, one might get a false sense of message authenticity, if one just briefly looks at the hashes.
@movq Ah, cool. :-)
It just occurs to me we're now building some kind of control structures or commands with (edit:…) and (delete:…) into feeds. It's not just a simple "add this to your cache" or "replace the cache with this set of messages" anymore. Hmm. We might need to think about the consequences of that, can this be exploited somehow, etc.
@movq Not sure if I like the idea of keeping the original message around. It goes against the spirit of an edit in my mind.

If that's what we want to enforce, forget about my other message above in the thread.
@prologic @movq I still don't understand it. If the original message has been replaced with the edited one, I cannot verify that the original was in the same feed. I don't know the original text.
Hahahahahaahaaaahaaaaaa, brilliant! I love it, @bender! :'-D
@movq Thanks for the summary!

So, what would happen if there is no original message anymore in the feed and you encounter an "edit" subject? Since you cannot verify that the feed contained it in the first place, would you obey it?

Some feed could just make a client update something from a different feed. In the cache, the client would need to store a flag noting that this message was updated, so that when it later encounters the message from the real feed, it has a chance of reverting that bogus edit. Hmm. The devil is in the detail.

It's much easier with a delete subject. When it finds the message in its cache and the feeds match, remove it. Otherwise, just ignore it.
@movq Right. That's why I'd bite the bullet and go for huge URLs. :-)

I haven't looked at the code and I'm too lazy right now: does jenny also verify the fetched result against the hash?
@movq Yeah, but hashing also uses the main feed URL or whatever is written in the feed's first url metadata field. So, it's not a new problem, it's exactly the same.
@movq @david Yeah, he got a bit older but I could still easily recognize him.
Another thing: At the moment, anyone could claim that some feed contained a certain message which was then removed again, by simply computing the hash over the fake message, said feed's URL and an invented timestamp. Nobody can ever prove that the message never existed and that the claim is completely made up. So, our twt hashes have to be taken with a grain of salt.
@david Cool idea actually! The hash would also be shorter than the raw URL and timestamp.
@prologic I get where you're coming from. But is it really that bad in practice? If you follow any link somewhere in the web, you also don't know if its contents have been changed in the meantime. Is that a problem? Almost never in my experience.

Granted, it's a nice property when one can tell that it was not messed with since the author referenced it.
@movq The more I think about it, the more do I like the location-based addressing. That feels fairly in line with the spirit of twtxt, just like you stated somewhere else.

The big downside for me is that the subjects then become super long.

And if the feed relocates, we end up with broken conversation trees again. Just like nowadays. At least it's not getting worse. :-)

Using the feed URL in there might become a little challenging for new folks, when the twt rotates away into archive feeds. But I reckon, we already have a similar situation with the hashes. So, probably not too bad.
@quark Yeah, let's see what they reveal!
Nice, @david! The winter palms look nice. And the sky is full of snow.
Yesterday, both temperature and wind picked up. There was even wind in the night, which is rare over here. Today, we also got a lot of sunshine, around 22°C and heaps of wind. The leaves and twigs were blown up against the house door; it reminded me of a snow drift, basically a leaf bank. I should have taken a photo before I swept it, it looked quite bizarre.

But I photographed something else instead:

Possibly a large roof panel on a crane

My mate and I went out in the woods earlier and we came across 08, which broke off at roughly 6 or 7 meters up from 09. When it hit the ground, it made a 30 cm deep hole. Quite impressive. https://lyse.isobeef.org/waldspaziergang-2024-09-19/
@falsifian Yeah, delete requests feel very odd.
@prologic I wish that was true! But I reckon there is still heaps of old stuff out there, that was created on a Windows machine. :-D And I wouldn't be surprised if even today in that environment a new file does not make use of UTF-8.
@quark I'm not convinced. :-D
@quark @movq Yep, they're all RFC3339. Obviously, +02:00 and +01:00 are best, because I use them! :-P In all seriousness, Z might be the best timezone, as it is shortest. And regarding privacy, it leaks the least information about the user's rough location. But of course, one can just look at the activity and narrow down plausible regions, so that's a weak argument.
@falsifian I can confirm, it's fixed. Thank you! Indeed, this is some wild quoting.

I still do not understand why the encoding suddenly broke, though. :-? Anyway. I concentrate on my rewrite and do things the right™ way. ;-) Still a long way to go.
@bender I know, I know… A relative time in a static HTML document is questionable at best. ;-)
Now WTF!? Suddenly, @falsifian's feed renders broken in my tt Python implementation. Exactly what I had with my Go rewrite. I haven't touched the Python stuff in ages, though. Also, tt and tt2 do not share any data at all.

By any chance, did you remove the ; charset=utf-8 from your Content-Type: text/plain header, falsifian?

Because then the feed gets interpreted in some crappy Windows charset.