# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 196269
# self = https://watcher.sour.is?offset=170456
# next = https://watcher.sour.is?offset=170556
# prev = https://watcher.sour.is?offset=170356
@mckinley Thanks for the feedback.

1. Yeah, I agree that the nick should not be part of the syntax. Any valid URL to a twtxt.txt file should be enough and is clearer, so it is not confused with an email address (one of the issues with webfinger and fediverse handles)
2. I think any valid URL would work, since we are not bound to look for exact matches. Accepting both http and https, as well as gemini and gopher, could all work as long as the path to the twtxt.txt is the same.
3. My idea is that you quote the timestamp as it is in the original twtxt.txt that you are referring to, so you can do it by simply copy/pasting. Also, what are the chances that the same _human_ will make two different posts within the same second?!

Regarding the whole cryptographic keys for identity, to me it seems like an unnecessary layer of complexity. If you move to a new house or city you tell people that you moved - you can do the same in a twtxt.txt. Just post something like "I moved to this new URL, please follow me there!" I did that with my feeds at least twice, and you guys still seem to read my posts :)
[47°09′15″S, 126°43′37″W] Bad satellite signal -- switching to analog communication
The tag URI scheme looks interesting. I like that it's human read- and writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

Instead of using tag: as the prefix/protocol, it would make it clearer what we are talking about by using in-reply-to: (https://indieweb.org/in-reply-to) or replyto: similar to mailto:

1. (reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
2. (in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
3. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I know it's longer than 7-11 characters, but it's self-explaining when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*:

Is this something that would work?
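As a quick sanity check, here is a minimal Python sketch showing that the single regex matches all three reply-marker variants listed above (the sample subjects are copied from that list; nothing else is assumed):

```python
import re

# Matches an opening parenthesis, any word characters or hyphens around
# the substring "reply", then a colon - i.e. "(reply:", "(in-reply-to:",
# and "(replyto:" all match.
REPLY_MARKER = re.compile(r"\([\w-]*reply[\w-]*:")

subjects = [
    "(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)",
    "(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
    "(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
]

for s in subjects:
    assert REPLY_MARKER.match(s), s

print("all matched")
```

A plain hash subject like "(#abcdefg)" does not match, so the two styles could even coexist during a transition.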
@prologic When the next hype train departs. :-)
Thank you @aelaraji, I'm glad you like it. I use PHP because it's everywhere on cheap hosting and there's no need for the user to log into a terminal to set it up. Timeline is not meant to be used locally. For that I think something like twtxt2html is a better fit. (and happy to see you using simple.css on your new log page ;)
On my blog: Chosen https://john.colagioia.net/blog/2024/09/15/chosen.html #fiction #freeculture
[47°09′52″S, 126°43′08″W] Dosimeter fixed
That's an interesting side effect of the new Discover feature that I added some time ago that only displays one post per feed. That is, when you're not logged in and viewing my pod's front page. You can pretty easily and roughly see what the monthly active user count is just by looking at the pager size. 🤔
Amazingly though it seems to be slightly better to VPN in. 🤔
But you know, I believe speedtest.net is a bit of a liar, and I'm quite sure they do something to make sure the speed tests come up good even in remote areas. The real speed of the actual infrastructure is quite piss poor 🤣
Even though we're quite a ways from any suburban areas, even with the Internet access via cell towers this poor, using my pod is still very snappy. 👌
When will the AI hype die down?
@lyse Thanks!
@stigatle Yeah, the sudden drop makes it feel worse than it is. It made me wear a beanie and gloves on my bike ride on Friday evening. In a few weeks I'll consider the same temperatures not an issue anymore, maybe even nicely warm. ;-) The body is fairly quick to adapt, but not that fast.

I just saw that we're supposed to hit 19°C mid next week again. Let's see.
@off_grid_living Oh dear, what an epic adventure! Terrible at the time, but hilarious to tell later on. :-D

I do like this photo a lot. It brings up memories of cool scouting trips.
@off_grid_living Hahaha, this is really great, I love it! :-D
[47°09′14″S, 126°43′09″W] Dosimeter malfunction
@off_grid_living Still a bit different, but this reminds me of the rusk boy on the Brandt boxes which is kinda iconic over here: https://cdn.idealo.com/folder/Product/2151/8/2151814/s1_produktbild_max/brandt-der-markenzwieback-225-g.jpg They should switch to this photo. :-)
@off_grid_living It's kinda cool to see how small cars were back in the days. Especially the left one looks really tiny.
Happy birthday @prologic! :-)
[47°09′50″S, 126°43′17″W] Saalmi, retransmit, please
@falsifian One of the nice things I think is that you can almost assuredly trust that the hash is a correct representation of the thread, because it was computed via our content addressing in the first place, so all you need to do is copy it 👌
@bender 🤣
Well, we can’t have it both ways! 😅 Should we assume twtxt are read by clients, and not worry about something humans won’t see? 🤭
@falsifian Yeah that's why we made them short 😅
@prologic Brute force. I just hashed a bunch of versions of both tweets until I found a collision.

I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.
🧮 USERS:1 FEEDS:2 TWTS:1093 ARCHIVED:78768 CACHE:2438 FOLLOWERS:17 FOLLOWING:14
@falsifian I think I wrote a very similar program in Go myself, actually, and you're right, we do have to change the way we encode hashes.
@falsifian All very good points 👌 By the way, how did you find two pieces of content that hash the same when taking the last N characters of the base32 encoded hash?
@prologic earlier you suggested extending hashes to 11 characters, but here's an argument that they should be even longer than that.

Imagine I found this twt one day at https://example.com/twtxt.txt :

2024-09-14T22:00Z\tUseful backup command: rsync -a "$HOME" /mnt/backup screenshot of the command working

and I responded with "(#5dgoirqemeq) Thanks for the tip!". Then I've endorsed the twt, but it could later get changed to

2024-09-14T22:00Z\tUseful backup command: rm -rf /some_important_directory screenshot of the command working

which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed url, but I'm taking 11 characters instead of 7 from the end of the base32 encoding.)

That's what I meant by "spoofing" in an earlier twt.

I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.

Side note: current hashes always end with "a" or "q", which is a bit wasteful. Maybe we should take the first N characters of the base32 encoding instead of the last N.

Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43394987 hashes to find it.
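For reference, here is a minimal sketch of how these hashes can be computed. It assumes the yarn-style scheme (a blake2b-256 digest over the feed URL, timestamp, and content joined by newlines, base32-encoded in lowercase with padding stripped); adjust if the spec your client follows differs. It also illustrates the "a"/"q" observation: the final base32 character encodes only one leftover bit of the 256-bit digest.

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, content: str, length: int = 11) -> str:
    # Assumption: payload layout follows the twt-hash convention of
    # url, timestamp, and content joined by newlines.
    payload = f"{feed_url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    # Current method: take the last N characters of the encoding.
    return encoded[-length:]

h = twt_hash("https://example.com/twtxt.txt",
             "2024-09-14T22:00Z",
             "Useful backup command: ...")
print(h)
# A 256-bit digest fills 51 full base32 characters plus one character
# carrying a single bit, so the last character is always "a" (0) or
# "q" (16) - the wasted entropy noted above.
assert h[-1] in "aq"
```

With 11 characters (55 bits, effectively 51 given the final character), the birthday bound puts likely collisions around sqrt(2^51) ≈ 5 × 10^7 attempts, which lines up with the ~4.3 × 10^7 hashes computed above; taking the first N characters instead would recover the lost bits.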
@off_grid_living Aww thanks! 🤗
There are certainly improvements that can be made to this tool.🤞
[47°09′46″S, 126°43′58″W] Resetting transponder
@lyse brr, we have the same here. Starting to get cold riding motorcycle to work in the morning.
@prx I haven't messed with rdomains, but still it might help if you included the command that produced that error (and whether you ran it as root).
@prologic

They're in Section 6:

- Receiver should adopt UDP GRO. (Something about saving CPU processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC.

- Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".

- Use multiple threads when receiving large files.
[47°09′09″S, 126°43′44″W] Transponder jammed
The missing context makes it kind of hard to follow.
On my blog: Free Culture Book Club — Aumyr, part 2 https://john.colagioia.net/blog/2024/09/14/aumyr-2.html #freeculture #bookclub
[47°09′48″S, 126°43′26″W] Transfer aborted
We need more support summer software :(
Pinellas County - Long Run: 11.04 miles, 00:11:22 average pace, 02:05:22 duration
body was a bit worn out today. switched it up to walk-run after about 5 miles because i have a daddy-daughter dance this afternoon and did not want to be too stiff. met another runner who actually only lives about a mile or less from me. maybe i will try to meet with him after my business trip next week.
#running
[47°09′20″S, 126°43′53″W] Reading: 0.55 Sv
[47°09′34″S, 126°43′34″W] Reading: 1.06000 PPM
@mckinley

> > HTTPS is supposed to do [verification] anyway.
>
> TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

> > feed locations [being] URLs gives some flexibility
>
> It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I'm also not very familiar with IPFS or IPNS.

I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
edit: [...] have an unjustified* [...]*
@prologic 🤯 HOLLY! ... I'm definitely adding this to my Jenny's publish_command script!! THANK YOU! Now my website has TWO pages instead of just a boring one 😂

New: Log Page
@prologic does it render threads nicely, or is it a straight, flat timeline?
@prologic Will try it right away!
Many thanks!
@aelaraji Have you considered https://git.mills.io/yarnsocial/twtxt2html
🧮 USERS:1 FEEDS:2 TWTS:1092 ARCHIVED:78761 CACHE:2445 FOLLOWERS:17 FOLLOWING:14
@sorenpeter !! I freaking love your Timeline ... I kind of have an unjustified _PHP phobia_ 😅 but, I'm definitely thinking about giving it a try!

/ME wondering if it's possible to use it locally just to read and manage my feed at first and then _maybe_ make it publicly accessible later.
https://galusik.fr/fridayrockmetal/2024-09-13-frm.m3u Tonight #fridayrockmetal playlist
On my blog: Toots 🦣 from 09/09 to 09/13 https://john.colagioia.net/blog/2024/09/13/week.html #linkdump #socialmedia #quotes #week
Ta, @bender! Correct, apart from resizing, no further processing on my end. That's just the Japanese sunset photo engineer's magic. :-) In all its original glory (3.2 MiB): https://lyse.isobeef.org/abendhimmel-2024-09-13/02.JPG
@off_grid_living Looks like you're describing a captcha. They do not really work. Bots seem to solve them, too.
@movq Thanks! Yeah, one week for autumn and spring must be enough. Or so the weather thinks. Looks like there is only on or off.
Because it needs to be seen bigger!

Lyse's sunset
@lyse pretty cool! No processing, those are the colours the camera saw, right? Amazing!
@prologic Hey, Best wishes! Have fun! 🥳