# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 196269
# self = https://watcher.sour.is?offset=170256
# next = https://watcher.sour.is?offset=170356
# prev = https://watcher.sour.is?offset=170156
I was not suggesting that everyone needs to set up a working webfinger endpoint, but that we take the format of nick+(sub)domain as the base for generating the hash, together with the message date and content.

If we omit the protocol prefix from the way we do things now, will that not solve most of the problems? In the case of gemini://gemini.ctrl-c.club/~nristen/twtxt.txt they also have a working twtxt.txt at https://ctrl-c.club/~nristen/twtxt.txt ... damn, I just noticed the gemini. subdomain.

Okay, what about defining a preferred protocol as part of the hash schema? So 1: https, 2: http, 3: gemini, 4: gopher?
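A rough sketch of what that proposal might look like (purely illustrative; the function name, separators and digest size are my own assumptions, not anything today's clients actually do): hash the nick plus (sub)domain together with the twt's date and content, so the protocol prefix never enters into it:

    import hashlib

    def twt_hash(nick: str, domain: str, created: str, content: str) -> str:
        # Sketch of the proposed scheme: build a protocol-independent subject
        # from nick + (sub)domain plus the twt's creation date and content.
        subject = f"{nick}@{domain}\n{created}\n{content}"
        return hashlib.blake2b(subject.encode("utf-8"), digest_size=8).hexdigest()

    # The same feed served over https:// or gemini:// would then hash identically:
    print(twt_hash("nristen", "ctrl-c.club", "2024-09-08T12:00:00Z", "hello"))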
[47°09′15″S, 126°43′48″W] Dosimeter malfunction
Pinellas County - Hills: 5.25 miles, 00:10:03 average pace, 00:52:48 duration
some hill (or overpass here in Florida) workouts.
#running
The problem we are sporadically experiencing relates to content, specifically the editing of it. It breaks things.
[47°09′43″S, 126°43′59″W] Raw reading: 0x66E17831, offset +/-5
[47°09′00″S, 126°43′32″W] --white noise--
Dogs are much tastier than cats, someone from Springfield told me so. ⌘ Read more
@xuu it's not really strictly required if we're just talking about identity though, right? If we're talking about encryption then yes, I agree rotating keys becomes very important if you want attributes like perfect forward secrecy.
@xuu that could work too, but that requires a random value, a set of keys and signature verification of the value, which I don't really have a problem with.
@xuu yes, I'm less concerned about solving the integrity part of the problem, i.e. whether we can trust that the content of a feed is actually written by a certain author. However, that's not to say we shouldn't think about also leveraging keys to be able to do that; maybe it's an optional feature?
What were the recommended mitigations?
[47°09′21″S, 126°43′09″W] Reading: 1.65000 PPM
@sorenpeter There was a client that would generate a unique hash for each twt. It didn't get wide adoption.
@prologic identity and content integrity are two different problems.
Key rotation is a very important feature in a system like this.
> the right way to solve this is to use public/private key(s) where you actually have a public key fingerprint as your feed’s unique identity that never changes.

i would rather it be a random value signed by a key. That way the key can change but the value stays the same.
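A minimal sketch of that idea (assuming the third-party cryptography package; the names are made up): the random value is the stable identity, and whatever key is currently active just vouches for it, so keys can rotate without the identity changing:

    import secrets
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The feed's stable identity: a random value that never changes.
    identity = secrets.token_hex(16)

    # Any currently-active key signs that value; rotating the key later
    # does not change the identity, only who vouches for it.
    key = Ed25519PrivateKey.generate()
    signature = key.sign(identity.encode())
    key.public_key().verify(signature, identity.encode())  # raises InvalidSignature on mismatch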
@xuu Thanks for the link. I found a pdf on one of the authors' home pages: https://ahmadhassandebugs.github.io/assets/pdf/quic_www24.pdf . I wonder how the protocol was evaluated closer to the time it became a standard, and whether anything has changed. I wonder if network speeds have grown faster than CPU speeds since then. The paper says the performance is around the same below around 600 Mbps.

To be fair, I don't think QUIC was ever expected to be faster for transferring a single stream of data. I think QUIC is supposed to reduce the impact of a dropped packet by making sure it only affects the stream it's part of. I imagine QUIC still has that advantage, and this paper is showing the other side of a tradeoff.
@movq Damn! I'm two years late to the discussion 😅 So basically, one could just make a bash script/cron job on the side for pinging non-HTTP feeds from time to time and the receiving end would get it IF they check their logs.
🧮 USERS:1 FEEDS:2 TWTS:1089 ARCHIVED:78724 CACHE:2505 FOLLOWERS:17 FOLLOWING:14
Interesting.. QUIC isn't very quick over fast internet.

> QUIC is expected to be a game-changer in improving web application performance. In this paper, we conduct a systematic examination of QUIC's performance over high-speed networks. We find that over fast Internet, the UDP+QUIC+HTTP/3 stack suffers a data rate reduction of up to 45.2% compared to the TCP+TLS+HTTP/2 counterpart. Moreover, the performance gap between QUIC and HTTP/2 grows as the underlying bandwidth increases. We observe this issue on lightweight data transfer clients and major web browsers (Chrome, Edge, Firefox, Opera), on different hosts (desktop, mobile), and over diverse networks (wired broadband, cellular). It affects not only file transfers, but also various applications such as video streaming (up to 9.8% video bitrate reduction) and web browsing. Through rigorous packet trace analysis and kernel- and user-space profiling, we identify the root cause to be high receiver-side processing overhead, in particular, excessive data packets and QUIC's user-space ACKs. We make concrete recommendations for mitigating the observed performance issues.

https://dl.acm.org/doi/10.1145/3589334.3645323
hmm seems like movim is a little too fancy to run on shared hosting with no daemons..
@movq Yeah, public transport is great if it works. All too often, it just doesn't, though. :-( Unfortunately, for my trips to the offices, it's always slower than a car.

That website looks like one I would build. :'-D I just always go to bahn.de. It even works alright if the train is operated by another company. At least it's good enough for my connections (VVS, Arverio, Ding & Co.). When GoAhead took over the line from DB, their delay/cancel information on their own website was just as bad as the one relayed by DB most of the time.
@movq @bender That was indeed a funny adventure. I really had to laugh about the mess on the floor I made. :-D
Speaking of public transportation, though: *If* it works, then it’s an amazing system. I love it.

I recently took the time to find an alternative route to one of my doctors. Hardly any people using that route *and* it’s faster. Absolutely brilliant. It’s like having a chauffeur. 😅

*But* navigating through that system is also a total nightmare. Which bus takes you to which places at which times, getting info about current construction sites, all that stuff. It takes forever.

And it doesn’t help at all that this is what their website looks like:

https://movq.de/v/acb23dc1c2/s.png

You can’t move that window at the bottom. It just sits there and takes up space from the map. It gets even worse: When you ask for a route, you get to see the buses and individual stops and all that – but all in that little window with that large font! Why do we all have widescreen monitors and then stack UI items vertically?

Sure, 30 years ago it was much worse. But it could also be much better today. 😅
@lyse talk about an epic adventure! :-D
@lyse Gosh, that sounds so horrible. 🙈🤢
[47°09′20″S, 126°43′53″W] Storm recedes -- back to normal work
Another idea for the upcoming Advent Of Code 2024:

OS/2 Warp 4 came with Java and that not only meant a runtime but *a JDK* including *API docs*. So, for AoC, I could try to solve as many puzzles as I can in that environment, directly on my old Pentium. For later puzzles, I’ll definitely want to switch to my normal workstation for faster development cycles – but I can still use Java and try to backport the solutions.

Sounds interesting. 🤔

https://movq.de/v/81ac0142f2/1.ff.jpg
https://movq.de/v/81ac0142f2/2.ff.jpg
@movq Right!
The knowledge gain was still very limited, but it actually turned out a little better than I thought. Talking to the people face to face was really nice. And we also had a surprise barbie in the end, so it was worth coming. :-D

Also, the train connections worked out. Just on the way back, I made the mistake of using the toilet on the train. I've experienced way worse, but there was certainly a little urine odor in the air. The second thing I noticed was a large pile of toilet paper in the bowl.

When I wanted to wash my hands, I got the soap dispenser to work, but the tap just dripped extremely slowly. Not usable. Then it clicked why there was all this paper in the loo. I tried to wipe the soap off with toilet paper as best as I could and then used my water bottle to rinse my hands. Luckily, I had topped it off before I left the office. I only had to use my jumper to increase grip for actually getting the lid off. The sparkling water happily soaked my jumper and the floor in an instant. :-D

Tip for your next train ride: Bring your own water supply, preferably non-carbonated. Alternatively, just use the office toilet beforehand.

Turns out that at least this train model has two separate water tanks. One for the faucet and another for the loo. I flushed the paper without issues before I left.
@aelaraji Yeah, that’s pretty close to what was outlined here: https://twtxt.net/twt/ansuy4a 😅
[47°09′22″S, 126°43′49″W] Wind speed: 95kph -- batteries low
@prologic Unfortunately it only works if I pull the feed in debug mode (jenny -D); otherwise, it messes things up if I add that snippet of text to the links in my .config/jenny/follow file 😅 Anyway, it was a nice try.
So this is a great thread. I have been thinking about this too.. and what if we are coming at it from the wrong direction? Identity being tied to a given URL has always been a pain point. If i get a new URL it's almost as if i have a new identity, because not only am I serving at a new location but all my previous communications are broken because the hashes are all wrong.

What if instead we used this idea of signatures to thread the URLs together into one identity? We keep the URL-to-hash mapping in place. Changing that now is basically a no go. But we can create a signature chain that can link identities together. So if i move to a new URL i update the chain hosted by my primary identity to include the new URL. If an archived feed's old URL is now dead, we can point to where it is now hosted and use the current convention of hashing based on the first URL:

The signature chain can also be used to rotate to new keys over time. Just sign in a new key or revoke an old one. The prior signatures remain valid within the scope of time the signatures were made and the keys were active.

The signature file can be hosted anywhere as long as it can be fetched by a reasonable protocol. So say we could use a webfinger that directs to the signature file? you have an identity like frank@beans.co that will discover a feed at some URL and a signature chain at another URL. Maybe even include the most recent signing key?

From there the client can auto discover old feeds to link them together into one complete timeline. And the signatures can validate that its all correct.

I like the idea of maybe putting the chain in the feed preamble and keeping the single self-contained file.. but wonder if that would cause lots of clutter? The signature chain would be something like a log with what is changing (new key, revoke, add url) and a signature of the change + the previous signature.


# chain: ADDKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w 
# sig: BEGIN SALTPACK SIGNED MESSAGE. ... 
# chain: ADDURL https://txt.sour.is/user/xuu
# sig: BEGIN SALTPACK SIGNED MESSAGE. ...
# chain: REVKEY kex14zwrx68cfkg28kjdstvcw4pslazwtgyeueqlg6z7y3f85h29crjsgfmu0w
# sig: ...
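For what it's worth, a very rough sketch of how a client might walk such a chain (illustrative only: the tuple format, the trust-on-first-use rule for the initial key, and the payload layout are my assumptions, and verify() is just a placeholder for real saltpack/ed25519 verification):

    # Sketch only: entries are (op, value, sig) tuples parsed from "# chain:" / "# sig:" pairs.
    def verify(pubkey: str, payload: str, sig: str) -> bool:
        return True  # placeholder: plug in real saltpack/ed25519 verification here

    def walk_chain(entries):
        active_keys, urls, prev_sig = set(), [], ""
        for op, value, sig in entries:
            payload = f"{op} {value} {prev_sig}"   # assumed: each entry also signs the previous sig
            if op == "ADDKEY" and not active_keys:
                active_keys.add(value)             # trust-on-first-use for the very first key
            elif not any(verify(k, payload, sig) for k in active_keys):
                raise ValueError(f"bad signature on: {op} {value}")
            if op == "ADDKEY":
                active_keys.add(value)
            elif op == "REVKEY":
                active_keys.discard(value)
            elif op == "ADDURL":
                urls.append(value)
            prev_sig = sig
        return active_keys, urls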
[47°09′20″S, 126°43′43″W] Working impossible due to blizzard
A morir al palo. ⌘ Read more
IMO we just have to fix the identity problem and figure out how to detect or support edits.
@sorenpeter No, this is what I want to avoid. For many reasons I stated before, content addressing or hashing is far better here for threading in a decentralized way.
@prologic does that mean that for every new post (not replies) the client will have to generate a UUID or similar when posting and add that to the twt?
[47°09′55″S, 126°43′36″W] Working impossible due to thunderstorm
- There was a little pink mouse! I've saved your life. Give me food! -
https://duque-terron.cat/media/photos/IMG_1837.jpeg #catsoftwtxt
[47°09′41″S, 126°43′31″W] Automatic systems disengaged due to heavy rain
Merci, @movq! I will keep you posted. :-)
@movq Same here for sure. :-D Great, I just saw the start was postponed by yet another half hour. I could have slept longer. Well, gonna catch the later train then.
@prologic yup.
@lyse I personally think that we just go with a magic timestamp approach. It's simpler and easier to implement across the major clients that are still actively developed.

The question is how much time do we give ourselves as we're all a bit time poor and I can't imagine we would do this quickly.
@movq if you do win the lottery, don't forget to include us so we can all join in and share the things that we like to tinker with instead of this whole rat race. 🤣
@bender Big photo capability upgrade?
@aelaraji Nice hack! 👌
I wonder if bento has slightly missed the key to being a total genius approach to host management. ok hear me out. each node periodically pulls configuration from a coordination node that hosts a binary cache. the admin may make changes and pre-build them maybe kick off an update task manually if they want, but the point is there's an automated checkin. for my case, the device I have available for coordination isn't really capable of hosting a binary cache for any of my other machines. the nix store for my dev machine is larger than the entire disk of the coordinator! and due to the yearly heat my best machine can't be reliably powered on all the time. so i started thinking to myself, "self, what if instead of having a central coordinator we fetched configuration from a reliable git mirror (maybe git+torrent some day) and consume it as a flake. the source could even be swapped out using a flake registry (so you don't even have to commit to self-hosting anything other than a json file). then managed hosts only have to be setup to consume the registry and the shared flake (which registers the update agent) and DONE?"
🧮 USERS:1 FEEDS:2 TWTS:1088 ARCHIVED:78704 CACHE:2506 FOLLOWERS:17 FOLLOWING:14
@movq @prologic Hey! I may have found a silly trick to announce my following to people hosting their feeds on the Gemini space, using the requested URI itself instead of relying on the User-Agent 😂. I've copied my current feed over to my (to be) Gemlog for testing. If I do a jenny -D "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt" this happens:

A) As a follower, I get the feed as usual.
B) As the feed owner, I get this in logs:

> hostname:1965 - "gemini://gem.aelaraji.com/twtxt.txt?follower=aelaraji@https://aelaraji.com/twtxt.txt" 20 "text/plain;lang=en-US"

You could do the same for Gopher feeds, but only if you want to announce yourself by throwing an error into their logs; you'll then need a second request to actually fetch the feed. jenny -D "gopher://gopher.aelaraji.com/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt" gave me this:

> gopher.aelaraji.com:70 - [09/Sep/2024:22:08:54 +0000] "GET 0/twtxt.txt&follower=aelaraji@https:/aelaraji.com/twtxt.txt HTTP/1.0" 404 0 "" "Unknown gopher client"

NB: the follower=... string won't appear in gopher logs after a ?, but if I replace it with a + or a & it works. There will be a missing / after the https:. Probably a client thing.
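The same trick in a few lines of Python, as a sketch under the assumptions above (the follower= parameter name and the feed URLs are simply the ones from this post, nothing standardized):

    import urllib.parse

    # Build the request URI so the "who follows me" hint ends up in the server log.
    def announce_url(feed_url: str, nick: str, my_feed: str) -> str:
        follower = urllib.parse.quote(f"{nick}@{my_feed}", safe="@:/")
        return f"{feed_url}?follower={follower}"

    print(announce_url("gemini://gem.aelaraji.com/twtxt.txt",
                       "aelaraji", "https://aelaraji.com/twtxt.txt"))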
if you want your computer to be able to sleep, you'll need a measuring tape and a scientific calculator. first, measure each byte that you have in RAM and take the square root. add that to your total length. we'll need that number later on.
@prologic iPhone 16 Pro Max for you, for sure. If significant other likes to take pictures as much as mine, then one for her too. That's $1,200 each (with 256GB storage).
[47°09′10″S, 126°43′06″W] Automatic systems disengaged due to blizzard
I went straight to bed after posting this and slept for 3 hours. 😩 Can’t I just win the lottery and be done with this whole “money” thing? 🤪

@lyse Oof, well, good luck. Those multi-day meetings are usually really exhausting (and mostly pointless) in our company, hopefully it’s different at yours. ✌️