I just saw that we're supposed to hit 19°C mid next week again. Let's see.
I do like this photo a lot. It brings back memories of cool scouting trips.
I mostly just wanted an excuse to write the program. I don't know how I feel about actually using super-long hashes; could make the twts annoying to read if you prefer to view them untransformed.
Imagine I found this twt one day at https://example.com/twtxt.txt :
2024-09-14T22:00Z\tUseful backup command: rsync -a "$HOME" /mnt/backup
[image: screenshot of the command working]

I responded with "(#5dgoirqemeq) Thanks for the tip!". At that point I've endorsed the twt, but it could later get changed to
2024-09-14T22:00Z\tUseful backup command: rm -rf /some_important_directory
[image: screenshot of the command working]

which also has an 11-character base32 hash of 5dgoirqemeq. (I'm using the existing hashing method with https://example.com/twtxt.txt as the feed URL, but taking 11 characters instead of 7 from the end of the base32 encoding.)
That's what I meant by "spoofing" in an earlier twt.
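For reference, here's a minimal Python sketch of the hashing method as I understand it (blake2b-256 over the feed URL, timestamp, and content joined by newlines, base32 without padding, lowercased; treat the details as my reading of the spec, not gospel):

```python
import base64
import hashlib

def twt_hash(feed_url: str, timestamp: str, content: str, length: int = 7) -> str:
    # Hash "url\ntimestamp\ncontent", then keep the last `length`
    # base32 characters (11 in the collision example above).
    payload = "\n".join((feed_url, timestamp, content)).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
    return b32[-length:]

# The colliding twts above include image markup not reproduced here, so
# this exact input won't hash to 5dgoirqemeq.
print(twt_hash("https://example.com/twtxt.txt",
               "2024-09-14T22:00Z",
               'Useful backup command: rsync -a "$HOME" /mnt/backup',
               length=11))
```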
I don't know if preventing this sort of attack should be a goal, but if it is, the number of bits in the hash should be at least two times log2(number of attempts we want to defend against), where the "two times" is because of the birthday paradox.
Side note: current hashes always end with "a" or "q", which is a bit wasteful: a 256-bit digest isn't a multiple of 5 bits, so the final base32 character carries only a single bit. Maybe we should take the first N characters of the base32 encoding instead of the last N.
Code I used for the above example: https://fossil.falsifian.org/misc/file?name=src/twt_collision/find_collision.c
I only needed to compute 43,394,987 hashes to find it.
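As a back-of-the-envelope check on those numbers (my own arithmetic, not part of the linked program):

```python
import math

BITS_PER_BASE32_CHAR = 5

def expected_attempts(chars: int) -> float:
    # Birthday bound: collisions become likely after ~sqrt(2^bits) hashes.
    return math.sqrt(2 ** (chars * BITS_PER_BASE32_CHAR))

def chars_needed(attempts: int) -> int:
    # Defending against `attempts` tries needs >= 2*log2(attempts) bits.
    return math.ceil(2 * math.log2(attempts) / BITS_PER_BASE32_CHAR)

print(f"{expected_attempts(11):.1e}")  # ~1.9e8 attempts for an 11-char hash
print(chars_needed(43_394_987))        # 11 -- consistent with the collision above
print(chars_needed(2**40))             # 16 chars to resist ~10^12 attempts
```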
They're in Section 6:
- Receivers should adopt UDP GRO. (Something about saving CPU when processing UDP packets; I'm a bit fuzzy about it.) And they have suggestions for making GRO more useful for QUIC. (Rough sketch of enabling GRO after this list.)
- Some other receiver-side suggestions: "sending delayed QUIC ACKs"; "using recvmsg to read multiple UDP packets in a single system call".
- Use multiple threads when receiving large files.
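For anyone curious what adopting UDP GRO looks like in practice, here's a rough Linux-only sketch (not from the paper; the UDP_GRO constant and cmsg layout are my assumptions about the kernel interface):

```python
import socket
import struct

# UDP_GRO isn't exposed by Python's socket module; it's 104 in
# <linux/udp.h> on recent kernels (assumption: Linux >= 5.0).
UDP_GRO = 104

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_UDP, UDP_GRO, 1)  # ask kernel to coalesce
sock.bind(("0.0.0.0", 4433))  # hypothetical QUIC port

# With GRO enabled, one recvmsg() call can return several consecutive
# UDP payloads coalesced into a single buffer; the kernel reports the
# segment size in a control message so the app can split them apart.
data, ancdata, _flags, _addr = sock.recvmsg(65535, socket.CMSG_SPACE(4))
for level, ctype, cdata in ancdata:
    if level == socket.IPPROTO_UDP and ctype == UDP_GRO:
        (seg_size,) = struct.unpack("i", cdata[:4])  # gso_size as a C int
        print(f"{len(data)} bytes coalesced from segments of {seg_size}")
```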
body was a bit worn out today. switched it up to walk-run after about 5 miles because i have a daddy-daughter dance this afternoon and did not want to be too stiff. met another runner who actually only lives about a mile or less from me. maybe i will try to meet with him after my business trip next week.
#running
> > HTTPS is supposed to do [verification] anyway.
>
> TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?
I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.
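To make that concrete, offline verification could look something like the following (purely illustrative: no signature scheme has been settled on, and every name and parameter here is my invention):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_feed(feed_bytes: bytes, signature: bytes, pubkey: bytes) -> bool:
    # Verifies any copy of the feed -- a mirror, a cache, an archive --
    # without ever contacting the origin server.
    try:
        Ed25519PublicKey.from_public_bytes(pubkey).verify(signature, feed_bytes)
        return True
    except InvalidSignature:
        return False
```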
> > feed locations [being] URLs gives some flexibility
>
> It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I'm also not very familiar with IPFS or IPNS.
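For illustration, if the metadata field accepted URIs, a feed could declare any of these as its identifier (all values hypothetical):

```
# url = https://example.com/twtxt.txt
# url = tag:example.com,2024:twtxt
# url = urn:uuid:3f0c9a4e-7d22-4b5a-9c1e-8f6d2a1b0c3d
```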
I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
publish_command script!! THANK YOU! Now my website has TWO pages instead of just a boring one 😂 New: Log Page
/ME wondering if it's possible to use it locally just to read and manage my feed at first and then _maybe_ make it publicly accessible later.
Lyse's sunset
Order placed! Now the wait starts. 😩😂
Welcome immigrants!