# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 30
# self = https://watcher.sour.is/conv/pvju5cq
@prologic Some criticisms and a possible alternative direction:

1. Key rotation. I'm not a security person, but my understanding is that it's good to be able to give keys an expiry date and replace them with new ones periodically.

2. It makes maintaining a feed more complicated. Now instead of just needing to put a file on a web server (and scan the logs for user agents) I also need to do this. What brought me to twtxt was its radical simplicity.

Instead, maybe we should think about a way to allow old urls to be rotated out? Like, my metadata could somehow say that X used to be my primary URL, but from date D onward my primary url is Y. (Or, if you really want to use public key cryptography, maybe something similar could be used for key rotation there.)

It's nice that your scheme would add a way to verify the twts you download, but https is supposed to do that anyway. If you don't trust https to do that (maybe you don't like relying on root CAs?) then maybe your preferred solution should be reflected by your primary feed url. E.g. if you prefer the security offered by IPFS, then maybe an IPNS url would do the trick. The fact that feed locations are URLs gives some flexibility. (But then rotation is still an issue, if I understand ipns right.)
@falsifian In my opinion it was a mistake that we defined the first url field in the feed to define the URL for hashing. It should have been the last encountered one. Then, assuming append-style feeds, you could override the old URL with a new one from a certain point on:

# url = https://example.com/alias/txtxt.txt
# url = https://example.com/initial/twtxt.txt


# url = https://example.com/new/twtxt.txt

# url = https://example.com/brand-new/twtxt.txt


In theory, the same could be done for prepend-style feeds. They do exist; I've come across them. The parser would just have to calculate the hashes afterwards and not immediately.
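The last-URL-wins idea for append-style feeds could be sketched roughly like this. The hash construction below is my understanding of the current twt hash extension (blake2b-256 over url, timestamp, and content, base32-encoded, last 7 characters); treat it as an assumption, and the parsing helper `hashes_with_last_url` is purely illustrative:

```python
import base64
import hashlib

def twt_hash(url: str, timestamp: str, content: str) -> str:
    # My understanding of the current twt hash construction:
    # blake2b-256 over "url\ntimestamp\ncontent", base32-encoded
    # (lowercase, padding stripped), keeping the last 7 characters.
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    encoded = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return encoded[-7:]

def hashes_with_last_url(feed_text: str) -> list[str]:
    # Append-style feed: the most recently seen "# url = ..." wins,
    # so twts after a new url line are hashed against the new URL.
    current_url = None
    hashes = []
    for line in feed_text.splitlines():
        line = line.strip()
        if line.startswith("# url ="):
            current_url = line.split("=", 1)[1].strip()
        elif line and not line.startswith("#"):
            timestamp, _, content = line.partition("\t")
            hashes.append(twt_hash(current_url, timestamp, content))
    return hashes
```

So a feed that switches its `# url` line mid-file would hash older twts against the old URL and newer ones against the new URL, without breaking existing conversation subjects.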
@lyse This looks like a nice way to do it.

Another thought: if clients can't agree on the url (for example, if we switch to this new way, but some old clients still do it the old way), that could be mitigated by computing many hashes for each twt: one for every url in the feed. So, if a feed has three URLs, every twt is associated with three hashes when it comes time to put threads together.

A client still needs to choose one url to use for the hash when composing a reply, but this might add some breathing room if there's a period when clients are doing different things.

(From what I understand of jenny, this would be difficult to implement there since each pseudo-email can only have one msgid to match to the in-reply-to headers. I don't know about other clients.)
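The many-hashes idea could be sketched like this; `twt_hash` follows my reading of the current hash construction, and `index_twt` is a hypothetical helper, not anything from a spec:

```python
import base64
import hashlib

def twt_hash(url, timestamp, content):
    # blake2b-256 over "url\ntimestamp\ncontent", base32 lowercase,
    # last 7 chars (my reading of the current twt hash extension).
    payload = f"{url}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-7:]

def index_twt(index, urls, timestamp, content):
    # One index entry per feed URL: a reply that hashed this twt
    # against any of the feed's URLs still resolves to the same twt.
    for url in urls:
        index[twt_hash(url, timestamp, content)] = (timestamp, content)
```

A feed with three URLs would then contribute three index entries per twt, at the cost of a somewhat larger lookup table when putting threads together.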
@falsifian

> Key rotation

Key rotation is useful for security reasons, but I don't think it's necessary here because it's only used for verifying one's identity. It's no different (to me) than Nostr or a cryptocurrency. You change your key, you change your identity.

> It makes maintaining a feed more complicated.

This is an additional step that you'd have to perform, but I definitely wouldn't want to require it for compatibility reasons. I don't see it as any more complicated than computing twt hashes for each post, which already requires you to have a non-trivial client application.

> Instead, maybe...allow old urls to be rotated out?

That could absolutely work and might be a better solution than signatures.

> HTTPS is supposed to do [verification] anyway.

TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

> feed locations [being] URLs gives some flexibility

It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to imply that the url tag should be a URL that clients can use to find a feed at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a feed, it could be a great way to fix it on an individual basis without breaking any specs :)
@mckinley

> > HTTPS is supposed to do [verification] anyway.
>
> TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn't, for example, verify that a file downloaded from server A is from the same entity as the one from server B.

I was confused by this response for a while, but now I think I understand what you're getting at. You are pointing out that with signed feeds, I can verify the authenticity of a feed without accessing the original server, whereas with HTTPS I can't verify a feed unless I download it myself from the origin server. Is that right?

I.e. if the HTTPS origin server is online and I don't mind taking the time and bandwidth to contact it, then perhaps signed feeds offer no advantage, but if the origin server might not be online, or I want to download a big archive of lots of feeds at once without contacting each server individually, then I need signed feeds.

> > feed locations [being] URLs gives some flexibility
>
> It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, urn:uuid:*, or a regular old URL if you wanted to. The spec seems to indicate that the url tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I'm not very familiar with IP{F,N}S but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)

I'm also not very familiar with IPFS or IPNS.

I haven't been following the other twts about signatures carefully. I just hope whatever you smart people come up with will be backwards-compatible so it still works if I'm too lazy to change how I publish my feed :-)
@falsifian I agree completely about backwards compatibility.
@falsifian I didn't explain it very well. TLS won't help you if you change your domain name, because it will be a completely different TLS certificate that could have been issued to anyone who wanted to pay $10/yr for that domain. How would people know it's the same person on the new domain? Now, this isn't the biggest problem for something like twtxt, but it is a reasonable concern that could be solved by signing the feed with an unchanging key.
@falsifian One of the nice things, I think, is that you can almost assuredly trust that the hash is a correct representation of the thread, because it was computed via content addressing in the first place, so all you need to do is copy it 👌
@falsifian TLS won't help you if you change your domain name. How will people know if it's really you? Maybe that's not the biggest problem for something with such low stakes as twtxt, but it's a reasonable concern that could be solved using signatures from an unchanging cryptographic key.

This idea is the basis of Nostr. Notes can be posted to many relays and every note is signed with your private key. It doesn't matter where you get the note from, your client can verify its authenticity. That way, relays don't need to be trusted.
The tag URI scheme looks interesting. I like that it's human-readable and -writable. And since we already got the timestamp in the twtxt.txt it would be somewhat trivial to parse. But there is still the issue of what the name/id should be... Maybe it doesn't have to be that strict?

Instead of using tag: as the prefix/protocol, it would make it clearer what we are talking about by using in-reply-to: (https://indieweb.org/in-reply-to) or replyto: similar to mailto:

1. (reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)
2. (in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)
3. (replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)

I know it's longer than 7-11 characters, but it's self-explaining when looking at the twtxt.txt in the raw, and the cases above can all be caught with this regex: \([\w-]*reply[\w-]*\:

Is this something that would work?
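For what it's worth, the regex does appear to catch all three forms; here is a quick Python check (in Python the colon needs no escape, and the hashtag subject at the end is just an illustrative non-match):

```python
import re

# The proposed marker regex from above, minus the escaped colon.
pattern = re.compile(r"\([\w-]*reply[\w-]*:")

examples = [
    "(reply:sorenpeter@darch.dk,2024-09-15T12:06:27Z)",
    "(in-reply-to:darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
    "(replyto:http://darch.dk/twtxt.txt,2024-09-15T12:06:27Z)",
]
# All three reply variants match ("reply" surrounded by word chars/hyphens).
assert all(pattern.search(e) for e in examples)

# A current-style hash subject is not matched, since "#" is not \w or "-".
assert pattern.search("(#abcdefg) hello") is None
```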
@falsifian @prologic @sorenpeter @lyse I think, maybe, the way forward here is to combine an unchanging feed identifier (e.g. a public key fingerprint) with a longer hash to create a "twt hash v2" spec. v1 hashes can continue to be used for old conversations depending on client support.
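Purely as an illustrative sketch of what such a "twt hash v2" might look like (the fingerprint format, payload layout, and 12-character length here are all arbitrary assumptions, not a spec):

```python
import base64
import hashlib

def twt_hash_v2(fingerprint, timestamp, content):
    # Hypothetical "v2" construction: an unchanging key fingerprint
    # replaces the feed URL in the payload, and more of the digest is
    # kept, so the hash survives URL changes and is harder to collide.
    payload = f"{fingerprint}\n{timestamp}\n{content}".encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return base64.b32encode(digest).decode("ascii").lower().rstrip("=")[-12:]
```

Because the URL never enters the payload, the hash would stay stable across feed moves; only a key change would break identity, which matches the "you change your key, you change your identity" framing above.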
Keys for identity are too much for me. This steps up the complexity by a lot. Simplicity is what made me join twtxt with its extensions. A feed URL is all I need.

Eventually, twt hashes have to change (lengthen at least), no doubt about that. But I'd like to keep it equally simple.
@lyse I think I’m with you on this. 🤔 I mean, it’s a cool and interesting topic, but it also adds lots of overhead. (And I’m not yet convinced that we actually *need* it. People don’t change URLs on a daily basis (but they do edit twts all the time).)
@movq I tend to agree too, I think the focus should be on fixing and supporting Edits first 👌