# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 2172
# self = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=372
# next = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=472
# prev = https://watcher.sour.is?uri=https://twtxt.net/user/mckinley/twtxt.txt&offset=272
- A 7 character hash with 32 possible characters has 34,359,738,368 possible combinations. More than I would have thought, but it's not that many in the grand scheme of things.
- Assuming a rate of 50,000 hashes per second, which I think might be feasible on modest consumer hardware, you're looking at about 8 days to generate all possible hashes if you have no duplicates.
I'm sure the strange generation method affects the probability, but I don't know how to account for that. My math is most likely wrong as it is but I think a collision is doable.
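Back-of-the-envelope, in Python (the flat 32-character alphabet and constant 50,000 hashes per second are my assumptions from above):

```python
# Rough check of the numbers above; both figures are assumptions.
keyspace = 32 ** 7            # 7 base32 characters -> 34,359,738,368 combinations
rate = 50_000                 # guessed hashes per second on modest hardware
seconds = keyspace / rate
print(f"{keyspace:,} possible hashes")
print(f"about {seconds / 86_400:.1f} days to enumerate them all")
```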
Was the info on dev.twtxt.net moved somewhere else? I can't remember exactly how the hashes work. It's the URL of the feed, the time, and the message put in a specific order and then hashed, right? Then that hash is encoded in base32 and the last 7 characters are taken from it? Do I have that completely wrong?
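Written out the way I remember it; the field order, blake2b-256, and lowercase unpadded base32 are all guesses on my part, so don't take this as the actual spec:

```python
import base64
import hashlib

# Sketch of my recollection only; not verified against the spec.
def twt_hash(feed_url: str, timestamp: str, text: str) -> str:
    payload = "\n".join([feed_url, timestamp, text]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    b32 = base64.b32encode(digest).decode("ascii").lower().rstrip("=")
    return b32[-7:]  # last 7 characters

print(twt_hash("https://example.com/twtxt.txt",
               "2022-01-01T12:00:00Z", "Hello, twtxt!"))
```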
Have you happened to find a twt hash collision in your crawling adventures? If not, I wonder if it would be feasible to brute force one and see what happens.
@prologic I don't want to pay for a domain name just for that. Out of curiosity, I went to https://dontbeevil.com/ to see if it was taken and I found an anti-Google poem but guess what else was there?
Google Analytics!
I'm generally not a fan of using metrics beyond typical server logs. What information would be collected and how would you use it?
@prologic hacker-news-newest I've been telling people about this for 4 years. It resurfaces every now and then. There's always some mild outrage and then nothing gets done about it.
It's been time to stop using Google. They abandoned the whole "Don't be evil" thing a long time ago.
@prologic @darch The local Discover timeline should definitely be preserved. A global timeline would be very nice to have, but it should be separate.
@prologic I think it's more about "The Algorithm" dictating what people do and do not see. People used to seek out material they wanted to see, now they just scroll through the algorithm-generated feeds of 2-3 websites and hope they find something interesting. It's open to interpretation :)
@prologic Yeah, but at least Monero protects your anonymity. I'll be the first guy in line for a solid proof-of-stake privacy coin but I don't think the technology has matured quite enough.
QR codes frustrate me. I don't think we should be teaching people that it's okay to scan an unintelligible barcode with their camera and instantly be transported to some arbitrary site.
On the other hand, vanity QR codes are really cool.
I agree. Everybody should have their own website. That's how the Web used to be.
The Oatmeal: Reaching people on the internet
Sure, let's boycott Bitcoin and use Monero instead :)
@prologic Yes, it does its own crawling. You can check if a particular website is indexed by searching for a domain like this: site:mckinley.cc
@movq hacker-news-newest I skimmed through that article this morning and I had a similar reaction. I don't think blogs were ever "gone" for technical people like us, but they were for a lot of other people. They're making a comeback now with the rise of Medium and Substack.
The author rightly blames search engines. A similar revelation hit me like a truck after I used Marginalia Search a few times. Give it a try.
The language police have already won, because we're here talking about it. It all comes down to attention and internet points.
@prologic
> There’s too much of this “libre” nonsense out there
I realized this earlier today when I opened LibreWolf and went to librespeed.org to test my internet connection.
It seems like this per-event feed thing would be a lot of work. Personally, I think it's fine the way it is now. Publicly broadcasting unfollows went a little too far in my opinion but that was removed, right?
@adi I just typed the dollar signs into the reply box. The post on my plaintext feed is exactly what I typed. You typed two dollar signs and it didn't work, but maybe they have to be separated? Test: $ $
Yeah, it worked. Check my plaintext feed. The dollar signs turn into parentheses escaped with backslashes when viewed on the web client.
@prologic I think @off_grid_living was talking about that strange bug we talked about a while ago where two dollar signs turn into escaped parenthesis. I think I saw it happening within the last day or so. $ test $
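To illustrate, a naive delimiter rewrite like this (just a guess at the cause, not Yarn's actual code) produces exactly that behavior:

```python
import re

# Hypothetical: convert $...$ "math" spans to \( ... \) the way a
# MathJax-style preprocessor might, which matches the bug described.
def rewrite_math(text: str) -> str:
    return re.sub(r"\$(.+?)\$", r"\\(\1\\)", text)

print(rewrite_math("Test: $ test $"))   # -> Test: \( test \)
```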
The user agent regex was made a little more restrictive after my git issue, but I think someone could use this and really start breaking things. I want to poke around more than I already have, but I'm not doing it on a live production instance of Yarn.
Honestly, the entire follow system is flawed. Check my followers: #3 was a web crawler with a user agent that happened to fit the regex, and #17 was me requesting my own feed with a simple curl command.
Unfortunately, I don't see a real solution to the problem while keeping the ability for external feeds to show up as "following" a user on a Yarn pod.
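For example, something as simple as this is enough to show up as a "follower" (the User-Agent format is my best recollection of how twtxt clients identify themselves, so treat it as a guess):

```python
import urllib.request

# Fetch a feed with a twtxt-style User-Agent; a pod that infers followers
# from user agents will happily count this as one. Nick and feed URL in the
# header are made up.
req = urllib.request.Request(
    "https://twtxt.net/user/mckinley/twtxt.txt",
    headers={"User-Agent": "twtxt/1.2.3 (+https://example.com/twtxt.txt; @example)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, len(resp.read()), "bytes")
```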
@twtxt I didn't know unfollow events were publicly broadcasted...
This is the closest thing I've ever had to a "bulk optical disc ripper". Picture was taken a few years ago. It was an old Dell PC from ~2009 with three optical drives hooked up. The case only had 2 5.25" bays, so the side cover was off and the third drive was hanging out.
Lubuntu was installed on the original hard drive, and it was running 3 instances of Exact Audio Copy over Wine, one instance for each drive. I had one hell of a weekend with this thing, let me tell you.
@prologic I certainly would, but I seriously doubt most people would care enough about it.
Do you have an approximate number of feeds you can share with us?
\n> "We know lots of people will find it an invasion of privacy, we 100% get that, and it’s not the solution for those folks,"\n\nIn other words, "Yeah, it's an invasion of your privacy. What are you gonna do about it?"
> "We know lots of people will find it an invasion of privacy, we 100% get that, and it’s not the solution for those folks,"
In other words, "Yeah, it's an invasion of your privacy. What are you gonna do about it?"
The JS-free experience, overall, isn't as bad as I thought it would be. I don't know if you can create an account without JavaScript but you can at least make and reply to posts.
Kudos for not blocking Tor exit nodes by the way @prologic.
@eldersnake How's the experience actually using PageKite? I've recommended it as an alternative to Ngrok but I haven't had a reason to use it myself.
This whole forked conversation thing is confusing. What's going on?
There's no editing of messages, no use of the rich text buttons in the text box, no replying. I think I should be able to reply by going to a conversation page and using the reply box there. Testing that now.
Test post from Tor with JavaScript disabled. A lot of small features of the Web client rely on JS, but let's see if I can at least log in and post.
@prologic
> duplicate posts
That's strange. I don't sync my posts between here and mckinley.cc. I have an alternate feed on my website that only contains the most recent 25 posts. You should only have duplicate messages if you follow mckinley.cc/twtxt.txt *and* mckinley.cc/twtxt-25.txt
> Is this something we should support in Pods?
I haven't seen anyone else with two distinct feeds, so a feature like that probably wouldn't be used often.
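For the curious, the alternate feed is nothing fancy; something along these lines would do it (the file names match the ones above, but the rest is just a sketch and assumes new twts are appended at the bottom):

```python
# Keep only the 25 most recent twts; assumes the full feed appends new
# entries at the bottom and that comment/metadata lines start with "#".
with open("twtxt.txt") as full:
    twts = [line for line in full if line.strip() and not line.startswith("#")]

with open("twtxt-25.txt", "w") as recent:
    recent.writelines(twts[-25:])
```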
From a conversation a little while ago:
> Twtxt’s original spec is like taking a 140 character long string of text, loading it into a cannon, and shooting it off into space. I like that idea, which is why I keep a separate feed on my website. Yarn is a whole different concept, it adds a lot of the “social” elements back into twtxt.
Search engine works great, by the way. :)
@prologic Woah, an italicized emoji. Don't think I've ever seen that before. The two feeds are different. Most of my original posts are on my website's feed (in 140 characters or less) and I use this one mostly for interacting with others.
I keep two feeds because, while I really enjoy what the dev.twtxt.net spec extensions and yarn.social pods have to offer, I also like the effect of the original twtxt specification.
@prologic No, I'm not bothered by it at all. I get enough bots on there trying to exploit old WordPress vulnerabilities and the like that I can't be too mad about a legitimate service doing what it was designed to do.
@prologic It was my main feed, @mckinley, when I had access logs on for a while.
It doesn't take up much bandwidth because it's not actually sending over the file every time. It really just clogs up the log file, and `grep -v "/twtxt.txt" access_log` fixes that.
As a user, I think 1-3 times a day would be fine.
As someone who pays to host a twtxt feed, I don't mind what you set it to as long as it's not unreasonably often. It won't really make a difference to me. As of about 6 months ago, 3 different yarn.social pods each request /twtxt.txt (with an If-Modified-Since header) every 5 minutes, 24/7. I think I also had a different twtxt client requesting it every 10 minutes.
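For reference, those conditional requests look roughly like this (the URL and date are just placeholders):

```python
import urllib.error
import urllib.request

# Conditional fetch: a 304 means the feed hasn't changed, so the server
# sends no body and the poll costs almost nothing.
req = urllib.request.Request(
    "https://example.com/twtxt.txt",
    headers={"If-Modified-Since": "Sat, 01 Jan 2022 00:00:00 GMT"},
)
try:
    with urllib.request.urlopen(req) as resp:
        print(resp.status)      # 200: feed changed, full body returned
except urllib.error.HTTPError as e:
    print(e.code)               # 304: not modified, nothing to download
```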
Not only do they make you connect an account to an unrelated service, they require those invasive permissions. Is that a video conference thing? Jitsi Meet is the way.
@movq
> Maybe, over time, everything evolves into Usenet.
Convergent evolution, just like how creatures keep evolving to resemble crabs.
@prologic I'll have to check out the search engine. Congratulations on the release.
@adi There's a very good chance that it will be fixed eventually. Just keep in mind I have a closet full of old computers (only a small handful of which are listed on my website) and I don't have the time, energy, or soldering skills to get them all running.
The Model 100 is alive! I opened it up last night. The backup battery is fine and the corrosion on the main battery contacts cleaned off well with vinegar. It fired right up with some fresh AAs. The screen looks good, but the keyboard isn't working. I got sidetracked last night and haven't had time today to resume the project. Maybe later today.
In the meantime, here's an exploded view taken from the service manual.
TRS-80 Model 100 exploded view
Edit: #TRS80
@brasshopper It's definitely too low. These gimmicky social media services don't last long. I remember one that would take two weeks or something to deliver messages. It was actually meant to mimic letters sent in the mail. I don't know why you wouldn't just send letters at that point, too. Last I checked, the postal service still works.