# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 195598
# self = https://watcher.sour.is?offset=192746
# next = https://watcher.sour.is?offset=192846
# prev = https://watcher.sour.is?offset=192646
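For reference, fetching a page of twts from the plain API above looks something like this. This is only a sketch: it assumes offset/limit are ordinary query parameters (as the self/next/prev links suggest) and that the response is plain text.

```python
# Sketch: query the watcher plain API described in the header above.
# Assumption: offset/limit are query parameters, response is plain text.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"offset": 192746, "limit": 100})
url = "https://watcher.sour.is/api/plain/twt?" + params
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))  # one twt per line (assumed)
```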
I finally have my new (top-secret) twtxt client in a working state. Next comes the deployment, which I hope to finish tonight. Release date: TBD. Stay tuned!
@prologic haven't had too much time to really try it out yet ^^' i'm um too busy staring at code i wrote while sleep deprived and wondering why i did the things i did, while sleep deprived \@.@
Welp, my rent's gone out and my student loan won't be in for another week, so I'm not spending anything for a while. How's everyone else's September going?
Chances are the database they bought wasn't cheap at all and was sold by some scam company that probably ripped them off for six figures or more for a database that's full of rubbish. 🤣
That is obviously completely wrong. But I can explain it. Some *years* ago, I screwed up my nginx rewrite rules, and that’s how these broken URLs came to be.
It all redirects to /git now, which is why that endpoint sees so much traffic lately.
But what does that mean? Why do they start there? I can only speculate that this company bought an old database of web links and they use that to start crawling. And it was probably a cheap one, because these redirects have been fixed for quite a long time now.
@prologic I’m doing that now as well, but I don’t think this is a good solution. This is going to hurt “self-hosting” in the long run: I cannot afford true self-hosting where I actually do host everything here at home – instead, I must use a cloud provider / VPS for that. It is only a matter of time until *my* provider starts doing AI shit as well (or rather, the customers do it) and then what? I get blocked, e.g. I can’t send email to (some) people anymore. This is already bad and it’s going to get worse.
@movq I heard about a defence against badly-behaved crawlers a while ago: an HTML zip bomb. This post explains how to do it. Essentially, web servers can serve compressed versions of webpages and, with a little trickery, one can replace the compressed page with a different file. After that, any bot that tries to crawl the page will instead download and unpack a zip bomb that will cause it to crash.
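A minimal sketch of that trick in Python, in case anyone wants to play with it. The sizes, port, and the serve-it-to-everyone behavior are made up for illustration; the real setup would only hand this to suspected bots. The point is that a tiny pre-compressed payload inflates enormously on the client.

```python
# Sketch of the "zip bomb" trick: serve a small gzip payload with
# Content-Encoding: gzip, so any client that inflates it in memory
# suddenly holds the full decompressed size.
import gzip
import http.server

# 10 MiB of zeros compresses to roughly 10 KiB; a real bomb would be
# far larger (hypothetical sizes, chosen so this demo stays harmless).
BOMB = gzip.compress(b"\0" * (10 * 1024 * 1024))

class BombHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Encoding", "gzip")  # client decompresses
        self.send_header("Content-Length", str(len(BOMB)))
        self.end_headers()
        self.wfile.write(BOMB)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), BombHandler).serve_forever()
```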
@prologic Yeah, I’ve blocked some large subnets now (most likely overblocking a lot of stuff) and it has died down.
I’m not looking forward to doing this on a regular basis. This is supposed to be a fun hobby – and it was, for many years. Maybe that time is just over.
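To give a sense of the overblocking: even one short prefix covers a huge number of addresses, most of which have nothing to do with the crawlers. The range below is made up, not an actual blocklist entry.

```python
# Hypothetical illustration: a single /14 deny rule covers
# 2**18 = 262,144 addresses, bots and innocent hosts alike.
import ipaddress

net = ipaddress.ip_network("203.0.0.0/14")  # made-up example range
print(net.num_addresses)  # 262144
```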
“But all your stuff is MIT licensed! They are allowed to do that!”
Haha. As if they would care. They crawl everything they get their hands on.
Besides, that’s not true: the license states that the copyright notice must be retained. “AI” breaks that. They incorporate my code and my articles in their product and make it appear as if it was their work.
1. The load will become a problem at some point.
2. These crawlers and the current “AI” in general are breaking the rules. *I* am supposed to be paying for every little thing, *I* get sued for “piracy”. But apparently, these rules only apply to me. If I had more money, I could break them. Fuck that.
3. I simply don’t want it. Period.
This probably means that I can no longer host my own website. I don’t want to deploy something like Anubis, because that ruins the whole thing: I want it to be accessible from ancient browsers, like those on OS/2 or Windows 3.11.
I’ll keep an eye on it for a while. Maybe try to block some IPs.
Sooner or later, I’ll take the website down and shift everything to Gopher.
The bots have begun to access my website way more often. I’m getting about 120k hits on https://www.uninformativ.de/git/ now in a couple of hours.
They don’t cache anything, probably on purpose.
It comes in waves. I get about 100 hits (all at once) on that /git endpoint, all from different IPs. Then it takes a moment until I get another wave of about 500-1000 requests (all at once) where they do HEAD requests on some of the paths below /git. I assume they did a GET earlier and are now checking if something has changed.
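For comparison, this is roughly what such a change check looks like on the wire (a sketch of the pattern; the bots presumably do something similar, just in bulk from many IPs): a HEAD request returns only headers, so the client can compare validators against an earlier GET without re-downloading the body.

```python
# Sketch: re-check a page for changes via HEAD instead of a full GET.
import urllib.request

req = urllib.request.Request("https://www.uninformativ.de/git/", method="HEAD")
with urllib.request.urlopen(req) as resp:
    # If Last-Modified/ETag match the values seen on the earlier GET,
    # nothing has changed and no body needs to be fetched.
    print(resp.status, resp.headers.get("Last-Modified"), resp.headers.get("ETag"))
```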