# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 1
# self = https://watcher.sour.is/conv/opufrfa
I wonder if bento has slightly missed the key to being a total genius approach to host management. Ok, hear me out. Each node periodically pulls configuration from a coordination node that hosts a binary cache. The admin may make changes and pre-build them, maybe kick off an update task manually if they want, but the point is there's an automated check-in. In my case, the device I have available for coordination isn't really capable of hosting a binary cache for any of my other machines. The nix store for my dev machine is larger than the entire disk of the coordinator! And due to the yearly heat, my best machine can't be reliably powered on all the time. So I started thinking to myself, "self, what if instead of having a central coordinator we fetched configuration from a reliable git mirror (maybe git+torrent some day) and consumed it as a flake? The source could even be swapped out using a flake registry (so you don't even have to commit to self-hosting anything other than a JSON file). Then managed hosts only have to be set up to consume the registry and the shared flake (which registers the update agent), and DONE?"
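A minimal NixOS module sketch of what I mean, purely illustrative: the registry name `fleet`, the mirror URL, and the service/timer names are all assumptions, not anything bento actually ships. A registry entry points an indirect flake name at the git mirror, and a systemd timer periodically rebuilds the host from it.

```nix
# Hypothetical sketch, not bento's actual mechanism. Assumes the shared flake
# exposes nixosConfigurations.<hostname> and lives on some reliable git mirror.
{ config, pkgs, ... }:
{
  # Pin the indirect name "fleet" to the mirror. Swapping the source later only
  # means changing this registry entry (or the JSON file it is generated from).
  nix.registry.fleet = {
    from = { type = "indirect"; id = "fleet"; };
    to = {
      type = "git";
      url = "https://git.example.org/fleet-config.git";  # assumed mirror URL
      ref = "main";
    };
  };

  # The "update agent": periodically pull the shared flake through the
  # registry and switch to whatever configuration it defines for this host.
  systemd.services.fleet-update = {
    description = "Pull shared flake and switch configuration";
    serviceConfig.Type = "oneshot";
    path = [ pkgs.git pkgs.nix pkgs.nixos-rebuild ];
    script = ''
      nixos-rebuild switch --flake "fleet#${config.networking.hostName}"
    '';
  };
  systemd.timers.fleet-update = {
    wantedBy = [ "timers.target" ];
    timerConfig = {
      OnCalendar = "hourly";
      RandomizedDelaySec = "10m";
      Persistent = true;
    };
  };
}
```

The mirror can be anything that reliably serves git, so nothing in this setup needs the coordinator (or my best machine) to be powered on when a host checks in.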