# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 11
# self = https://watcher.sour.is/conv/rkioi3a
@prologic some of those language models take tens or hundreds of days to train on an NVIDIA A100 and use the electricity it would take to run a town. These results aren't replicable by anyone but the largest organizations.
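The "electricity to run a town" claim can be sanity-checked with back-of-envelope arithmetic. Every figure below is an assumption for illustration (board power, total GPU-days, household consumption), not a measured number from any particular training run:

```python
# Back-of-envelope energy estimate for a large training run.
# Assumed figures, for illustration only:
#   - an NVIDIA A100 draws roughly 400 W under load
#   - a large run costs on the order of 100,000 A100-days
#   - an average household uses about 10,000 kWh per year

GPU_WATTS = 400             # assumed A100 board power under load
GPU_DAYS = 100_000          # assumed total A100-days for one run
HOME_KWH_PER_YEAR = 10_000  # assumed annual household consumption

kwh_per_gpu_day = GPU_WATTS * 24 / 1000   # 9.6 kWh per A100-day
total_kwh = kwh_per_gpu_day * GPU_DAYS    # energy for the whole run
homes_for_a_year = total_kwh / HOME_KWH_PER_YEAR

print(f"{total_kwh:,.0f} kWh ~= {homes_for_a_year:,.0f} homes for a year")
```

Under these assumptions the run lands near a megawatt-hour scale that would indeed power a small town's worth of homes for the duration.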
@abucci I don't even care about "training" the models, I just want to be able to run them on my own hardware / infrastructure
@prologic the thing is, for anything specialized you usually get much better results by training your own model. So besides not being able to realistically run existing models on consumer-grade hardware, you have very little hope of customizing and fine-tuning models for your own use case by training them. It stinks.
Haven't found a 1RU Jetson AGX model I like yet, or I would have bought one already to fit into my Mills DC
@abucci Haha, I mean my 1RU Hypervisor nodes are about $5k each 🤣 -- And I've got 3 of them!