# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 11
# self = https://watcher.sour.is/conv/rkioi3a
@prologic some of those language models take tens or hundreds of days to train on an NVIDIA A100 and use the electricity it would take to run a town. These results aren't even replicable for all but the largest organizations.
@abucci I don't even care about "training" the models, I just want to be able to run them on my own hardware / infrastructure πŸ˜…
@prologic the thing is, to do anything specialized you usually get much better results if you train your own model. So besides not being able to realistically run existing models on consumer-grade hardware, you have very little hope of customizing and fine-tuning models for your own use case by training them. It stinks.
@abucci I'll bet one of these _could_ do the trick: AGX Inference Server - Connect Tech Inc. -- Unfortunately they're about $35-40k USD a piece πŸ˜… -- There are some other alternatives this company makes that are a bit more affordable for the average Joe -- and... another company makes a similar embedded product that's also quite affordable: Vision Box AI features NVIDIA’s 64GB AGX Orin GPU module
Haven't found a 1RU Jetson AGX model I like yet, or I would have bought one already to fit into my Mills DC πŸ˜…
@prologic only $35k? wow I'll take two!
@abucci Haha πŸ˜† I mean my 1RU Hypervisor nodes are about $5k each 🀣 -- And I've got 3 of them!