# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 6
# self = https://watcher.sour.is/conv/mk4mxkq
@prologic yeah. I'd add "Big Data" to that hype list, and I'm sure there are a bunch more that I'm forgetting.

On the topic of a GPU cluster, the optimal design is going to depend a lot on what workloads you intend to run on it. The weakest link in these things is the data transfer rate, but that won't matter too much for compute-heavy workloads. If your workloads are going to involve a lot of data, though, you'd be better off with a smaller number of high-VRAM cards than with a larger number of interconnected cards. I guess that's hardware engineering 101 stuff, but still...
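A quick way to see whether the interconnect is the weak link is to compare transfer time against compute time for a given workload. A rough sketch, with illustrative (not measured) numbers assuming a PCIe 4.0 x16-class link (~32 GB/s) and a 2019-era GPU sustaining roughly 14 TFLOPS fp32:

```python
# Back-of-envelope check: is a workload transfer-bound or compute-bound?
# All constants below are assumptions for illustration, not benchmarks.

def bottleneck(data_bytes, flops, link_bps=32e9, gpu_flops=14e12):
    """Compare time to move the data over the link with time to compute on it.

    link_bps:  assumed interconnect bandwidth (~PCIe 4.0 x16, 32 GB/s)
    gpu_flops: assumed sustained fp32 throughput of one card (~14 TFLOPS)
    """
    t_transfer = data_bytes / link_bps
    t_compute = flops / gpu_flops
    return "transfer-bound" if t_transfer > t_compute else "compute-bound"

# Moving 10 GB to do only 1 GFLOP of work: the link dominates.
print(bottleneck(10e9, 1e9))      # transfer-bound
# Same 10 GB but 100 TFLOPs of dense compute: the GPU dominates.
print(bottleneck(10e9, 100e12))   # compute-bound
```

This is why data-heavy workloads favor fewer high-VRAM cards (the data stays resident) while compute-heavy workloads tolerate the interconnect.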
Yeah it's something on my radar of things to do one day.

One of my use cases is to tag our growing photo library.
@prologic I'm a bit of a GPU junkie (😳) and I have three 2019-era GPUs lying around. One of these days when I have Free Time™ I'll put those together into some kind of cluster....