# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
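#
# Example (a minimal sketch; parameter names follow the options above,
# and the uri value is a hypothetical placeholder feed):
#   curl 'https://watcher.sour.is/api/plain/twt?uri=https://example.com/twtxt.txt&offset=0&limit=20'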
#
# twt range = 1 1
# self = https://watcher.sour.is/conv/tlfu6yq
Ask HN: What is the current (Apr. 2024) gold standard of running an LLM locally?
There are many options and opinions out there: what is the currently recommended approach for running an LLM locally (e.g., on my 3090 with 24 GB)? Are the options ‘idiot proof’ yet?
* * *
Comments URL: https://news.ycombinator.com/item?id=39893142
Points: 15
# Comments: 3