# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
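# The endpoints above take their parameters as a query string. A minimal
# sketch of building such a request URL in Python (the feed URI below is
# a hypothetical example, not a real user):

```python
# Build a query URL for the plain-text twt endpoints described above.
# Parameters (uri, offset, limit) come from the usage notes.
from urllib.parse import urlencode

def twt_query(base="https://watcher.sour.is/api/plain/twt",
              uri=None, offset=0, limit=20):
    """Return the URL for one page of twts, going back in time."""
    params = {"offset": offset, "limit": limit}
    if uri:
        params["uri"] = uri  # filter to a specific user's twts
    return f"{base}?{urlencode(params)}"

# Example: first five twts from one (hypothetical) feed.
url = twt_query(uri="https://example.com/twtxt.txt", offset=0, limit=5)
```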
# twt range = 1 5
# self = https://watcher.sour.is/conv/flmnaqq
I like this comment on Slashdot in the above link:
>LLMs don't have an understanding of anything. They can only regurgitate derivations of what they've been trained on and can't apply that to something new in the same ways that humans or even other animals can. The models are just so large that the illusion is impressive.
So true.
@eldersnake With enough data and enough computing power you can simulate almost anything, or create grand illusions that appear so real they're hard to tell apart 😅 -- But yes, at the end of the day LLM(s) today are just large probabilistic models, stochastic parrots.
They are pretty good at auto-complete, though. If you wire up Continue.dev in VSCode with a local Ollama-powered Codestral model, it's pretty decent. Or if you use the open-source-friendly Codeium.