# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 2
# self = https://watcher.sour.is/conv/fx2a2tq
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

> Human communication is oriented towards making sense out of what others are saying or writing, and so we have a strong tendency to find coherence and meaning even when they aren't there. In the case of text produced by an LM, they aren't there. An LM knows nothing more than probabilistic information about sequences of words in the corpus it was trained on. There is no communicative goal, no genuine meaning at all behind the text it produces: it is a stochastic parrot.
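
The "probabilistic information about sequences of words" point is easy to see at the smallest possible scale. Below is a minimal sketch (not from the paper or the post) of a bigram model: it records which words follow which in a corpus and then generates text purely by sampling successors at their observed frequencies. Modern LMs condition on vastly longer contexts with learned representations, but the generation principle, sampling the next token from conditional probabilities with no goal behind it, is the same.

```python
import random
from collections import defaultdict

# Build a bigram model: for each word, collect the words that follow it
# in the corpus. Repeats in the list encode corpus frequency.
def train(corpus: str) -> dict[str, list[str]]:
    words = corpus.split()
    successors = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        successors[prev].append(nxt)
    return successors

# Generate text by repeatedly sampling a successor of the last word.
# There is no communicative intent here, only conditional word statistics.
def parrot(successors: dict[str, list[str]], start: str, length: int = 20) -> str:
    out = [start]
    for _ in range(length - 1):
        choices = successors.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the parrot repeats the words the parrot has heard before"
model = train(corpus)
print(parrot(model, "the"))
```

The output often reads as locally plausible ("the parrot has heard before") exactly because we, the readers, supply the coherence; the model itself has none to offer.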