# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 8
# self = https://watcher.sour.is/conv/jsymjza
@bender I mean @movq is basically spot on! These pieces of crap are just large expensive statistical models that occasionally spit out something sort of valid for some reason, but otherwise are the most useless, idiotic, and inefficient things I've ever seen or used.
@prologic one can understand how LLMs work, and even train one (with the needed hardware), if interested (that, of course, will not happen if uninterested to begin with).

I think LLMs, and whatever they morph into, will be useful. They are also not going away. As for the energy consumption, well, some say it is a good thing, as it will speed up the transition to green energy usage. :-)
@bender Honest question (since I obviously know very little about this): Can you debug this? Let’s take the strawberry example. Can you pinpoint which bytes in your data/model/code/whatever are responsible for the answer “there are 2 Rs”, and then go ahead and fix them without affecting anything else?
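(For reference on the strawberry example: a one-line character count gives the right answer deterministically, which is exactly what an LLM, working on tokens rather than characters, fails to guarantee. A minimal sketch:)

```python
# Count the letter "r" in "strawberry" directly.
# Unlike an LLM, this is exact and debuggable: the logic is one line.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # → 3, not the "2 Rs" the model claimed
```
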
@movq I can't debug it, I don't have the knowledge to do it. One thing I have noticed about LLMs is that they don't excel at mathematics (duh!). They are better at providing information that already exists, sometimes even in a logical way, or at generating artistic output (poems, songs, letters, images, videos, etc.). They can even generate decent code! (Granted, not to be blindly trusted, but very helpful for a programmer, as it can speed up the development process.)

I have found other issues, like asking it who was the youngest person ever elected president of the United States. It often replies with wrong information to questions that should be trivial knowledge, and every once in a while it fully hallucinates.

The technology is rapidly evolving. I wouldn't just diss it and call it "pieces of crap" or worthless. There have been other technologies that performed badly in their infancy but got better, and more useful, over time.