# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
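#     Example (illustrative only; combining these parameters on /twt is assumed, :uri stands for a feed URI):
#         https://watcher.sour.is/api/plain/twt?uri=:uri&offset=0&limit=20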
# 
# twt range = 1 3
# self = https://watcher.sour.is/conv/pdp7oxq
AI isn’t a shortcut for thinking. In her guide for skeptics, Hilary Gridley reframes AI as a collaborator—not a replacement. Use it like spellcheck for your thoughts. Don’t fear it—iterate with it. Insight improves, speed follows. Full post: https://hils.substack.com/p/the-ai-skeptics-guide-to-ai-collaboration
But it is still a giant, inefficient use of resources and energy 🤣
@prologic Since you have to check and double-check everything it spits out (without it providing sources), I don’t find any of this helpful. It’s like having someone in the room with you who says random stuff that might or might not be correct. *At best*, it might spark some new idea in your head, and then you follow that idea the traditional way.

Information published on the internet (or anywhere, for that matter) was never guaranteed to be correct. But at least you had a “frame of reference”: “Ah, I read this information about Linux on a blog that usually posts about Windows, so this one single Linux post might not necessarily be correct.” That is completely lost with LLMs. It’s literally all mushed together. 🤷