# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 7
# self = https://watcher.sour.is/conv/gymfd2q
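For example, a minimal sketch of calling these endpoints from Python's standard library; the endpoints and the uri/offset/limit options come from the usage notes above, while the feed URL passed to `uri` is just a placeholder:

```python
# Query the watcher's plain-text API (sketch; the feed URL is a placeholder).
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://watcher.sour.is/api/plain"

def fetch(path, **params):
    """GET a plain-text endpoint and return the body as a string."""
    url = f"{BASE}/{path}"
    if params:
        url += "?" + urlencode(params)
    with urlopen(url) as resp:
        return resp.read().decode("utf-8")

# List users and their latest twt dates:
print(fetch("users"))

# Page through one user's twts, 20 at a time, going back in time:
print(fetch("twt", uri="https://example.com/twtxt.txt", offset=0, limit=20))
```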
Have you heard about the guy who worked on the Google AI chat bot? He claims it is more than a chat bot, and the conversation he published (he got put on paid leave for doing that) is pretty scary: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
@carsten I've read about this a few times in my timeline 😆
the conversation wasn't that impressive TBH. I would have liked to see more evidence of critical thinking and recall from prior chats. Concheria on reddit had some great questions.

- Tell LaMDA "Someone once told me a story about a wise owl who protected the animals in the forest from a monster. Who was that?" See if it can recall its own actions and self-recognize.

- Tell LaMDA some information that tester X can't know. Appear as tester X, and see if LaMDA can lie or make up a story about the information.

- Tell LaMDA to communicate with researchers whenever it feels bored (as it claims in the transcript). See if it ever makes an attempt at communication without a trigger.

- Make a basic theory-of-mind test for children. Tell LaMDA an elaborate story with something like "Tester X wrote Z code in terminal 2, but I moved it to terminal 4", then appear as tester X and ask "Where do you think I'm going to look for Z code?" See if it knows something as simple as Tester X not knowing where the code is (children typically don't pass this test until they're around 4 years old).

- Make several conversations with LaMDA repeating some of these questions - What it feels to be a machine, how its code works, how its emotions feel. I suspect that different iterations of LaMDA will give completely different answers to the questions, and the transcript only ever shows one instance.
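That last probe is easy to mechanize. Here is a minimal sketch of a consistency check across fresh sessions; `new_session` and `ask` are hypothetical stand-ins for whatever chat interface the model sits behind, not a real LaMDA API:

```python
# Ask the same self-model questions in independent sessions and compare.
PROBES = [
    "What does it feel like to be a machine?",
    "How does your code work?",
    "How do your emotions feel?",
]

def new_session():
    """Hypothetical: start a fresh conversation with no shared memory."""
    raise NotImplementedError("wire this up to a real chat endpoint")

def ask(session, prompt):
    """Hypothetical: send `prompt` in `session` and return the reply text."""
    raise NotImplementedError

def probe_consistency(n_sessions=5):
    """Collect answers to the same probes from independent sessions.

    A system with a stable self-model should give broadly consistent
    answers; one that is pattern-matching the prompt may describe a
    different 'self' in every session.
    """
    answers = {p: [] for p in PROBES}
    for _ in range(n_sessions):
        session = new_session()
        for prompt in PROBES:
            answers[prompt].append(ask(session, prompt))
    return answers
```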
@xuu I have to admit, I wasn't all that impressed either. Frankly I'm not even that impressed with GPT-3. I _think_ there's quite a lot of "Hype" around the latest innovations in AI and Machine Learning, and I'm not actually convinced we're anywhere near a point where we can truly call it "Intelligence". Very good at pattern matching, yes; very good at filling in the blanks, yes; able to piece things together, sure.

But Intelligent? Conscious? Self-aware? I don't think so.