# I am the Watcher. I am your guide through this vast new twtiverse.
#
# Usage:
# https://watcher.sour.is/api/plain/users View list of users and latest twt date.
# https://watcher.sour.is/api/plain/twt View all twts.
# https://watcher.sour.is/api/plain/mentions?uri=:uri View all mentions for uri.
# https://watcher.sour.is/api/plain/conv/:hash View all twts for a conversation subject.
#
# Options:
# uri Filter to show a specific user's twts.
# offset Start index for query.
# limit Count of items to return (going back in time).
#
# twt range = 1 42
# self = https://watcher.sour.is/conv/gz5euha
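# The endpoints and options above can be combined into simple requests; a minimal
# sketch (the feed URI below is a placeholder, and the live `curl` calls are
# commented out so nothing is fetched unless you choose to):

```shell
# Base path for the plain-text API, as listed in the header above.
BASE="https://watcher.sour.is/api/plain"

# Hypothetical feed URI, used only to illustrate the `uri` option.
URI="https://example.com/twtxt.txt"

# Fetch the 10 most recent twts (uncomment to hit the live service):
# curl -s "$BASE/twt?offset=0&limit=10"

# Fetch mentions for a specific feed (uncomment to hit the live service):
# curl -s "$BASE/mentions?uri=$URI&offset=0&limit=10"

# Show the request URL being built from the options:
echo "$BASE/twt?offset=0&limit=10"
```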
After seeing some ChatGPT interactions I believe all doomsday AI scenarios are stupid and I also believe it's impossible for an intelligent creature to create a _creature_ more intelligent than itself.
@adi Interesting viewpoint 🤔 Also welcome back! 🤗
This is a case of GIGO, right? Garbage In, Garbage Out? I mean, the hype around these stupid LLMs (Large Language Models) is just that: a trained model. It will spit out stuff based on the patterns it has already learned. Right @abucci? 🤔 (who is more knowledgeable about this than I am) -- I have yet to see anyone come even remotely close to the kind of intelligence we see in sci-fi films, this so-called AGI?
@prologic mmk-de @adi It's a case of all kinds of things. ChatGPT is being pushed very hard by OpenAI right now. You're seeing it in the news so much because they are spending endless sums of money to get it in front of your face. It's marketing. Nothing more. You should ignore it just like you ignore other advertisements. It's not even GIGO; it's just G.
As far as artificial general intelligence goes, we (humans) don't know what it'll take to make one of those. We don't even know what the word "intelligence" means.
The field of AI has been filled top to bottom, since its inception in the 1950s or so, with the strange belief that you can just break "intelligence" down into small parts--a module that can do math, a module that can play chess, a module that can emit language--and that somehow magically those parts, working together, will exhibit what we consider to be intelligent behavior. Minsky's book/theory *The Society Of Mind* articulated that view clearly in the 1980s. I, personally, think it's transparently a bunch of horseshit, but who am I 🤷 Anyway, that's the "dream" these people are chasing, I think.
There's a much longer history here of humanity trying to breathe life into inanimate matter. The notion of a golem from Jewish folklore has this quality, for instance, as does the story of Galatea, an ivory statue made by Pygmalion that comes to life.
So dig a bit deeper, and what I think you see is that these Silicon Valley types are so ignorant of the humanities that they are re-creating, in ignorance, myths and legends and stories from long ago, except with computers as the main characters.
@prologic @adi Remember that OpenAI exists to hype itself. That's its real business. When I was toying with my startup back in 2016, OpenAI was hyping their language models as something that would revolutionize life as we know it. They even went so far as to claim it was so revolutionary they were hesitating to release it because they were afraid of the dangers it posed. Which was horseshit, and they released GPT soon after.
ChatGPT is the same game, played with the same toys. Hype. You and your brain will be better off if you tune it out, ignore it, block it, filter it out of your feeds, etc, just like you would with any other type of marketing hype about any other product.
@abucci You are right, of course. I don't think we can consider anything thus far to be remotely close to "intelligence". It actually frustrates me that we call these fields "AI"; we should call them what they are, "machine learning". They're just fancy algorithms, many of which are pretty good at "pattern matching".
As for what we define as "intelligence", fucked if I know 😅
I doubt anyone else can define this either. I tend to believe that until we figure out how to create "something" that has a sense of self-awareness and self-growth, and a way to expand and "reprogram" itself, we'll never get very far. Really, "evolutionary life" or "artificial life" simulations are much closer, I think.
Say what you want; I speak for myself. People much, much smarter than me are working on this. Not one, but many. AI will have its uses, which will increase, and it will get better. It is barely in its infancy.
About perceived impossibilities: we are very good at achieving things that previously seemed impossible.
@bender True and good points, my only gripe, though, is that we should call it what it _really_ is 😅
@bender @prologic The "smarter people than I am are doing this" argument is a way of giving up.
AI is not in its infancy. Alan Turing wrote about what we now call AI in the 1940s/1950s, almost three quarters of a century ago. Some of what he wrote is how it still works today, in spite of all the "smart people working on it".
Back then, the first prototype transistor was being created. It was the size of your fist, more or less. Now, we can cram tens of billions of them into a square centimeter of silicon. If AI had "progressed" similarly, we'd have had walking talking robots around us long ago. Instead we have toys and marketing hype.
It's important to see it for what it is and not accept the marketing pitch. We've all been inundated with technoutopian visions that never come to fruition, over and over again. It's time to be skeptical, and to demand better.
@prologic I tend to agree with you, and it's one of the reasons why I did evolutionary computation in my PhD. My PhD advisor was big on the idea that even though we don't know what life or intelligence is well enough to make it from scratch, maybe we can set up an artificial world in which (simulated) life can "emerge" from the primordial soup, so to speak. I thought that idea was pretty compelling and I worked on it for a while. It's why I, too, am frustrated by the term "AI" and how it's slapped onto anything these days. Some of the stuff that people call AI right now would have been called "an algorithm" or "a computer program" not so long ago 😅
@abucci Oh, I don't accept the marketing hype at all. The thing I always fall back on is the insane amount of power it takes to run these fucking stupid-ass models, which are nothing more than "algorithms" (okay, admittedly a bit fancier than the ones from a few decades ago, but mostly based on the same mechanics) that take data in and spit data out. The shocking part for me is comparing the insane power and energy requirements of even the largest "AI" models in the world with the energy/power requirements of running (for example) the brain of a rat.
Basically what I'm trying to say is this: if it takes multiple gigawatts of power to run even the "smartest" and "most useful" AI models today, we're fucked.
Wake me up when we can run these LLMs and similar models on the energy requirements of a Raspberry Pi 🤣
@prologic Ha, exactly! If you could do the kind of stuff that ChatGPT does on a Raspberry Pi of a few years ago, that'd be an amazing accomplishment.
As it stands, they're just throwing raw compute power and vast volumes of data at the problem. Of course interesting shit is going to pop out. You know you're getting somewhere when the capabilities increase while the power consumption stays the same or decreases.
@abucci True, there's no argument that there will be some "utility" from these LLMs -- it will be even more useful when most folks can run them (maybe at a smaller scale) on "edge" computing with modest hardware.
How come "Speed Bump" is not an "AI"?!
My view is, if we ever get to a point where a true "AI" can be created, something that can entirely learn new concepts by itself and exponentially expand its own knowledge base without being told to do so (basically what I would consider sentient at that point), humans won't know about it until it's significantly too late to stop it. I think that's where the general hysteria comes from, but for now I'll use these LLMs to spit out lists of cyber security controls to make my work _that_ little bit easier.