It's so fucked up it's hard to put into words how fucked up it is. And this isn't the only bad idea he bangs on about!
> Taking Jordan Peterson as an example, the only thing he “preaches” (if you want to call it that) is to be honest with yourself and to take responsibility.
This is simply untrue. Read the articles I posted, seriously.
In a tweet in one of the articles I posted, Peterson states there is no white supremacy in Canada. This is blatantly false. It is disinformation. Peterson has made statements that rape is OK (he uses "fancy" language like "women should be naturally converted into mothers" but unpack that a bit--what he means is legalized rape followed by forced conception). He is openly anti-LGBTQ and refuses to use people's preferred pronouns. He seems to believe that women who wear makeup at work are asking to be sexually harassed.
He's using his platform in academia to pretend that straight, white men are somehow the most aggrieved group in the world and everyone else is just whining and can get fucked. The patron saint of Men's Rights Activists and incels. I find him odious.
- 12 Reasons Why No One Should Ever Listen to Jordan Peterson Ever Again
- Why Jordan Peterson Is Always Wrong
- Here's why Jordan Peterson is the f*cking worst.: "his ideology quickly morphed into one that reinforces hatred, discrimination, and the oppression of marginalized groups"
- A History of Piers Morgan’s Terrible Opinions (Angry White Men, Mar. 30, 2016)
- Shut Up, Piers
- Piers Morgan Is Now an Asshole of Record-Breaking Proportions
You're posting Piers Morgan/Jordan Peterson videos lmao???
> It's annoying to see millions of dollars thrown at making more-or-less literal dupes of internet companies that everyone is already using begrudgingly and with diminishing emotional returns. It's maybe more frustrating to realize that the goals of these companies is the same as their predecessors, which is to make the internet smaller.
I was never contacted to ask if I was OK with Google extracting a summary of my blog post and sticking it on their web site. There is a very clear copyright designation at the bottom of each page, including that one. So, by putting their own brand over my text, they violated my copyright. Straightforward theft right there.

No, they can't possibly be that foolish. They are going to be doing some amount of content moderation. Just not of Nazis, fascists, or far right reactionaries. Which to me means they want that content on there.
CEO is a cryptocurrency fool, as is Jack Dorsey, so I don't expect much from it. Then again I'm old and refuse to join any new hotness so take my curmudgeonly opinions with a grain of salt.
I read somewhere or another that the "decentralization" is only going to be there so that they can push content moderation onto users. They will happily welcome Nazis and fascists, leaving it up to end users to block those instances.
I wonder how they plan to handle the 4chan-level stuff, since that will surely come.
> I say “ostensibly decentralized”, because BlueSky’s (henceforth referred to as “BS” here) decentralization is a similar kind of decentralization as with cryptocurrencies: sure, you can run your own node (in BS case: “personal data servers”), but that does not give you basically any meaningful agency in the system.
I don't know why anyone would want to use this crap. It's the same old same old and it'll end up the same old way.

Other than access to the data behind it, Microsoft has nothing special that allows it to create something like Copilot. The technology behind it has been around for at least a decade. There could be a "public" version of this same tool made by a cooperating group of people volunteering, "leasing", or selling their source code into it. There could likewise be an ethically-created corporate version. Such a thing would give individual developers or organizations the choice to include their code in the tool, possibly for a fee if that's something they want or require. The creators of the tool would have to acknowledge that they have suppliers--the people who create the code that makes their tool possible--instead of simply stealing what they need and pretending that's fine.
This era we're living through, with large companies stomping over all laws and regulations, blatantly stealing other people's work for their own profit, cannot come to an end soon enough. It is destroying innovation, and we all suffer for that. Having one nifty tool like Copilot that gives a bit of convenience is nowhere near worth the tremendous loss that Microsoft's actions in this instance are creating for everyone.
In short, it's a gigantic resource-grab. They're greedy assholes taking advantage of the hard work of millions of people without giving a single cent back to any of them. I hope they're sued so often that this product is destroyed.
Are you really so desperate to make this point that you're citing _Quora_??? Believe what you want to believe.
> There is (I assure you there will be, don’t know what it is yet…) a price to be paid for this convenience.
Exactly, prologic, and that's why I'm negative about these sorts of things. I'm almost 50, I've been around this tech hype cycle a bunch of times. Look at what happened with Facebook. When it first appeared, people loved it and signed up and shared incredibly detailed information about themselves on it. Facebook made it very easy and convenient for almost anyone, even people who had limited understanding of the internet or computers, to get connected with their friends and family. And now here we are today, where 80% of people in surveys say they don't trust Facebook with their private data, where they think Facebook commits crimes and should be broken up or at least taken to task in a big way, etc etc etc. Facebook has been fined many billions of dollars and faces endless federal lawsuits in the US alone for its horrible practices. Yet Facebook is still exploitative. It's a societal cancer.
All signs suggest this generative AI stuff is going to go exactly the same way. That is the inevitable course of these things in the present climate, because the tech sector is largely run by sociopathic billionaires, because the tech sector is not regulated in any meaningful way, and because the tech press / tech media has no scruples. Some new tech thing generates hype, people get excited and sign up to use it, then when the people who own the tech think they have a critical mass of users, they clamp everything down and start doing whatever it is they wanted to do from the start. They'll break laws, steal your shit, cause mass suffering, who knows what. They won't stop until they are stopped by mass protest from us, and the government action that follows.
That's a huge price to pay for a little bit of convenience, a price we pay and continue to pay for decades. We all know better by now. Why do we keep doing this to ourselves? It doesn't make sense. It's insane.
> I have to write so many emails to so many idiots who have no idea what they are doing
So it sounds to me like the pressure is to reduce how much time you waste on idiots, which to my mind is a very good reason to use a text generator! I guess in that case you don't mind too much whether the company making the AI owns your prompt text?
I'd really like to see tools like this that you can run on your desktop or phone, so they don't send your hard work off to someone else and give a company a chance to take it from you.
(1) You go to the store and buy a microwave pizza. You go home, put it in the microwave, heat it up. Maybe it's not quite the way you like it, so you put some red pepper on it, maybe some oregano.
Are you a pizza chef? No. Do we know what your cooking is like? Also no.
(2) You create a prompt for StableDiffusion to make a picture of an elephant. What pops out isn't quite to your liking. You adjust the prompt, tweak it a bunch, till the elephant looks pretty cool.
Are you an artist? No. Do we know what your art is like? Also no.
The elephant is "fake art" in a similar sense to how a microwave pizza is "fake pizza". That's what I meant by that word. The microwave pizza is a sort of "simulation of pizza", in this sense. The generated elephant picture is a simulation of art, in a similar sense, though it's even worse than that and is probably more of a simulacrum of art since you can't "consume" an AI-generated image the way you "consume" art.
A computer has no inner life, no feelings, no experience of the world. It is not sentient. It has no life. There's nothing "in" there for it to express. It's just generating pixels in patterns we've learned to recognize. These AI technologies are carefully crafted to fool people into experiencing the things they experience when they look at human-made art, but it is an empty experience.
Terrifying. Elasticsearch is celebrating that they're going to send your private data to OpenAI? No way.
On the topic of a GPU cluster, the optimal design is going to depend a lot on what workloads you intend to run on it. The weakest link in these things is the data transfer rate, but that won't matter too much for compute-heavy workloads. If your workloads are going to involve a lot of data, though, you'd be better off with a smaller number of high-VRAM cards than with a larger number of interconnected cards. I guess that's hardware engineering 101 stuff, but still...
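To make that concrete, here's a rough back-of-envelope sketch in Python. Every number in it is a made-up assumption (a ~16 GB/s PCIe-class link, a ~10 TFLOPS card), not a benchmark; the point is just to show how quickly the interconnect becomes the bottleneck once the data volume grows:

```python
# Back-of-envelope: does compute or data transfer dominate a GPU job?
# All numbers below are illustrative assumptions, not benchmarks.

def transfer_time_s(data_gb: float, link_gb_per_s: float = 16.0) -> float:
    """Seconds spent moving data over the interconnect (PCIe-class link assumed)."""
    return data_gb / link_gb_per_s

def compute_time_s(flop: float, gpu_flop_per_s: float = 1e13) -> float:
    """Seconds the GPU spends computing, assuming a ~10 TFLOPS card."""
    return flop / gpu_flop_per_s

# Compute-heavy job: tiny dataset, lots of math -> the link barely matters.
print(transfer_time_s(1))       # ~0.06 s to move 1 GB
print(compute_time_s(1e15))     # ~100 s of compute

# Data-heavy job: the link dominates, which is why fewer cards with more
# VRAM (keeping the data resident) can beat many interconnected cards.
print(transfer_time_s(500))     # ~31 s just moving 500 GB
print(compute_time_s(1e14))     # ~10 s of compute
```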
Two things about that:
1. No you're not. If you have good soft skills (good communication, showing up on time, general time management) then you're already in excellent shape. No AI can do that stuff, and for that alone no AI can replace people.
2. This rhetoric is coming directly from the billionaires who are laying off tech people by the 100s of thousands as part of the class war they've been conducting against all working people since the 1940s. They want you to believe that you have to scramble and claw over one another to learn the "AI" that they're forcing onto the world, so that you stop honing the skills that matter (see #1) and are easier to obsolete later. Don't fall for it. It's far from clear how this will shake out once governments get off their asses and start regulating this stuff, by the way--most of these "AI" tools are blatantly breaking copyright and other IP laws, and some day that'll catch up with them.
That said, it is helpful to know thy enemy.
> I still think it would be better to put the burden of liability on the users – no matter if they’re private individuals or big companies.
Before seatbelts and other safety equipment were required in cars by law, what you say above was the exact argument carmakers used against adding safety measures: the responsibility should be put onto the drivers--the users of cars--not the car manufacturers. Many people died needlessly, compared to today. Is this *really* the position you're taking?
> How do you really know if a project has been used in dangerous situations? (If this changes in the future, are programmers that contributed in the past – when this project was not yet used in dangerous situations – also liable?)
Trust me, if people got sued or went to jail, the tech industry would figure out really fast how to make these determinations. The only reason this is puzzling at all is that software development is almost entirely unregulated, and has enjoyed the equivalent of a child's life, without a care in the world.
But really, it's a silly question isn't it? You're supposed to list the licenses of open source software you use in your projects. Devices and systems that have caused harm are documented by the legal system, by regulatory regimes, by people who've been harmed, etc. All the necessary data is there to connect the dots. Those dots aren't usually connected, though, because people pretend that software developers should be free of responsibility.
>> Firstly, contributing software to an open source project cannot be a blanket “get out of jail free” card. That’s a sociopathic stance, on its face, and just cannot be accepted.
> I don’t understand. Why is that sociopathic? (Language barrier here? I really don’t get what you mean.)
Imagine an open source software project that is designed, from day 1, to produce software to drive a planet-destroying weapon. The fact that it is an open source project does not allow the software developers involved to freely make the software for the planet-destroying weapon without any responsibility for the consequences of using the weapon. They are directly involved in an activity that will destroy the planet, and they should be treated as such.
That is extreme, obviously, but the point is that there is a line somewhere. A hobby project is obviously not dangerous to anyone. A planet-destroying weapon is. It is sociopathic--literally, deadly to society--to pretend otherwise. In *all other spheres of life*, we are careful to distinguish which behaviors are dangerous from which behaviors are not. Why should open source software development be any different?
It should not be different. Some open source software development is dangerous, and should be treated appropriately.
Firstly, contributing software to an open source project cannot be a blanket "get out of jail free" card. That's a sociopathic stance, on its face, and just cannot be accepted.
Secondly, the fact that software licenses state that the software is provided without warranty/liability is meaningless until those clauses are tested in court cases. If judges say "bullshit" to the "no warranty" clauses, and hold developers accountable anyway, then those clauses become meaningless (at least in the US, where case law and precedent matter).
But thirdly, and most importantly, there is always context that absolutely has to be taken into consideration. Sure, you'd be foolish to jump into a random person's for-rent car thinking it'll be a good ambulance. But if the car has "Ambulance" painted on it, and the driver repeatedly tells you they also drive ambulances for the city hospital, and there's a siren on top, that person can and should be held liable for falsely presenting themselves as an ambulance. Even if they do have a tiny little note somewhere that says "not an actual ambulance".
And the same should happen in software. If people are working on an open source project that has been used in dangerous situations, and they are fully aware that this could happen again, then they absolutely should face liability if their code kills somebody (for instance). We literally do this *in almost every other aspect of life*, so why should software developers be free from all responsibility? Engineers who design buildings have to take out liability insurance because they can be personally sued if their designs cause harm. Doctors take out malpractice insurance in case their advice causes harm. But software developers get to commit all manner of bullshit, and never face any consequences? No way, that's stupid.
Also, open source provides an avenue for companies to launder their own responsibilities. That loophole should be closed.
Anyway, it's not an open and shut case of "absolutely no liability for open source developers ever." Frankly, software quality would improve tenfold virtually overnight if developers knew they could be sued for doing lousy work. That's not a "chilling effect", it's responsible regulation of potentially dangerous products.
And after all that, a bunch of fucking R&D scientists throw their shit into ChatGPT and leak it to the entire world. 🤦‍♂️ Like wtf???
🤦‍♂️ what is the matter with people
I've been avoiding news about the musk swamp, but this one feels like a pretty big deal. I believe NPR is the first significant news organization to leave Twitter--52 accounts in all.

`nix` is distributed for Linux and MacOS. You're on your own if you want to try it on *BSD, but you can find people who've said they've pulled that off if you search. I use `nix` to install stuff I want to play with, and that should work fine for everyone if you're into `nix`.

`diff` and `patch` are straightforward. In pijul, commits *are* patches.
This guy put a prompt into ChatGPT and recorded its output. Then he shuffled the words in the prompt text in such a way that the prompt was completely unintelligible to a human being, but still retained some of the correlational statistics of the words. He fed that as a prompt into ChatGPT. The result? Output that was almost identical to the original output.
Gibberish input into ChatGPT will produce coherent-sounding text just as well as a carefully-crafted prompt. This is such a nice and simple demonstration that ChatGPT has no "intelligence" of any kind built into it.
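If you want to try a version of this yourself, here's a minimal sketch. Two loud caveats: the original experimenter's exact shuffling method isn't described here, so this naive whole-prompt shuffle (which does *not* preserve the correlational statistics he mentions) is just a stand-in, and `query_chatgpt` is a hypothetical placeholder for whatever API client you use:

```python
# Minimal sketch for reproducing the scrambled-prompt test.
# Assumptions: a naive word shuffle stands in for the original method,
# and query_chatgpt() is a hypothetical API wrapper, not a real library.
import random

def scramble(prompt: str, seed: int = 0) -> str:
    """Shuffle a prompt's words so it's unintelligible to a human reader."""
    words = prompt.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

prompt = "Explain in simple terms why the sky appears blue during the day."
scrambled = scramble(prompt)
print(scrambled)  # e.g. "appears the in blue why simple sky Explain ..."

# response_a = query_chatgpt(prompt)     # hypothetical API call
# response_b = query_chatgpt(scrambled)  # then compare the two outputs
```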
Like, if you type "The dog is", autocomplete will suggest some words for you that are likely to come next. Maybe "barking", "wet", "hungry", ... It'll rank those by how high a probability it rates each follow-up word. It'll probably not suggest words like "uranium" or "quickly", because you very rarely if ever encounter those words after "The dog is" in English sentences, so their probability is very low.
👆 That's the "autoregressive" part.
It gets these probabilities from a "language model", which is a fancy way of saying a table of probabilities. A literal lookup table of the probabilities would be wayyyyy too big to be practical, so neural networks are often used as a representation of the lookup table, and deep learning (many-layered neural networks + a learning algorithm) is the hotness lately so they use that.
👆 That's the "language model" part.
So, you enter a prompt for ChatGPT. It runs fancy autocomplete to pick a word that should come next. It runs fancy autocomplete again to see what word will come next *after the last word it predicted and some of your prompt words*. Repeat to generate as many words as needed. There's probably a heuristic or a special "END OF CHAT" token to indicate when it should stop generating and send its response to you. Uppercase and lowercase versions of the tokens are in there so it can generate those. Punctuation is in there so it can generate that. With a good enough "language model", it'll do nice stuff like close parens and quotes, correctly start a sentence with a capital letter, add paragraph breaks, and so on.
There's really not much more to it than that, aside from a crapton of engineering to make all that work at the scale they're doing it.
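Here's a toy sketch of that loop in Python, assuming a literal lookup table of next-word probabilities (a tiny hand-made bigram table, nothing like the real thing). A real system swaps the table for a giant neural network conditioned on far more context, but the generation loop is conceptually the same:

```python
# Toy autoregressive generation with a literal probability table.
# The table is hand-made for illustration; real systems replace it
# with a neural network over much longer contexts.
import random

# P(next word | previous word), with "<END>" as a stop token.
model = {
    "the":     {"dog": 0.6, "cat": 0.4},
    "dog":     {"is": 1.0},
    "cat":     {"is": 1.0},
    "is":      {"barking": 0.5, "wet": 0.3, "hungry": 0.2},
    "barking": {"<END>": 1.0},
    "wet":     {"<END>": 1.0},
    "hungry":  {"<END>": 1.0},
}

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.lower().split()
    while len(words) < max_words:
        choices = model.get(words[-1])
        if not choices:          # word not in the table: nothing to predict
            break
        nxt = random.choices(list(choices), weights=list(choices.values()))[0]
        if nxt == "<END>":       # stop token, like the "END OF CHAT" idea above
            break
        words.append(nxt)        # feed the prediction back in: "autoregressive"
    return " ".join(words)

print(generate("The dog"))       # e.g. "the dog is barking"
```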
The best description I've heard of it is that it's an extremely complicated autocomplete. The way it (most likely) works is that it reads through the sequence of text a user enters (the prompt), and then begins generating the next words that it deems likely to follow the prompt text. Very much how autocomplete on a smartphone keyboard works. It's a generative model, which means the neural network is probably being trained to learn the mean, standard deviation, and possibly other statistics about some probabilistic generative model (undescribed by OpenAI to my knowledge). There were some advances in LSTM around the time GPT was becoming popular, so it's possible they use a variant of that.
Hope that suffices for now!
In case you didn't realize what a steaming bag of garbage Elon Musk is, and how he turns everything he touches into a pile of same, this report details how Tesla cars record video both *inside* and outside of the cars and upload that to Tesla Inc. Workers there watched the videos and ridiculed their customers.
1. Chris Murphy, a Senator from Connecticut in the US, tweeted uninformed alarmism about AI. Mitchell responded to him on Twitter, and he had a temper tantrum about it that was noticed by major media outlets. Not looking good for our politicians taking an informed stance on this stuff
2. She also reacted to the bullshit "letter" that the Future of Life Institute put out calling for a "pause" on AI research. She basically concludes it cannot be taken at face value
3. Time Magazine, a major publication in the US, published an opinion piece by a well-known kook in the AI world, a guy who has been claiming for decades that AI is going to kill us all. She calls out Time for being irresponsible like that
4. She gives a few thoughts on what she thinks we should *actually* worry about, as realistic people who aren't mind-poisoned by the AI hype. Amplifying bias, and amplifying misinformation and disinformation
Running through the post is a theme that "Artificial General Intelligence" is a misleading, bad term that serves the interests of a lot of bad actors while misinforming the public.
Read the post though, it's chock full of countless useful insights, facts and asides that this summary can't possibly do justice to.
An excellent podcast episode dissecting the hype around ChatGPT and helping to ground us in a more realistic understanding of what it is and isn't.
I'm a big fan of Melanie Mitchell, and her run-down of the recent AI hypefest is refreshingly clear-headed as usual.