# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 2032
# self = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1732
# next = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1832
# prev = https://watcher.sour.is?uri=https://anthony.buc.ci/user/abucci/twtxt.txt&offset=1632
@prologic yeah man, of course!
@prologic https://twitter.com/ardenthistorian/status/1625653951776292864?lang=en
@prologic When you unpack what he's saying in that video (which I've watched, and just now re-watched), and strip away all his attempts to wrap this idea in fancy-sounding language, he is saying: it would be better if women were viewed as property of men, because then if they were raped, the men who owned them would get mad and do something about it. Because rape would be a property crime then, like trespassing or theft. Left unspoken by him, but very much known to him, is that the man/men who "own" a woman can then have their way with her, just like they can freely walk around their yard or use their own stuff. In his envisioned better world, it'd be impossible for a husband to rape his wife, for instance, because she is his property and he can do almost anything he wants (that's literally what "property" is in Western countries).

It's so fucked up it's hard to put into words how fucked up it is. And this isn't the only bad idea he bangs on about!
@prologic Maybe so, but that's not because of the people who are objecting to Jordan Peterson, that's for sure. You really need to read the articles I've posted before going there. Really.
@prologic It went there because you are supporting bad people who themselves operate at the level of outrage. You cannot have a "debate" about the ideas of someone like Peterson or Shapiro, *because those ideas should not be considered debate-worthy*. Rape is not OK, period, the end. It is not up for debate or discussion. Yet Peterson acts as if it is. That is abhorrent, and unacceptable in 2023.
@prologic Because they are rightwing assholes with a huge platform and they are literally *HURTING PEOPLE*. People get attacked because of things people like Shapiro and Peterson say. This is not just idle chitchat over coffee. They are saying things like it's OK to rape women (and NO I am not going to dig out the videos where they say that --that's up to YOU to do, do your own homework before defending these ghouls).
@prologic
> Taking Jordan Peterson as an example, the only thing he “preaches” (if you want to call it that) is to be honest with yourself and to take responsibility.

This is simply untrue. Read the articles I posted, seriously.

In a tweet in one of the articles I posted, Peterson states there is no white supremacy in Canada. This is blatantly false. It is disinformation. Peterson has made statements that rape is OK (he uses "fancy" language like "women should be naturally converted into mothers" but unpack that a bit--what he means is legalized rape followed by forced conception). He is openly anti-LGBTQ and refuses to use people's preferred pronouns. He seems to believe that women who wear makeup at work are asking to be sexually harassed.

He's using his platform in academia to pretend that straight, white men are somehow the most aggrieved group in the world and everyone else is just whining and can get fucked. The patron saint of Men's Rights Activists and incels. I find him odious.
@prologic nah, not inclined to do that. The articles suffice--have a read of those when you get the chance.
@prologic I've read half, skimmed the others. Mostly I was going for scale--look at all those headlines. These are horrible people who say horrible things on a regular basis.
@prologic omg yes! They are both ultra-right-wing assholes! The worst of the worst! Please tell me you don't listen to these guys' brain poison?
@prologic
- 12 Reasons Why No One Should Ever Listen to Jordan Peterson Ever Again
- Why Jordan Peterson Is Always Wrong
- Here's why Jordan Peterson is the f*cking worst.: "his ideology quickly morphed into one that reinforces hatred, discrimination, and the oppression of marginalized groups"

- A History of Piers Morgan’s Terrible Opinions
- Shut Up, Piers
- Piers Morgan Is Now an Asshole of Record-Breaking Proportions

You're posting Piers Morgan/Jordan Peterson videos lmao???
The Internet Isn't Meant To Be So Small | Defector

> It's annoying to see millions of dollars thrown at making more-or-less literal dupes of internet companies that everyone is already using begrudgingly and with diminishing emotional returns. It's maybe more frustrating to realize that the goals of these companies is the same as their predecessors, which is to make the internet smaller.
I have no interest in doing anything about it, even if I had the time (which I don't), but this kind of thing happens all day, every day to countless people. My silly blog post isn't worth getting up in arms about, but there are artists and other creators who pour countless hours, heart and soul into their work, only to have it taken in exactly this way. That's one of the reasons I'm so extremely negative about the spate of "AI" tools that have popped up recently. They are powered by theft.
There's a link to the blog post, but they extracted a summary in hopes of keeping people in Google properties (something they've been called out on many times).

I was never contacted to ask if I was OK with Google extracting a summary of my blog post and sticking it on the web site. There is a very clear copyright designation at the bottom of each page, including that one. So, by putting their own brand over my text, they violated my copyright. Straightforward theft right there.
Looks like Google's using this blog post of mine without my permission. I hate this kind of tech company crap so much.

Do they legitimately believe that end users will encounter videos of gruesome murders, live streams of school shootings, etc etc etc, and be like "oh, tee hee hee, that's not what I want to see! I'd better block that!" and go about their business as usual?

No, they can't possibly be that foolish. They are going to be doing some amount of content moderation. Just not of Nazis, fascists, or far right reactionaries. Which to me means they want that content on there.
@prologic I know very little about it, but speaking secondhand, it looks like there's a single centralized server now and they're still building the ability to federate? Like, the current alpha they're running is not field testing federation, which makes me think that's not a top priority for them.
I've seen BlueSky referred to as BS (as in Blue Sky, but you know...), which seems apt.

CEO is a cryptocurrency fool, as is Jack Dorsey, so I don't expect much from it. Then again I'm old and refuse to join any new hotness so take my curmudgeonly opinions with a grain of salt.

I read somewhere or another that the "decentralization" is only going to be there so that they can push content moderation onto users. They will happily welcome Nazis and fascists, leaving it up to end users to block those instances.

I wonder how they plan to handle the 4chan-level stuff, since that will surely come.
@Phys_org using the phrase "machine learning" in this article is misleading and bandwagoning. They used a neural model, something neuroscientists were doing long before "machine learning" became a popular term.
@prologic yes, I agree. It's bizarre to me that people use the thing at all let alone pay for it.
I get that there are groups of people who don't have many good options besides Bluesky, so mostly this is griping about how bad social media is generally, and how the lousy people in charge continue to be in charge.
BlueSky is cosplaying decentralization

> I say “ostensibly decentralized”, because BlueSky’s (henceforth referred to as “BS” here) decentralization is a similar kind of decentralization as with cryptocurrencies: sure, you can run your own node (in BS case: “personal data servers”), but that does not give you basically any meaningful agency in the system.

I don't know why anyone would want to use this crap. It's the same old same old and it'll end up the same old way.
@movq
There is a "right" way to make something like GitHub CoPilot, but Microsoft did not choose that way. They chose one of the most exploitative options available to them. For that reason, I hope they face significant consequences, though I doubt they will in the current climate. I also hope that CoPilot is shut down, though I'm pretty certain it will not be.

Other than access to the data behind it, Microsoft has nothing special that allows it to create something like CoPilot. The technology behind it has been around for at least a decade. There could be a "public" version of this same tool made by a cooperating group of people volunteering, "leasing", or selling their source code into it. There could likewise be an ethically-created corporate version. Such a thing would give individual developers or organizations the choice to include their code in the tool, possibly for a fee if that's something they want or require. The creators of the tool would have to acknowledge that they have suppliers--the people who create the code that makes their tool possible--instead of simply stealing what they need and pretending that's fine.

This era we're living through, with large companies stomping over all laws and regulations, blatantly stealing other people's work for their own profit, cannot come to an end soon enough. It is destroying innovation, and we all suffer for that. Having one nifty tool like CoPilot that gives a bit of convenience is nowhere near worth the tremendous loss that Microsoft's actions in this instance are creating for everyone.
@carsten That's a dissembling answer from him. Github is owned by Microsoft, and CoPilot is a for-pay product. It would have no value, and no one would pay for it, were it not filled with code snippets that no one consented to giving to Microsoft for this purpose. Microsoft will pay $0 to the people who wrote the code that makes CoPilot valuable to them.

In short, it's a gigantic resource-grab. They're greedy assholes taking advantage of the hard work of millions of people without giving a single cent back to any of them. I hope they're sued so often that this product is destroyed.
@thecanine wow this is horrifying. What happened to Opera? It used to be my favorite browser but now they're like that one cousin who started getting into drugs, and then got in trouble with the law, and then before you know it they're scamming old ladies out of their pension money.
@darch Made up is not the same as lie. That's obvious, isn't it?!?!
@darch So a fiction novel, which is labelled "fiction", is a lie? I still don't understand. The word "lie" entails an intention to deceive, but fiction writing does not intend to deceive.
@carsten You are conflating "aiming your eyes at" with "viewing art". These are fundamentally different activities.
@carsten Animals have inner lives. Computers do not.

Are you really so desperate to make this point that you're citing _Quora_??? Believe what you want to believe.
@darch What do you mean when you say that art is a lie?
@prologic @carsten
> There is (I assure you there will be, don’t know what it is yet…) a price to be paid for this convenience.

Exactly prologic, and that's why I'm negative about these sorts of things. I'm almost 50, I've been around this tech hype cycle a bunch of times. Look at what happened with Facebook. When it first appeared, people loved it and signed up and shared incredibly detailed information about themselves on it. Facebook made it very easy and convenient for almost anyone, even people who had limited understanding of the internet or computers, to get connected with their friends and family. And now here we are today, where 80% of people in surveys say they don't trust Facebook with their private data, where they think Facebook commits crimes and should be broken up or at least taken to task in a big way, etc etc etc. Facebook has been fined many billions of dollars and faces endless federal lawsuits in the US alone for its horrible practices. Yet Facebook is still exploitative. It's a societal cancer.

All signs suggest this generative AI stuff is going to go exactly the same way. That is the inevitable course of these things in the present climate, because the tech sector is largely run by sociopathic billionaires, because the tech sector is not regulated in any meaningful way, and because the tech press / tech media has no scruples. Some new tech thing generates hype, people get excited and sign up to use it, then when the people who own the tech think they have a critical mass of users, they clamp everything down and start doing whatever it is they wanted to do from the start. They'll break laws, steal your shit, cause mass suffering, who knows what. They won't stop until they are stopped by mass protest from us, and the government action that follows.

That's a huge price to pay for a little bit of convenience, a price we pay and continue to pay for decades. We all know better by now. Why do we keep doing this to ourselves? It doesn't make sense. It's insane.
@carsten
> I have to write so many emails to so many idiots who have no idea what they are doing

So it sounds to me like the pressure is to reduce how much time you waste on idiots, which to my mind is a very good reason to use a text generator! I guess in that case you don't mind too much whether the company making the AI owns your prompt text?

I'd really like to see tools like this that you can run on your desktop or phone, so they don't send your hard work off to someone else and give a company a chance to take it from you.
@prologic @carsten

(1) You go to the store and buy a microwave pizza. You go home, put it in the microwave, heat it up. Maybe it's not quite the way you like it, so you put some red pepper on it, maybe some oregano.

Are you a pizza chef? No. Do we know what your cooking is like? Also no.

(2) You create a prompt for StableDiffusion to make a picture of an elephant. What pops out isn't quite to your liking. You adjust the prompt, tweak it a bunch, till the elephant looks pretty cool.

Are you an artist? No. Do we know what your art is like? Also no.

The elephant is "fake art" in a similar sense to how a microwave pizza is "fake pizza". That's what I meant by that word. The microwave pizza is a sort of "simulation of pizza", in this sense. The generated elephant picture is a simulation of art, in a similar sense, though it's even worse than that and is probably more of a simulacrum of art since you can't "consume" an AI-generated image the way you "consume" art.
@carsten @lyse I also think it is best called fake. Art is created by human beings, for human beings. It mediates a relationship between two people, and is a means of expression.

A computer has no inner life, no feelings, no experience of the world. It is not sentient. It has no life. There's nothing "in" there for it to express. It's just generating pixels in patterns we've learned to recognize. These AI technologies are carefully crafted to fool people into experiencing the things they experience when they look at human-made art, but it is an empty experience.
@carsten Who says you need to use anything like that? Where's the pressure coming from?
@carsten yeesh, it's a for-pay company. I wouldn't give them the output of your mind for free and train their AI for them.
@xuu this is alarmingly catchy
@xuu everyone's moving to gated communities!
@prologic ack, I didn't see this before. Get well soon!
ChatGPT and Elasticsearch: OpenAI meets private data | Elastic Blog

Terrifying. Elasticsearch is celebrating that they're going to send your private data to OpenAI? No way.
@prologic I'm a bit of a GPU junkie (😳) and I have three 2019-era GPUs lying around. One of these days when I have Free Time™ I'll put those together into some kind of cluster....
@darch yes!
@prologic yeah. I'd add "Big Data" to that hype list, and I'm sure there are a bunch more that I'm forgetting.

On the topic of a GPU cluster, the optimal design is going to depend a lot on what workloads you intend to run on it. The weakest link in these things is the data transfer rate, but that won't matter too much for compute-heavy workloads. If your workloads are going to involve a lot of data, though, you'd be better off with a smaller number of high-VRAM cards than with a larger number of interconnected cards. I guess that's hardware engineering 101 stuff, but still...
@prologic I would politely suggest again that we not react to people with bad attitudes who talk shit about yarn. If twt is forked, it should be forked to add features that are otherwise not possible. Not to appease people who will probably never be appeased.
On LinkedIn I see a lot of posts aimed at software developers along the lines of "If you're not using these AI tools (X,Y,Z) you're going to be left behind."

Two things about that:
1. No you're not. If you have good soft skills (good communication, show up on time, general time management) then you're already in excellent shape. No AI can do that stuff, and for that alone no AI can replace people
2. This rhetoric is coming directly from the billionaires who are laying off tech people by the 100s of thousands as part of the class war they've been conducting against all working people since the 1940s. They want you to believe that you have to scramble and claw over one another to learn the "AI" that they're forcing onto the world, so that you stop honing the skills that matter (see #1) and are easier to obsolete later. Don't fall for it. It's far from clear how this will shake out once governments get off their asses and start regulating this stuff, by the way--most of these "AI" tools are blatantly breaking copyright and other IP laws, and some day that'll catch up with them.

That said, it is helpful to know thy enemy.
@movq Cheers! I'm happy to agree to disagree too of course! Thanks for engaging!
@xuu That has no relevance to the point!
@movq
> I still think it would be better to put the burden of liability on the users – no matter if they’re private individuals or big companies.

Before seatbelts and other safety equipment was required in cars by law, what you say above was the exact argument used by carmakers against adding safety measures. The responsibility should be put onto the drivers--the users of cars--not the car manufacturers. Many people died needlessly, compared to today. Is this *really* the position you're taking?
@movq
> How do you really know if a project has been used in dangerous situations? (If this changes in the future, are programmers that contributed in the past – when this project was not yet used in dangerous situations – also liable?)

Trust me, if people got sued or went to jail, the tech industry would figure out really fast how to make these determinations. The only reason this is puzzling at all is that software development is almost entirely unregulated, and has enjoyed the equivalent of a child's life, without a care in the world.

But really, it's a silly question isn't it? You're supposed to list the licenses of open source software you use in your projects. Devices and systems that have caused harm are documented by the legal system, by regulatory regimes, by people who've been harmed, etc. All the necessary data is there to connect the dots. Those dots aren't usually connected, though, because people pretend that software developers should be free of responsibility.
@movq
>> Firstly, contributing software to an open source project cannot be a blanket “get out of jail free” card. That’s a sociopathic stance, on its face, and just cannot be accepted.

> I don’t understand. Why is that sociopathic? (Language barrier here? I really don’t get what you mean.)

Imagine an open source software project that is designed, from day 1, to produce software to drive a planet-destroying weapon. The fact that it is an open source project does not allow the software developers involved to freely make the software for the planet-destroying weapon without any responsibility for the consequences of using the weapon. They are directly involved in an activity that will destroy the planet, and they should be treated as such.

That is extreme, obviously, but the point is that there is a line somewhere. A hobby project is obviously not dangerous to anyone. A planet-destroying weapon is. It is sociopathic--literally, deadly to society--to pretend otherwise. In *all other spheres of life*, we are careful to distinguish which behaviors are dangerous from which behaviors are not. Why should open source software development be any different?

It should not be different. Some open source software development is dangerous, and should be treated appropriately.
@movq I respectfully disagree. I think the broad point you make makes sense, but there are details that matter.

Firstly, contributing software to an open source project cannot be a blanket "get out of jail free" card. That's a sociopathic stance, on its face, and just cannot be accepted.

Secondly, the fact that software licenses state that the software is provided without warranty/liability is meaningless until those clauses are tested in court cases. If judges say "bullshit" to the "no warranty" clauses, and hold developers accountable anyway, then those clauses become meaningless (at least in the US, where case law and precedent matter).

But thirdly, and most importantly, there is always context that absolutely has to be taken into consideration. Sure, you'd be foolish to jump into a random person's for-rent car thinking it'll be a good ambulance. But if the car has "Ambulance" painted on it, and the driver repeatedly tells you they also drive ambulances for the city hospital, and there's a siren on top, that person can and should be held liable for falsely presenting themselves as an ambulance. Even if they do have a tiny little note somewhere that says "not an actual ambulance".

And the same should happen in software. If people are working on an open source project that has been used in dangerous situations, and they are fully aware that this could happen again, then they absolutely should face liability if their code kills somebody (for instance). We literally do this *in almost every other aspect of life*, so why should software developers be free from all responsibility? Engineers who design buildings have to take out liability insurance because they can be personally sued if their designs cause harm. Doctors take out malpractice insurance in case their advice causes harm. But software developers get to commit all manner of bullshit, and never face any consequences? No way, that's stupid.
@marado @prologic personally I think there are good arguments in favor of accountability standards for some open source projects. Not all, obviously. But it is insane to act as though open source contributors bear exactly 0 responsibility in cases where they know full well that they are contributing code to potentially dangerous projects, and/or know they will profit from those contributions. We don't do that in any other sphere of life and shouldn't be doing it with software either. People die from this shit, or lose their life savings.

Also, open source provides an avenue for companies to launder their own responsibilities. That loophole should be closed.

Anyway, it's not an open and shut case of "absolutely no liability for open source developers ever." Frankly, software quality would improve tenfold virtually overnight if developers knew they could be sued for doing lousy work. That's not a "chilling effect", it's responsible regulation of potentially dangerous products.
@prologic You know, my startup explored a similar space. I worked on a large language model, but we trained it on and applied it to technical text like patents and academic articles only. We, mostly on my urging, took information security extremely seriously. We were working on SOC 2 certification for our data center, we had a very strict, container-level partitioning between customers (no multi-tenant databases hosting multiple customers; each customer's stuff lived in its own set of containers) etc etc etc. To the best of our knowledge and ability, we followed industry best practice, so that we could tell potential corporate customers that we took the security of their R&D data very seriously and could back that claim with facts. I'm sure you know that all that is very slow, painstaking, and expensive work.

And after all that, a bunch of fucking R&D scientists throw their shit into ChatGPT and leak it to the entire world. 🤦‍♂ Like wtf???
Samsung Employees Use ChatGPT at Work, Unknowingly Leak Critical Source Codes | Tech Times

🤦‍♂ what is the matter with people
NPR quits Twitter after being falsely labeled as 'state-affiliated media' : NPR

I've been avoiding news about the musk swamp, but this one feels like a pretty big deal. I believe NPR is the first significant news organization to leave Twitter. 52 accounts.
@prologic when I first saw the picture of it on the web page I thought all those buttons were dials! Reminded me of the Mark I!

@prologic @adi I come from an academic background, and in that realm CV, which is short for "curriculum vitae" (or "course of your life", roughly), is usually a long-form account of everything notable you've done in your career. A resume is a short-form summary, often targeted at a specific employer. My CV is 8 pages long and I haven't done all that much. I have a 1-page and a 2-page resume I adapt when I need a resume for something.
@adi binary versions of nix are distributed for Linux and MacOS. You're on your own if you want to try it on *BSD, but you can find people who've said they've pulled that off if you search.
@adi hmm, not sure what to tell you. That command works fine for me, which is unhelpful for you! I use nix to install stuff I want to play with, and that should work fine for everyone if you're into nix.
@adi I mean, you can read their docs to learn about the file format, but diff and patch are straightforward. In pijul, commits *are* patches.
@adi Check out Pijul, which has basically taken up this idea and run a marathon with it.
In more interesting news, I've probably posted before that my cat uses the printer as a cat bed. What I don't think I've posted before is that she frequently prints test pages when she shifts around. At first it scared the crap out of her, but now she paws at the pages, a little annoyed, till they fall on the floor or till she can make them into a bed again.
Stochastic Parrots Might Even Be More Linguistically Competent Than ChatGPT

This guy put a prompt into ChatGPT and recorded its output. Then he shuffled the words in the prompt text in such a way that the prompt was completely unintelligible to a human being, but still retained some of the correlational statistics of the words. He fed that as a prompt into ChatGPT. The result? Output that was almost identical to the original output.

Gibberish input into ChatGPT will produce coherent-sounding text just as well as a carefully-crafted prompt. This is such a nice and simple demonstration that ChatGPT has no "intelligence" of any kind built into it.
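For anyone who wants to try the spirit of this themselves, here's a minimal sketch of the shuffling step. This is my own illustration, and a cruder one than the original: the author's procedure preserved more of the word co-occurrence statistics than a plain shuffle does, and the prompt text here is made up.

```python
import random

prompt = "Explain in simple terms why the sky appears blue during the day"
words = prompt.split()
random.shuffle(words)  # destroys the syntax; rough word statistics survive
shuffled_prompt = " ".join(words)

print(shuffled_prompt)
# e.g. "blue appears the in day simple why Explain terms during sky the"
# Paste this into ChatGPT and compare its output to the original prompt's.
```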
@prologic oh, nice. Did you stumble on recurrent neural networks?
(not to imply the engineering parts, including the data acquisition and cleanup, are easy, and not to imply there aren't a million tricks in there to make sure this all works nicely. it's a hell of a feat of engineering and those two twts I wrote only outline at a very high level one way it might work).
@prologic geez, yes that's horrible. "Autoregressive" just means that the next token in a sequence is a function of previous ones, and "language model" here just means a probability distribution over sequences. "Autoregressive language model" is an infuriatingly obtuse way to describe autocomplete!

Like, if you type "The dog is", autocomplete will suggest some words for you that are likely to come next. Maybe "barking", "wet", "hungry", ... It'll rank those by how high a probability it rates each follow-up word. It'll probably not suggest words like "uranium" or "quickly", because you very rarely if ever encounter those words after "The dog is" in English sentences, so their probability is very low.

👆 That's the "autoregressive" part.

It gets these probabilities from a "language model", which is a fancy way of saying a table of probabilities. A literal lookup table of the probabilities would be wayyyyy too big to be practical, so neural networks are often used as a representation of the lookup table, and deep learning (many-layered neural networks + a learning algorithm) is the hotness lately so they use that.

👆 That's the "language model" part.

So, you enter a prompt for ChatGPT. It runs fancy autocomplete to pick a word that should come next. It runs fancy autocomplete again to see what word will come next *after the last word it predicted and some of your prompt words*. Repeat to generate as many words as needed. There's probably a heuristic or a special "END OF CHAT" token to indicate when it should stop generating and send its response to you. Uppercase and lowercase versions of the tokens are in there so it can generate those. Punctuation is in there so it can generate that. With a good enough "language model", it'll do nice stuff like close parens and quotes, correctly start a sentence with a capital letter, add paragraph breaks, and so on.

There's really not much more to it than that, aside from a crapton of engineering to make all that work at the scale they're doing it.
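To make that loop concrete, here's a toy sketch in Python. The bigram table, the words in it, and the end-word set are all invented for illustration; a real system replaces the table with a huge neural network conditioned on a much longer context, but the autoregressive loop has the same shape.

```python
import random

# Toy "language model": a literal lookup table mapping the previous word
# to a probability distribution over possible next words.
BIGRAM_PROBS = {
    "the": {"dog": 0.5, "cat": 0.5},
    "dog": {"is": 1.0},
    "cat": {"is": 1.0},
    "is":  {"barking": 0.4, "wet": 0.3, "hungry": 0.3},
}
END_WORDS = {"barking", "wet", "hungry"}  # stand-in for an end-of-text token

def generate(prompt_word, max_words=10):
    """Autoregressive loop: sample the next word from the model's
    distribution given the last word, append it, and repeat."""
    words = [prompt_word]
    for _ in range(max_words):
        dist = BIGRAM_PROBS.get(words[-1])
        if dist is None:
            break
        choices, probs = zip(*dist.items())
        words.append(random.choices(choices, weights=probs)[0])
        if words[-1] in END_WORDS:
            break
    return " ".join(words)

print(generate("the"))  # e.g. "the dog is hungry"
```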
@prologic little bit of code there huh?
@prologic Dude that is a big ask. I'm not sure I could describe the structure faithfully even if I had the time to do that, because OpenAI sucks and won't publish the structure. Here's the original GPT-3 paper: https://arxiv.org/pdf/2005.14165.pdf . You'll see it's scant on details, which is one of the million criticisms of OpenAI: they are not conducting science here, even though they pretend to be. It's some weird combination of marketing and big dick contest with them. The first 10-ish pages give a detail-free description of the neural network, and the remaining 65 pages are bragging about how great they are. I've never seen anything quite like it in all the tens of thousands of research articles I've encountered over the course of my career.

The best description I've heard of it is that it's an extremely complicated autocomplete. The way it (most likely) works is that it reads through the sequence of text a user enters (the prompt), and then begins generating the next words that it deems likely to follow the prompt text. Very much how autocomplete on a smartphone keyboard works. It's a generative model, which means the neural network is probably being trained to learn the mean, standard deviation, and possibly other statistics about some probabilistic generative model (undescribed by OpenAI to my knowledge). There were some advances in LSTM around the time GPT was becoming popular, so it's possible they use a variant of that.

Hope that suffices for now!
Special Report: Tesla workers shared sensitive images recorded by customer cars | Reuters

In case you didn't realize what a steaming bag of garbage Elon Musk is, and how he turns everything he touches into a pile of same, this report details how Tesla cars record video both *inside* and outside of the cars and upload that to Tesla Inc. Workers there watched the videos and ridiculed their customers.
@stigatle nice!
@prologic OK, so Melanie Mitchell is a fairly well-known AI researcher; she was already quite active when I was a graduate student and has continued to be so ever since. She recently published a book called *Artificial Intelligence: A Guide for Thinking Humans* (disclaimer: I've only read excerpts, not the whole book yet) that is a nice critique of hype around AI. She has a Substack that she updates regularly. The link is to an Apr 3 post where she runs down the firehose of horseshit that appeared in the news and Twitter in the previous week.

1. Chris Murphy, a Senator from Connecticut in the US, tweeted uninformed alarmism about AI. Mitchell responded to him on Twitter, and he had a temper tantrum about it that was noticed by major media outlets. Not looking good for our politicians taking an informed stance on this stuff
2. She also reacted to the bullshit "letter" that the Future of Life Institute put out calling for a "pause" on AI research. She basically concludes it cannot be taken at face value
3. Time Magazine, a major publication in the US, published an opinion piece by a well-known kook in the AI world, a guy who has been claiming for decades that AI is going to kill us all. She calls out Time for being irresponsible like that
4. She gives a few thoughts on what she thinks we should *actually* worry about, as realistic people who aren't mind-poisoned by the AI hype. Amplifying bias, and amplifying misinformation and disinformation

Running through the post is a theme that "Artificial General Intelligence" is a misleading, bad term that serves the interests of a lot of bad actors while misinforming the public.

Read the post though, it's chock full of countless useful insights, facts and asides that this summary can't possibly do justice to.
@prologic Happy to but I'll have to do that tomorrow. It's too much to tap out on a phone!
@prologic I thought so too lol
@support No way.
The Limitations of ChatGPT with Emily M. Bender and Casey Fiesler — The Radical AI Podcast

An excellent podcast episode dissecting the hype around ChatGPT and helping to ground us in a more realistic understanding of what it is and isn't.
Thoughts on a Crazy Week in AI News - by Melanie Mitchell

I'm a big fan of Melanie Mitchell, and this run-down of hers of the recent AI hypefest is refreshingly clear-headed as usual.
@prologic I wish I knew 🤷 The world is mental
@prologic Substack is kind of a VC-funded, right-winger monstrosity. It doesn't surprise me that they're bad actors. There are lots of good newsletters on there, but whoo boy, their management sucks and they paid big bucks to draw reactionary writers to their platform.
@prologic Today's date is 4/04
This date could not be found.
Yes, obviously (I hate these titles that are posed as questions when there is a definite answer being pushed, but I thought the interview was illuminating nonetheless).