
How AI is polluting our culture

Close-up of a person's hand using the Midjourney generative AI image generator in Lafayette, California on May 7, 2024 (Smith Collection/Gado/Getty Images)

AI-generated content online is almost impossible to avoid. There are AI-boosted Google search results, AI-generated imagery, AI-generated articles, AI-generated music, even AI-generated children's TV shows.

Neuroscientist Erik Hoel says we're drowning in “AI dream slop.”

Today, On Point: The cost to our humanity in a world of synthetic culture.

Guest

Erik Hoel, neuroscientist and writer. Author of the Substack newsletter The Intrinsic Perspective.

Transcript

Part I

DAVE ESPINO: Imagine making $1.2 million creating AI generated videos. We're going to show you how in this video.

MEGHNA CHAKRABARTI: That's Dave Espino, host of a YouTube channel called Making Money With AI. In that clip you just heard, Espino's using a company called RV AppStudios as an example of how to make those bags of money.

RV AppStudios makes games and apps for adults, some of which you have to pay for. They also make free games, apps, and online videos for children. And they do that with AI, as Espino says. He describes how RV AppStudios makes the content on one of their kids' learning channels called Lucas and Friends.

ESPINO: Yeah, this channel is mind boggling. It has 917,000 subscribers. This is the type of animation that you can create. Let me show you really quick. And you could create that type of animation using AI, and these are mostly learning videos. So the kid is getting some great value, they're learning all kinds of stuff.

These are for toddlers, as you can see.

CHAKRABARTI: Dave's co-host, James Renouf, emphasizes repeatedly that gone are the days when children's television makers needed a huge staff of writers and animators. All you need now is some AI software and, in the quote-unquote "writer's room," ChatGPT.


JAMES RENOUF: And I don't want people to say, gosh, it takes, these videos are 30 minutes long.

First of all, these are like not crazy dialogue here, okay? We're not writing Shakespeare, okay? It's like the letter A, the letter B. And you can use ChatGPT to make these little scripts. So say make me a little script where we teach kids ABCs, or we teach them numbers, etc. And then you use the power of AI to make these videos.

You don't have to be a graphic artist. It's amazing what can be done very simply. So little Johnny here, go learn your ABCs while mommy puts her feet up, okay? You create these kind of videos and you can have a channel blow up.

CHAKRABARTI: (LAUGHS) Sorry, as a mommy myself, yeah, sometimes you want to put your feet up, but that's not necessarily why you would need your child to learn the ABCs.

But the point is that kids' content on YouTube these days is a very different beast than traditional educational kids' television. For example, Sesame Workshop, home to Big Bird of course, has more than 1,000 employees in the United States alone. Among them are an army of childhood learning experts who obsess over the latest research on child development, and they even commission studies of their own, such as a 2021 study Sesame Workshop commissioned to investigate how the pandemic was impacting young children in particular, and how those findings could be incorporated into its television programming.

Over at RV AppStudios, it's not clear how much research or academic expertise informs how they create Lucas and Friends. Maybe there is some, but it's not easy information to find. But it is clear that the company is having an impact, judging by the kind of metrics that matter most to digital organizations: metrics like 400 million downloads of their free kids' apps so far, pacing at 15 million more every month.

Their AI generated YouTube videos get millions of views within weeks of being posted. So here's an example of one. This is from Lucas and Friends, their animated YouTube video. And you can't see it, but it's Lucas, a yellow animated animal figure, happily teaching your baby how to wave.

LUCAS: Hello, friends! Wave your hand and say, Hello! Hello! Hello! When you meet someone, you say, Hello! When you meet someone, you say, Hello!

CHAKRABARTI: Joining us today is Erik Hoel. He's a neuroscientist and writer and author of "The World Behind the Word: Consciousness, Free Will, and the Limits of Science." He's also author of a Substack newsletter called "The Intrinsic Perspective." Erik, welcome to On Point.

ERIK HOEL: Thank you so much for having me.

CHAKRABARTI: You were literally grimacing when we played that sound from Lucas and Friends. Is it up there in sort of the Baby Shark level of adult irritation for you?

HOEL: No, I think Baby Shark is so much better. Only a human could write a tune as catchy as Baby Shark.

I first stumbled across this stuff while I was researching the proliferation of AI-generated content on the internet. And I wrote about it on my Substack, and then Wired ended up doing an investigation into some of the channels. And what I found was that if you look at the actual content of these videos, most of the time there are numerous errors baked deep into them.

So they'll show a shape that's a hexagon and say it's a pentagon. They are incredibly formulaic because they have to be. In the end, setting aside the moral, philosophical question of whether we should be entrusting non-human minds with the education of our children, there's still the practical question of these things getting a lot of things wrong.

I was recently testing some of these more frontier AI models. I've been teaching my son to read. He turns three in a week. And so we've been going over simple sentences with the most common letter sounds. And so you can create these simple sentences. And it's really the best way, I think, to teach a child to read.

So you say something like, Bob sat in mud. And I have to create a lot of these sentences to give him new ones to practice, and it's this boring task. Perfect, I would think, to maybe outsource to an AI. So I honestly tried using the AI for some lesson planning, to give me back simple sentences.

And I asked the smartest AI, which I think is probably Claude Pro; maybe this new, recently released GPT-4o is slightly better. But at the time I asked, Claude was basically the leading model, and it couldn't do it. It couldn't come up with sentences that use only the simplest sounds, because it's something that's not in its training set.

It's a weird ask. And in the end, it would say something like, Bob was big. Was? That's not the way an S normally sounds, is it? And even when I pointed out the mistakes to it, it would go back and make the same mistakes over again. And if you can't teach a two-year-old basic letter sounds, how are you going to scale that up to the dream of AI supplementary tutors teaching physics to kids?
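
To make that failure concrete, here is a minimal sketch, in Python, of the kind of guardrail a phonics lesson generator would need around a model: reject any generated sentence containing a word outside a whitelist of simple, decodable words. The word list and function names are our hypothetical illustration, not anything from the broadcast or a real curriculum.

```python
# A minimal sketch (an illustration, not from the broadcast) of a
# guardrail for LLM-generated phonics sentences: accept a sentence
# only if every word is on a whitelist of simple, decodable words.
# DECODABLE_WORDS is a tiny hypothetical list, not a real curriculum.

DECODABLE_WORDS = {
    "bob", "sam", "sat", "ran", "in", "on", "mud", "sun",
    "big", "red", "cat", "dog", "hat", "bed", "a", "the",
}

def is_decodable(sentence: str) -> bool:
    """True only if every word is on the decodable whitelist."""
    words = sentence.lower().rstrip(".!?").split()
    return all(word in DECODABLE_WORDS for word in words)

print(is_decodable("Bob sat in mud."))  # True: all regular letter sounds
print(is_decodable("Bob was big."))     # False: "was" has the irregular
                                        # /z/ sound, so it's off the list
```

Wrapped around a model's output, a filter like this would catch the "was" mistake automatically; the point is that the checking has to live outside the model, since, as Hoel describes, the model keeps making the same error even after being corrected.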

CHAKRABARTI: I suppose you could argue that the more complex the learning, the better the AI might be at it, but we'll come back to that in a second. It's interesting that you used the example of 'was' and the S sound. Of course, in American English, the S sometimes makes that more Z-like sound. But the key thing is the progression in which a child learns the different kinds of S sounds there are, right? And it stuck the 'was' in there, whereas for an early reader, 'sat' or something like that would probably have been better, right?

So it's understanding how the child learns that seems, at least in that example, to be missing a little bit.

HOEL: And I don't think that the people making these YouTube generated videos are using even the latest frontier models. They're using whatever the cheap, free versions are.

CHAKRABARTI: To that point, we heard them say all you need is an AI animation program and ChatGPT. Okay, so Erik, I love doing real-time experiments. It drives the staff crazy because they have no idea what I'm going to do. But I have a computer here, obviously, in the studio with us, and I've got ChatGPT.

This is 3.5, the free one. Alright, so let's write a script for a kids' animated YouTube short.

HOEL: (LAUGHS)

CHAKRABARTI: So I'm going to say, write a script for a toddler's YouTube video, should we say that?

HOEL: Sure.

CHAKRABARTI: About what? Let's do learning how to read. Okay teach the child. Let's make it a little more specific, to be fair.

HOEL: Let's say, teach them the most common letter sounds.

CHAKRABARTI: Teach the child the most common letter sounds. Should we specify how long the script should be?

HOEL: Yeah, can you say, create simple sentences?

CHAKRABARTI: Okay.

HOEL: That use only the most common letter sounds.

CHAKRABARTI: Sentences. I'll say create 15 sentences. Simple.

HOEL: Sentences that use only the most common letter sounds.

CHAKRABARTI: Letter sounds. Ready?

HOEL: Okay. Live experiment.

CHAKRABARTI: Okay. Oh, it's thinking. Ha! Title card: Let's learn letter sounds with Timmy. Cheerful music plays as video begins. Oh my god. This is actually quite a long script, ChatGPT. Good thing I don't have to pay it. Then Timmy says, Hi friends.

I am Timmy. And today we're going to learn some super cool letter sounds together. Are you ready? Yay. Okay, he claps his hands, and he says let's start with the letter 'A.' Can you say A? Oh, A, it says ah, so that's the letter sound.

HOEL: So already we asked it to create 15 sentences that only used the most common letter sounds.

It didn't create 15 sentences that used the most common letter sounds.

CHAKRABARTI: Oh, there's more though. I scroll, I gotta scroll. Oh no, it didn't. It just says the letter sounds: C, cat. D, dog. I'm getting my child-YouTube-video voice on here. E, elephant. Oh, where are the, there's no sentences.

HOEL: Yeah. Okay. Yeah, exactly.

'Cause you asked it to do something slightly weird.

CHAKRABARTI: It doesn't seem weird.

HOEL: No. It doesn't seem weird to us. But if you think about what's in their training set, there are probably a huge number of scripts in there. But you asked for something very specific. And most of the time, these AIs are just not very good at out-of-distribution sampling.

So what you've been presented with is something that looks impressive. It's a lot of text, right? But it's not actually really grokking the fundamental thing of what you just asked it for.

CHAKRABARTI: Yeah.

HOEL: And now if you imagine the feedback between a confused child and the AI, right? You get this spiral of confusion.

And this is when it's interactive. Most of the scripts for these YouTube videos are exactly like this. They're just the most obvious, what would it be, 'A' for apple, et cetera, et cetera. Getting it to do anything beyond that is actually surprisingly difficult.


CHAKRABARTI: Wow. Okay, so Timmy goes on to say, and I'm actually talking about this character like it really exists: Whee! That was fun! Let's do more! And then you get down to P! Penguin! Claps! Q! Queen! Oh, see, Q is a really interesting one. That's really interesting.

HOEL: Queen.

CHAKRABARTI: Queen. Yeah. You're not actually teaching what the 'Q' needs to do.

Do you know, when I was in kindergarten, I took a little test, and the test giver asked me, Meghna, say a word that starts with 'Q.' And I said cute, right?

HOEL: (LAUGHS) Clever.

CHAKRABARTI: But again, it's like, how does the brain work in terms of processing information? And here's the thing. This is why we've invited you, not ChatGPT, to talk about new YouTube kids' content, Erik. You've written extensively about how this kind of quickly generated AI content is everywhere, to the point where you say it's hurting our culture. It's hurting the way we think of ourselves as human beings.

Part II

CHAKRABARTI: Now, I should say, Erik, we got a lot of responses from listeners when we said we were going to talk about the ubiquity of AI-generated content, and how it's growing increasingly challenging, really impossible in places, to find a space on the internet where there's not clearly this sort of synthetic content.

So let me just play a little bit of what some of our listeners said. This is Rachel Chu from Charleston, South Carolina, and she told us that recently she started to notice a lot of this stuff on Facebook.

RACHEL CHU: Just today I saw an AI generated photo of a young girl, like a toddler on the shore of a beach with an oxygen mask lying in the water next to a birthday cake and the caption said something like, "My birthday is today hope I get birthday greetings."

And I guess this is to create comments and likes, and whoever is behind these is trying to play on users' emotions and make them think they can help somehow. I'm not sure if they're making money, but it does feel wrong when there are real people and real causes that do need attention.

CHAKRABARTI: So that's Rachel from Charleston, South Carolina, and here's Eli Hornstein, a scientist who works with plants and reptiles.

He left us a message talking about two recent Google searches that he did. In one, he asked whether there were vegetarian snakes. Oh boy. And in the second, he asked if there were edible bromeliads other than pineapples.

ELI HORNSTEIN: And the only results on the entire internet for those questions were AI generated lies, which are perfectly composed, sound factual, but use the names of real organisms in completely made-up ways, saying that the rainbow boa, which is a real snake, is vegetarian, which it is not. Or that a long list of plants are edible bromeliads when they're neither edible nor bromeliads.

I really don't understand what goes on underneath the surface to produce this type of content there waiting for me, but I'm frankly quite alarmed by it.

CHAKRABARTI: And Eli, my apologies for mispronouncing bromeliads. Okay, so Erik, these are examples about how we already know that sometimes AI can be very factually challenged, let's put it that way.

But you take your analysis or your criticism even further, and you say that our entire culture is becoming affected, as you say, by AI's runoff. What do you mean by that?

HOEL: I think the reason I use analogies like runoff, and I think they're appropriate, is that if you look at the history of technological change, various problems have cropped up, and some of the most significant are issues around climate change, global warming, and also local destruction of environments. And it required a change in thinking in the 20th century.

We went from thinking of the environment as this big immutable thing that could not really be injured by us, because it's so big, so omnipresent, to something that's actually fragile, that we needed to protect and enact regulations to protect. I think the same realization has to happen in the 21st century for human culture.

It's been this big immutable thing: there is human culture, it's produced by humans, it's the water in which we swim, and it's so big that we don't think anything can really hurt it. At this point, it would not surprise me if 5% of the content online was being produced by AIs. You can go to any leading tweet, or I guess now post, and find that the top reply is something very obviously AI-written. Once you hear that cheery Wikipedia voice of ChatGPT, you will find it everywhere.

Even on my own blog, I've had to ban people for posting AI comments just for engagement. And it's because there's this economic incentive. We are a content-hungry economy, and with the ability to create cheap content, even if it's not good, even if the quality is much lower than a human's, if it's orders of magnitude cheaper, there are just pure economic reasons to pursue that.

CHAKRABARTI: So let's get back to understanding your concern here. I want to have some shared definitions, just so that we're all talking about the same things. When you talk about culture, it is, because of its ubiquity, an amorphous concept. And obviously there are millions of various cultures and many more microcultures, etc.

So how are you defining culture in this case?

HOEL: Everything you see online, everything you read, everything you watch. Let me give a brief example of this. Sports Illustrated, right? Culture, right? Okay. It's produced by humans as content for humans, but they were recently caught using fake AI writers to create their articles, because there's a clear economic incentive for them to do that.

And there is a possibility here. When I was born, everything I saw, everything I read, everything I watched, even the lowly labels at a grocery store, were thought over and created by human minds. And it's very possible that I will die in a world where the vast majority of the things I read, see, or watch are not created by human minds.

They're created by unconscious artificial neural networks. And I think that is the real immediate risk of AI, because it's what we're already seeing. Again, you can go out on the internet and find numerous examples of this. And there's this creeping weirdness to these changes.

And let me give like a little brief story about how this seeps even into the real world. As I said, my son Roman is turning three, so we're going to host a birthday party, and it's Curious George themed. So we got all these Curious George stickers that we ordered online, and we were about to put them all into the little packs for people to take to go, right?

And my wife is looking through them, and she comes to me, and she says, Some of these are very strange. And some of them are fine, and some of them are like, Curious George holding an automatic rifle, Curious George without skin, Curious George OD'ing, bi-Curious George holding a banana evocatively.

It's just obviously not-safe-for-work content. And if any human had been in the process of making these stickers... I don't know, I have no evidence of whether they used AI, or if it's just a company somewhere in China that doesn't know what Curious George is and doesn't really care, and is just pulling stuff. But the point is that when you create culture algorithmically, you begin to run into these scenarios where clearly there was no conscious thought behind it at all. And that's only going to continue. There's going to be this creeping alien-ness to our culture.

CHAKRABARTI: Yeah. Algorithms are the perfect rule followers, right? So it's following the sets of rules given to it. But there's no discernment about whether what it's producing is good, bad, appropriate, that kind of thing, as long as it fits within the framework of the rules given.

HOEL: Yeah. If you've noticed an increasing high strangeness to some things, it often is either AI or algorithmically produced content. And I think at this point there's not a huge distinction, but soon most algorithmically produced content will be fully AI-generated.

CHAKRABARTI: Okay, so I want to get some more examples here, now that I have a better understanding of what you're talking about when you say culture. We started by giving the example of the increasing amount of AI-enhanced or AI-generated kids' content, say on YouTube, and the issues with that: there's oftentimes no narrative cohesion to what kids see, and, as you said, it sometimes gets things completely wrong. That is troubling, but it also has great reach. On the other end of the spectrum, you say there's a feedback loop that happens as more AI-generated content gets out there and AI learns from the content that it made.

And you say you can see that even in the scientific literature?

HOEL: There's this phenomenon, little discussed but very well known, called model collapse. And the funny thing is that, in a way, these companies and the rest of us have common cause, in that the companies don't want to train the latest version of their AI on their old AI's output. They don't like that. They don't want that. You might ask, wait a minute, why wouldn't you want that?

And it's because of this issue of model collapse. And what happens is when you take a model and you start feeding it its own data, it eventually collapses. Researchers have compared it to getting mad cow disease. Because of course for mad cow disease, the cattle eat the brains of other cattle.

And in this case, they're eating their own dog food. They begin to collapse in on themselves, and you trigger either something that looks like schizophrenia or just incredibly simplistic outputs. And you have to think about how strange that is: the companies don't want their AI-generated products anywhere near the training of their next-generation model, but it's fine for us to consume them, right?
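
The dynamic Hoel describes can be shown in miniature with the simple Gaussian setup often used to illustrate model collapse. This is a toy sketch with arbitrary numbers, not any lab's actual training pipeline: a "model" is repeatedly refit on samples drawn from its own previous generation, and its diversity decays.

```python
# Model collapse in miniature: fit a Gaussian, sample from the fit,
# refit on those samples, and repeat. Estimation error compounds
# across generations, and the fitted spread (sigma) decays toward 0,
# a simplified analogue of a model training on its own output.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                           # generation 0: the real data
for gen in range(1, 31):
    samples = rng.normal(mu, sigma, 20)        # content from the last model
    mu, sigma = samples.mean(), samples.std()  # refit on its own output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: sigma = {sigma:.3f}")
```

Each refit slightly underestimates the true spread, and because every generation only ever sees the previous generation's output, those errors never wash out; diversity is lost on average with every step. That is the reason, as Hoel notes, that the labs keep their own outputs away from their training data.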

There's this strange hypocrisy baked into the whole thing. And that's because these systems are trained on our data. And they are very impressive in some ways; as much as I sometimes point out their simplistic reasoning flaws, at other times I play around with them.

And I think, this is world-changing. This is crazy. I can't believe that I'm living in a world in which this sort of thing is possible. So it flips for me like a Necker cube, like an optical illusion where I can see it one way, and then sometimes I see it the other way. And I do think that, in the end, we're going to have to start making some decisions about to what degree we put limits on this.

They themselves are very protective of their own models. We ourselves are neural networks, biological neural networks. Should we be concerned if this stops being just 5% of culture, just funny Curious George stickers, and starts being most of the content that you read or see, or a huge amount of what's posted online? That seems to me to be the immediate concern.

Other people are very concerned about AI, but they're often concerned about things more like existential risk, these more sort of sci-fi scenarios.

CHAKRABARTI: Oh, the world coming to an end.

HOEL: The world coming to an end.

CHAKRABARTI: Terminator scenario.

HOEL: Yeah, exactly. The Skynet scenario. But there is an effect that's happening right now, which is that the internet is getting filled up with junk because of the economics of it.

Back in 1968, Garrett Hardin wrote a very famous article in Science that was instrumental for the environmental movement, and in it he coined the term "the tragedy of the commons." And it got people to think that way: that there was this commons that needed to be protected, that you couldn't just say, a chemical plant wants to make money, so they can go pollute this river. No, you actually can't do that.

You're damaging the commons in a particular way. And I think human culture is a commons. Even Curious George stickers are a commons, right? I expect my Curious George stickers to be okay. And this AI content creates a fundamental mistrust. I don't think we should necessarily throw out the entire technology or anything, but we need to start putting in the same sort of pressure and regulatory guidelines that we did for actual physical pollution.

CHAKRABARTI: We're going to talk about that in detail a little later in the show, Erik, but I want to lean on your academic expertise as a neuroscientist, right? Because there's our common experience of culture, which you're already saying we should be thinking about, or concerned about, AI's impact on.

But I'm also wondering about how we as human beings, how our brains, absorb this information or take in that cultural feedback. AI at scale has not been around long enough, I would say, for any sort of real robust study on this question. So I want to put that out there.

But in your Times piece, you quote Einstein, actually, right? Let me see if I can find it here. Oh yeah: If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.

So why did you use that quote?

HOEL: That's such a great question. What's funny is that this connects to this issue that I've been fascinated with ever since I was young, you mentioned in the introduction that I grew up in my mother's independent bookstore, the Jabberwocky, which is here on the east coast.

And so I was always surrounded by fictions. And I was also interested in science and neuroscience. And at some point, I began thinking what is the purpose of these things, right? One could imagine a race of aliens who are like literalists, who are like, why do you care about Harry Potter?

Everything about Harry Potter is a lie, right? Everything that happens in Harry Potter is a lie. Yet you people seem to care massively. And the common explanation, normally given by evolutionary psychologists, something Steven Pinker would probably say, is that the fictions of our culture, the stories of our culture, are just super stimuli, and we like them for the same reason we like cheesecake.

In fact, I think Steven Pinker once said that music was auditory cheesecake. And I always thought that can't be right. And one way in which I think that's not right is that if you think about humans as a continuously learning neural network, we need to sample things that are outside of our day-to-day distribution in order to generalize our learning.

And so this is now getting more theoretical. I introduced this hypothesis called the overfitted brain hypothesis. The idea is that during your day-to-day learning, you're becoming very statistically fitted to what you're doing, and you need something to shake you out of that. Probably that's one of the reasons why dreams initially evolved, but also one of the reasons why we tell stories and we tell fictions.

We talk about things that never happened and couldn't happen. And these things probably are cognitively important to us. So it's not just cheesecake. It's not just some super stimulus that we're attracted to because there's heightened emotions or lots of action. We're actually getting something, maybe something fundamental, out of human culture for our brains themselves, for our learning.

And then if you take that view of things, you begin asking: Okay, what are the effects going to be of filling up our culture with text that's just the most obvious continuation, with all the properties of these artificial neural networks? And the answer is, we might start damaging this really fundamental thing that I think humans have relied on, which is having an enriching culture that allows you to generalize your day-to-day learning.
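
The machine-learning analogy behind the hypothesis is easy to demonstrate. In this toy sketch (our illustration, with arbitrary numbers), a model fit only to a narrow slice of "experience" does well there but fails badly outside it, while a handful of out-of-distribution samples, the "fairy tales," dramatically improves generalization.

```python
# Overfitting to day-to-day experience, in miniature: fit a polynomial
# to points drawn only from a narrow range, then test it on a much
# wider range. A few out-of-distribution training points tame the fit.

import numpy as np

rng = np.random.default_rng(0)
world = np.sin                                # the "world" to be learned

x_daily = rng.uniform(0.0, 1.0, 30)           # narrow daily experience
x_dreams = rng.uniform(-3.0, 3.0, 5)          # a few "fairy tale" samples
x_test = np.linspace(-3.0, 3.0, 200)          # the wider world

def test_error(x_train: np.ndarray) -> float:
    """Mean absolute error of a degree-5 polynomial fit, on the wide range."""
    coefs = np.polyfit(x_train, world(x_train), deg=5)
    return float(np.abs(np.polyval(coefs, x_test) - world(x_test)).mean())

print(f"daily experience only: error = {test_error(x_daily):.2f}")
print(f"with out-of-distribution samples: "
      f"error = {test_error(np.concatenate([x_daily, x_dreams])):.2f}")
```

On this analogy, fiction is not cheesecake; it is the out-of-distribution data that keeps a continuously learning system from overfitting to the narrow distribution of daily life.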

CHAKRABARTI: Did I hear correctly? Did you say brain micronutrients? Or is my brain remembering that from your article?

HOEL: That's from the article. And yeah, I think stories contain within them cognitive micronutrients; you could call them something like that. And I'll caution listeners: if you think neuroscience is a set of well-established facts, I have unfortunate news for you. Neuroscience is a bunch of competing narratives and hypotheses, and this is one of them. But it does say that we are entering this unknown space, where we don't know exactly what the risks are of letting go of control of our own culture. And I think that's a microcosm of the problem with AI generally, which is this problem of whether humans maintain agency.

Because if we don't maintain agency over the content that we create, what are our chances of maintaining agency in the long run?

Part III

CHAKRABARTI: We heard from a lot of folks, so I just want to play some more listener perspectives. This is Jin Jo Garten from Norman, Oklahoma, and she says she's really growing frustrated with what she sees on Facebook, particularly regarding Facebook groups that purport to be part of her community.

JIN JO GARTEN: I'm Native American. I'm Chickasaw. When I'm looking at Facebook, I see all of these different, quote, Native American groups. One's called Native American Culture. There's Indian this and Indian that. And every time I look into the creators of those sites, on Facebook in particular, they're always from other countries.

And it always disturbs me. And it keeps me from being able to participate in my own culture's social goings about. Because I'm not sure about that site and I don't trust them. I don't trust any of them now.

CHAKRABARTI: So that's Jin Jo from Norman, Oklahoma. Here's Avalon, who's an artist in Hawaii.

AVALON: Even someone like me, who has a very good ability at pattern recognition, has actually been almost tripped up a couple of times into thinking these were real photos when they were not. And that really worries me in terms of how we interpret reality.

I personally only use Instagram in terms of social media, and I have been trying to train the algorithm so that I don't see AI art by blocking anything that is hashtag AI, AI art, AI, et cetera, et cetera. Anything with AI, basically, I'm telling Instagram to block for me.

CHAKRABARTI: And here's one more. This is Heather from Florida who's a middle school language arts teacher and she's wondering about teaching writing to kids now, when she fears that all they'll be doing in five or ten years is working with AI anyway.

HEATHER: If we've got AI in the workplace, basically writing press releases or emails, articles, how does that impact what we're teaching in our classrooms with authentic writing and evidence and data, if, when these kids get into the workplace, all they're going to have to do is work with AI?

CHAKRABARTI: That's just a couple more of the significant number of responses we got to this topic about AI and its impact on our common humanity and culture.

So Erik, here comes my gentle pushback. Okay? Because you write in your article that we're already consuming a lot of, and I love this phrase, "AI-generated dream slop," as you put it. And you say we find ourselves in the midst of a vast developmental experiment. And you talked earlier about how it could have an impact on our agency.

We're losing, or we could lose, control over what creates and what signifies our culture. My pushback to that is: is that not similar to every advance in technology and the fears that come along with it? So for example, I'm going to go way back here, to when the printing press was invented.

Suddenly, printed text became far more easily available, literacy rates notwithstanding. There was a genuine fear amongst the people who did have control over written information that all of a sudden they were going to lose that control. All sorts of crud was going to be written and printed and spread among the masses.

And so therefore it would signify the beginning of some kind of social rot. How is that fear different from what you're talking about now?

HOEL: I think it's a very fair question. And I hope that all of my concerns are incorrect, right? Like I have kids. I want them to grow up in a world where they experience culture, at least to some degree, in the same way that I did.

But there are aspects of AI that make me think it's very unlike previous technologies. Let me list two reasons. One, we still don't know exactly how and why it works; AIs are incredibly massive black boxes. And two, in terms of their functional abilities, we still don't even know what the upper limit is, and there's this whole debate going on in AI right now around the scaling hypothesis.

Can you just keep scaling up the architectures and systems that we currently have, and will they get smarter and smarter? Now, if they don't, if they cap out around a generalist human and continue to make little hallucinatory mistakes, then I think we still have a very viable future for human culture and human content. If these things get so smart that you can just click a button and get perfect content, so much more cheaply, then it's very easy to see a world in which there are no more human writers, there are no more human artists, and so on.

Because there's barely a way to make a living doing that stuff now. So there's just this very clear path to that from the technology that we have. And this technology is so unlike previous technologies. When you're using a tool, and this is an analogy that I use, at a certain point, if it's powerful enough, it can overtake you.

If you've ever tried to use AI to write something, it just spits out the entire thing. Now, you can then go and edit it, but it's very hard to use it in a way that feels morally responsible, in a way that doesn't feel like cheating. And that is implicit and intrinsic to the technology.

CHAKRABARTI: Yeah. Okay, so I will say that many people are perfectly happy to actually write and edit that way and don't feel like it's cheating.

But what you're saying here links back to the conversation we had yesterday. This week, Monday and Tuesday, has become what I'm calling our informal, spontaneous miniseries on AI, because we talked to David Autor from MIT yesterday. He's a labor economist, and he's been historically skeptical about the impact of technology on the wages of working Americans.

He feels differently about AI, actually. He talked yesterday about how, first of all, when it comes to any vocation that involves creating essentially what could be seen as intellectual property, writers, musicians, that kind of thing, he is quite concerned. So he shares your concern on that point.

But for everyone else, he says, look, AI is going to be the tool that shrinks the difference between middle-class and working-class Americans and the people who previously were the high priests of expertise in our world, because the expertise is going to be given or facilitated by the AI.

And he says, I'm not so concerned about that anymore, because ultimately human decision-making is still going to have to be part of the process, in terms of what AI creates and how it's used. So therefore, it's not going to end up being the dream-slop nightmare scenario that you're concerned about here with culture.

Now, again, he's talking about the labor market, but I wonder what you think about that, that no matter what we do or how good AI gets, people will still be making decisions that help control what the AI is generating.

HOEL: Yeah, I do think that there's this notion that AI is gonna take all of our jobs, right?

I myself am not a proponent of that. I think there are all sorts of jobs where, even if the AI could do it better, people will fundamentally want a human being to do it. An analogy I give sometimes is that people still play chess professionally, even though computers have outmatched humans at chess for 20 years now, right?

And some people make quite a living playing chess, right? And there's still a lot of interest in the sport. But even there, think of Lee Sedol. He was the Go world champion who played the first big deep-learning system that could play Go, which is a game similar to chess.

And he quit after the machine beat him, because he said there now exists an entity which can never be defeated. So as rosy as I'd like to think it will be, maybe we always do want humans in the loop, but that's still a huge gray zone. And there's also this issue: even if the content is not as good without human oversight, people still don't quite understand the economic scale at which I can get ChatGPT to generate a thousand student essays for dollars.

So the scale is such that, and this is where I think the tragedy of the commons comes in, which is just that we've created this system where even if it's not better than humans, you can just create so much cheap content. And there's such a demand for cheap content and that's what all these people are reporting and seeing online.

CHAKRABARTI: Yeah. Okay, so let's hear from two more, and then I want to get back to your analogy of environmental law and regulation. This is Elliott Hetzer from Piqua, Ohio. Elliott's a church music director, and he says he actually uses AI to help him create music for his church.

It's cost-effective to make musical arrangements this way, but he also worries about the implications.

ELLIOTT: I appreciate the ability to do things cost-effectively, but also, as a musician who occasionally does outside session work, I could see myself being easily replaced. First, I have to worry about being replaced by another human being who might potentially be better, or maybe potentially cheaper, any number of things.

But now I have to compete with AI in that same position, which is now potentially, I don't want to say free because obviously you have to like, especially like the recording programs, you do have to pay for it. But now I have to compete with potentially losing my job to a computer.

CHAKRABARTI: Here's Lukas Ringland, who called us from New South Wales, Australia.

LUKAS RINGLAND: I don't think about AI as being a fundamental shift. To me it just reveals in starker clarity the power dynamics that have already been generating the content that I have access to in my life. When I receive any kind of communication, say, about a new drug, I have complete clarity, at least as far as I'm concerned, that there is this disconnect between the people who generated that content, the marketing communications team, and the team that is actually looking to solve some kind of medical issue for me.

And you realize when you dig into it that we are surrounded by that kind of information. That is the slop.

CHAKRABARTI: That's Lukas from Australia. Okay, so this brings us back, then, Erik, to this central analogy that you've been presenting to us: the tragedy of the commons, in terms of our physical environments, pollution, and the destruction of communities and natural habitats.

A major issue that we still have with us. But as a society, we decided really the only way to attack that, or to control it, was through changes in policy and laws and regulation. And you say AI's "cultural pollution," that's a phrase from your writing, is something similar.

And that we can't rely on the natural evolution of the technology, or the people making it, to restrain the expansion of that cultural pollution. So what should we do?

HOEL: Here's a really good example of that. We have relatively good techniques for watermarking AI-generated content. Right now, the companies will say that they do watermark, but what they mean is they put something in the metadata, or they put a little tag on the side of the image.

And metadata, by the way, is automatically stripped when you upload an image to social media. It's not much of a watermark. What you can do instead is basically an evolved version of what people like teachers and professors are already doing, where they give their students an essay prompt and, in very tiny text, put the word banana somewhere in it, right?

And the student copy-pastes, the hidden text goes into the AI system along with the rest of the prompt, and the AI puts banana somewhere in the essay. Now, this is a home-brewed watermarking feature, but you can do the same thing properly; we have good ways to do it. One simple way to imagine it: the AI generates an image, and somewhere in it there are two pixels with exactly inverted mathematical colors, exactly precise, right?

A human would never notice that. But if you ran it through a detector, it would say: oh, there's a mathematical signature here that a human creation would never stumble upon.
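
The pixel-pair scheme Hoel sketches is simple enough to write down. Here is a minimal illustration, assuming a deliberately naive version of the idea; the adjacency convention, the coordinates, and the exact-inverse rule are all assumptions for demonstration, and a real watermark would need to survive compression and cropping.

```python
# A toy sketch of the hypothetical pixel-pair watermark: the generator
# plants two adjacent pixels whose RGB values are exact inverses,
# (r, g, b) and (255-r, 255-g, 255-b), and a detector scans for that
# signature. Illustrative only, not a robust watermarking scheme.

import numpy as np

def embed_mark(img: np.ndarray, x: int, y: int) -> np.ndarray:
    """Force the pixel at (y, x+1) to be the exact inverse of (y, x)."""
    marked = img.copy()
    marked[y, x + 1] = 255 - marked[y, x]
    return marked

def detect_mark(img: np.ndarray) -> bool:
    """Scan for any horizontally adjacent, exactly inverted pixel pair."""
    pairs = img[:, :-1].astype(int) + img[:, 1:].astype(int)
    # an exactly inverted pair sums to (255, 255, 255) in every channel
    return bool(np.any(np.all(pairs == 255, axis=-1)))

img = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
print(detect_mark(img))                      # almost certainly False
print(detect_mark(embed_mark(img, 10, 10)))  # True
```

Note that the mark lives in the pixels themselves, so unlike metadata it survives a screenshot or a re-upload, though even one pass of JPEG compression would destroy this naive version; production schemes spread a statistical signal across many pixels instead.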

CHAKRABARTI: Okay. So to be clear, at this point in time, you're not even asking for restraint in where and how AI is used.

You're just asking for legislation, I presume, that would force companies to build in some kind of detection.

HOEL: Exactly. Right now, we can't detect AI outputs reliably without the companies doing something on the generation side to bake a signal into the output. And we know that the technology exists for them to do that.

And there are good inroads to it, but the companies don't pursue it, I think because it fundamentally hurts their bottom line. A huge number of people are paying OpenAI $20 every month to write their essays in college, right? And the second OpenAI says, we're going to start watermarking text, this huge market of, it's unclear if it's illegal, semi-shady sort of stuff vanishes. And that is a massive economic blow to them.

So I think that is a great and perfect example: the first step should just be enforcement of watermarking, especially for these frontier models, because we know that they have the capability of doing it.

CHAKRABARTI: And you're calling this a Clean Internet Act, a la the Clean Air Act.

HOEL: That would be ideal.

This program aired on May 21, 2024.

Hilary McQuilkin is a producer for On Point.

Meghna Chakrabarti is the host of On Point.