
Beyond the AI Hype Machine


Composite image of robotic hands hovering over a laptop showing an error screen.  (Composite by Gabriela Glueck; photos by Rawf8, MASTER, and Vertigo3d)

View the full episode transcript.

When ChatGPT launched in 2022, it kicked off what some have called the “AI hype machine” — a frenzy of promotion and investment that has sent some tech companies’ valuations soaring to record heights. Meanwhile, computational linguist Emily M. Bender and AI researcher and sociologist Alex Hanna have proudly worn the titles of “AI hype busters,” critiquing the industry’s loftiest claims and pointing out the real-world harms behind this wave of excitement. What began as a satirical podcast is now a book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. In this episode, Alex and Emily explain why the very term “AI” is misleading, how AI boosters and doomers are really flip sides of the same coin, and why we should question the AI inevitability narrative. 


Guests: 

  • Emily M. Bender, professor of linguistics at the University of Washington
  • Alex Hanna, director of research at the Distributed AI Research Institute


Want to give us feedback on the show? Shoot us an email at CloseAllTabs@KQED.org

Follow us on Instagram


Episode Transcript

This is a computer-generated transcript. While our team has reviewed it, there may be errors.

Morgan Sung: A few weeks ago, OpenAI launched an app, Sora. It’s a vertical video social platform, similar to TikTok, except all the videos are generated by the company’s AI video generator, Sora 2. Within days, the app was a copyright infringement nightmare. There were videos of SpongeBob cooking meth, unsanctioned Rick and Morty ads for crypto startups, and many, many videos of OpenAI CEO Sam Altman doing depraved things to copyrighted characters. Like the one where he brutally barbecues and carves up Pikachu.

[AI-generated Sam Altman] Pikachu on the grill here. It’s already got a beautiful char and it smells like somebody plugged in a chicken. Let’s give it a flip. I’m gonna carve it into some thick steaks. Look at that. Crust on the outside, pink and juicy in the middle. Cheers.

Morgan Sung: All of these 10-second videos require an immense amount of computing power, which is extremely costly to maintain. In a blog post, Sam Altman admitted that the company still needs to figure out how to make money off of Sora. He wrote, “People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.” Facing heat from copyright holders like Disney and Nintendo, Altman also announced extra guardrails for the app to curb infringement. Now, users are complaining that everything they try to generate using Sora 2 gets flagged for violating the policy on copyrighted content. They’re already getting bored of the app. This whole cycle has been described as the AI hype machine. Big investments are made based on big promises of innovation, disruption, revolution. This hype fuels more investment, which, in turn, fuels the hype. The cycle continues when a new product launches. Meta, for example, launched its own AI social video app, called Vibes, last month too, which was quickly forgotten when Sora launched.

Alex Hanna:  AI hype is effectively premised on fear of missing out. It is the fear that if you don’t get onto this new technology, you are going to be left behind. 

Morgan Sung: That’s Alex Hanna, a sociologist and the Director of Research at the Distributed AI Research Institute. 

Alex Hanna:  If you’re a corporate manager, you’re going to have your competitors just leave you in the dust. If you are a teacher, you are doing a disservice to your students by not preparing them for the job market of the future. If you are a student, you are going to miss out on all the skills and all your classmates are going to be outperforming you. And as a worker, you will be doing things the old way, the analog way, and everyone is going to be outpacing you.

Morgan Sung: Alex and her co-author, Emily M. Bender, recently published a book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Emily runs the computational linguistics program at the University of Washington. This is a field of study that applies computational methods, including machine learning, to human language.

Emily M. Bender:  I often get asked the question, well, aren’t you worried that students are going to get left behind? Etc. And my answer to that is often, where is everybody going? Like, this metaphor of being left behind suggests that people are running off into some brilliant future. I just don’t see it, you know, setting aside the fact that the technology doesn’t do what it’s being sold to do, that it is overhyped and overpromised. The idea that we’d be better off interacting with screens instead of interacting with people at all stages, that’s just not the future that I want.

Morgan Sung: In this episode, we’re talking about the AI hype machine, when it started, how it’s fed, and why a growing corner of critics say they see right through it. 

This is Close All Tabs. I’m Morgan Sung, tech journalist and your chronically online friend, here to open as many browser tabs as it takes to help you understand how the digital world affects our real lives. Let’s get into it. 

All right, like we always do, we’re starting by opening a new tab. What is P(doom)? In their book, The AI Con, Alex and Emily talk about these two groups. There are the AI boosters, the people who are optimistic that AI will pave the way to our utopian future. Then there are the AI doomers: the people who catastrophize and believe that AI progress will usher in an era of societal collapse and human extinction. It’s very Matrix.

[Clip from the film “The Matrix”] The Matrix is a system, Neo. That system is our enemy.

Morgan Sung: But before we break this down further, let’s start by defining our terms. Here’s Emily.

Emily M. Bender:  Artificial intelligence does not refer to a coherent set of technologies, and it has throughout its history, since it was coined by John McCarthy in 1955, basically been used to sell this idea of some magic do-everything technology in order to get money. Initially, it was research funding and then DOD money and now a lot of it is venture capital money. 

Alex Hanna:  Yeah, and the way that this has proliferated in the modern day is that so many things get called AI. So that could be automated decision-making systems used for determining whether someone gets social services. And so that gets looped in, and then we also get recommendation systems, things like the TikTok algorithm, the Instagram Reels algorithm, pick your short-form video. But then, it’s really manifest in these large language models and diffusion models that are looped into the category of generative AI.

Morgan Sung: You start this book from this one moment in 2023, when Chuck Schumer, the Senate majority leader at the time, held a series of forums around AI. Can you take us back to that moment and, like, set the scene for us?

Alex Hanna:  So, late 2023, Chuck Schumer is convening the eighth of a total of nine Senate AI Insight Forums, and he asks folks, this is very weird, he asks, “what is folks’ probability of doom?” And this is abbreviated as P(doom), and since this is an audio platform, that is P, open parenthesis, doom, close parenthesis. And he also asked what people’s P(hope) is. So this means, what is your probability that there’s going to be some kind of a doom scenario, in which, by hook or by crook, some kind of thing called AI is going to outperform or outsmart humans and take over and lead to human extinction. And in the book, we start and we say, well, this is the wrong question. But also, if you’re looking at harms that are happening in the here and now, there are many that exist, whether that be non-consensual deepfake porn made of adults and children, the use of automated decision-making in weapons targeting, especially in Gaza, and then we also talk about students having their exams effectively judged by these automated tools. So talking about P(doom) in this register is asking the wrong question and focusing on the wrong things.

Emily M. Bender:  But oftentimes it looks like the doomers, the people with a high P(doom) value, the people who take that question seriously in the first place, and the boosters, the people who say this is gonna solve all our problems, are like the opposite ends of a spectrum. And that is how these people present themselves, it is how the media often presents what’s going on, and it is very misleading. I think that one of the points that we make is that doomerism is another kind of AI hype, because saying, our system is so powerful it’s going to kill us all, is still a way of saying it’s very powerful. But also we make the point that the doomers and the boosters are two sides of the same coin. And it, I think, becomes very clear if you look at it this way, which is to say, the doomers say, “AI is a thing, it’s imminent, it’s inevitable, and it’s gonna kill us all.” And the boosters say, “AI’s a thing, it’s imminent, it’s inevitable, and it is gonna solve all of our problems.” And it’s pretty easy to see these are the same position with just a different twist at the end.

Alex Hanna:  And the funny thing about this boosterism, doomerism dichotomy is that these are many of the same people, or they run in many of the same circles. So, you know, there was this document that was put out called AI 2027, which is a kind of choose-your-own-adventure with only two endings, and in one of them, you know, everyone dies. But the lead author of this worked at OpenAI. And there are many such cases of people who are working on quote unquote “AI alignment” who are in these industries. So, again, it’s not as if they’re against the building of AI, or that we should just say no; it’s actually a very narrow segment of people.

Morgan Sung: You described this industry as the AI hype machine. What does the modern AI hype machine look like? I mean, who are the players?

Alex Hanna:  Yeah, I mean, the players are many of the big tech players that we know. So Microsoft, Google, Amazon, Meta, but with some new entrants, OpenAI being the most significant one. And along with OpenAI, a few offshoots, so Anthropic is kind of the most notable one. And then the companies creating the shovels for the gold rush, so that’s your Nvidia, and then the Taiwan Semiconductor Manufacturing Company, abbreviated as TSMC.

Emily M. Bender:  I want to say that we see AI hype not just originating from those big players, though that is a large source of it. We also hear over and over again about people working in various businesses being told by their higher-ups that they have to try this new AI thing. And so there’s this sort of secondary promulgation of hype that comes from middle management and up, who have been sold on the idea that this is going to, you know, really increase productivity. And, you know, on the one hand, it’s a very useful excuse for doing layoffs that they may have already wanted to do anyway, but on the other hand, some people seem to have really bought into the idea. So they tell the people working for them, you have to spend time figuring out how to make yourself more productive by using these so-called AI tools, because everyone’s telling me that that’s the way of the future.

Morgan Sung: I mean, the obvious way these players are feeding into the AI hype machine is by extolling the virtues of AI, or, you know, kind of spreading this very doomer-ish sci-fi rhetoric. But what other strategies are being used to feed this machine?

Emily M. Bender:  So one important strategy is what I sometimes call citations to the future. So people will say, yeah, yeah, it’s got problems now, but it’s going to do all of these things. And I think it really is the only technology that we are expected to evaluate based on promises of what it will be doing, right? That car that I just bought only gets, you know, 35 miles to the gallon. But that’s OK, because the later one’s going to get 50. We don’t talk about it that way, except with the so-called AI technology.

So, citations to the future is one big strategy, and another one is anthropomorphizing language, talking about things that have happened as if the computer systems themselves did it of their own volition and autonomously, instead of people having used the system to do it or done something in order to build the system. So it’ll be something like, AI needs lots and lots of data. Well, no, people who want to build the systems that they’re calling AI are amassing lots and lots of data in order to build them. Or AI is thirsty, it needs lots of water, or AI was able to identify, you know, something in a blurry image. It’s like, no, in no sense did it, right?

People used XYZ tool in order to do a thing, or, in order to build these tools, they are using lots of water and so on. So this anthropomorphizing language sort of shifts the people out of the frame and hides a bunch of accountability, and at the same time, makes the systems sound cooler than they are.

Morgan Sung: Alex and Emily also pointed out that players in the AI industry push this adoption of AI into our everyday lives by really trying to humanize the product. We’re gonna dive into that in a new tab. First, a quick break. 

Time for a new tab! Are we really just meat machines?

Let’s talk about the technology itself, like the way people talk about large language models as AI, um, ChatGPT, Claude, Grok. Many people understand that these models are basically predicting the words that most often go together. But can you break it down further? Like, what’s really going on under the hood there? 

Emily M. Bender:  So, the first very important lesson is that when we say word, we’re actually talking about two things. We’re talking about the way the word is spelled and pronounced, and what it is used to mean. And one thing that makes that hard to keep in mind is that, as proficient speakers of the languages we speak, pretty much anytime we encounter the spelling or sound of a word, we are also encountering what the person using it is using it to talk about. And so we always experience the form and meaning together. But a language model, the core component of something like Gemini or Grok or Claude or ChatGPT, is literally a system for modeling which bits of words go with which other bits of words in whatever collection of text was input to create that model. And so what we have are models that are very good at putting, literally, like, spellings of parts of words next to each other in a way that looks like something somebody might say.

Morgan Sung: Emily and Alex have come up with a few phrases that illustrate what large language models really are, which also describe the limitations of this tech. We’ve got synthetic text extruding machine. 

Emily M. Bender:  The choice of the word extrude is very intentional because it’s a little gross. 

Morgan Sung: Racist pile of linear algebra. Spicy autocomplete. And one phrase that really took off, stochastic parrot. Emily coined the phrase in a research paper she co-authored in 2020. Parrots can mimic human speech, but whether they can really comprehend it, that’s dubious. Stochastic comes from probability theory. It means randomly determined. So a stochastic parrot essentially stitches together language according to probabilities, and does so convincingly, but it doesn’t understand it.
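To make that “likely next word” idea concrete, here is a minimal, hypothetical sketch in Python of the same basic move at toy scale: count which words tend to follow which in a tiny corpus, then sample a probable continuation. Real large language models operate over sub-word tokens with billions of learned parameters, so this illustrates the mechanism, not the scale.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": learn which words tend to follow which,
# then repeatedly sample a likely next word given the current one.
corpus = (
    "the parrot mimics speech . the parrot repeats words . "
    "the model predicts the next word ."
).split()

# Count which words follow which (bigram counts).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        # random.choice over the list of observed followers samples
        # in proportion to how often each word followed the current one
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the parrot repeats words . the model predicts"
```

The output can look fluent because word-order statistics are respected, but nothing in the program has any notion of what the words mean, which is the point the “stochastic parrot” phrase is getting at.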

Emily M. Bender:  Starting with OpenAI’s GPT-2 and GPT-3, they were using it to create synthetic text. And so one of the things we worried about in that paper is, what happens if someone comes across synthetic text and doesn’t know that it was synthetic? What we didn’t realize at the time is that people would be happy to look at synthetic text while knowing that it’s synthetic. That is very surprising to me. And so the phrase stochastic parrots was this attempt to make vivid what’s going on, to help people understand why the output of a language model run to repeatedly answer the question, what’s a likely next word, is not the same thing as text produced by a person or group of people with something to communicate. And what’s happened, it’s been fascinating as a linguist to watch that phrase go out into the world. So for the first little while, it was people referring to the paper, and then it sort of became people talking about, um, that claim that large language models are not understanding, they’re just repeatedly predicting a likely next word. And then it got picked up or interpreted as an insult, which is surprising to me, because in order for it to be an insult, the thing that it’s being applied to would have to be the kind of thing that could be insulted.

Morgan Sung: Then in 2022, Sam Altman tweeted, I am a stochastic parrot and so are you. 

Emily M. Bender:  I think what happens when Sam Altman picks it up and tweets that is that it is, on the one hand, sort of an attempt to reclaim what is understood as an insult or slur by people in that mindset, but also, and very importantly, it is about minimizing what it is to be human, so that he can claim that the system that he’s built is as good as a person.

Morgan Sung: Emily and Alex say this concept of comparing humans to, essentially, flesh machines is a classic move in the AI hype machine playbook. It’s reducing humanity and what it means to be human to programming, like Eliza in the 60s. Eliza was an early natural language processing program designed to mimic a therapist. Think of it as a great-great-great-grandparent chatbot of ChatGPT. A lot of people, from academics to government leaders to tech industry giants, bought into the Eliza hype. And that freaked out Eliza’s own creator, Joseph Weizenbaum. In a book he published in the 70s, Weizenbaum warned that machines would never be able to make the same decisions that humans make because they don’t have human empathy. His criticism of AI caused a stir in the research community. And decades later, AI boosters are still making the same kind of claim: that humans and machines aren’t that different. But what does this devaluing of humanity really mean for us?

Alex Hanna:  Yeah, I mean, it means a lot of things. It really seems to emphasize that there are, kind of, aspects of human behavior that can just be reduced to our observable outputs, right? Humans are just things that output language or output actions, when that’s not true. Humans have a much more vivid internal life. We think about others. We think about, kind of, co-presence. But it’s more about how we’re comparing ourselves to machines that are programmed by people, and those people and those institutions have particular types of incentives to make machines that behave as such. So those are the kinds of implications it has, and it also has implications for other kinds of moves toward dehumanization, and for how we treat people with regard to dignity and rights.

Morgan Sung: Can you also give concrete examples of where we see this kind of, uh, devaluing of humans? 

Emily M. Bender:  So I think if we say that humans can be reduced to their outputs, that leads to lots of problems. And one is we end up saying, you know, the words that teachers and students say in the classroom are the learning situation. And so we can replace the teacher with a system for outputting words, and then those students will get as much, and maybe it’ll be personalized and it’ll be better. And that is dehumanizing to teachers, clearly, and also to students, because it removes, you know, everything that is about the student’s and teacher’s internal life, and about their relationship and about their community, from the situation. But I think it’s also really important in terms of the workforce more generally, that basically if we say, well, humans, like large language models, are systems for outputting words, then it’s a very small step to basically saying the whole value of this person is how many words they can output, and creating a very, very dehumanizing work environment for people.

Alex Hanna:  We also see this in other domains, like the Amazon work floor and the way these mini robots flit from place to place alongside the so-called, quote unquote, pickers. People in Amazon warehouses have to pick things and then deliver them. So there are a lot of implications for that, and I think also for seeing the humanity in other folks and how we treat other folks. You know, if they’re merely meat machines, then what does it say about how we view them with respect to, kind of, personal rights and human rights and what kind of rights they should be afforded?

Morgan Sung: This idea of human beings just being walking meat machines is chilling. It definitely creeped me out. What are the other real-world consequences of this thinking? Let’s open a new tab. Who’s really harmed by AI hype? Alex and Emily have said that their goal in writing The AI Con is to reduce the harm caused by AI hype. Automation, for example, doesn’t just replace jobs. Healthcare providers are increasingly relying on AI products for medical triage, to decide which patients to see first. Free legal representation, a guaranteed right in criminal cases, can be replaced by a lawyer using a chatbot. All of this potentially lowers the quality of these services and introduces bias into these systems. Artists and other creatives, meanwhile, are struggling to make ends meet as AI generators, sometimes trained on their own work, are used as a cheaper, faster alternative. And then there’s how large language models are disrupting our whole information ecosystem.

Alex Hanna:  There’s a metaphor we use in the book, the idea that information is being output from these models and results in information ecosystem spills, like toxic spills that really can’t be cleaned up. There’s not really a reliable way to detect synthetic text. And so you’re having to deal with and navigate and try to understand whether something on the internet is actually reflective of truth claims that are being made and perhaps researched more deeply by human individuals. 

Morgan Sung: You’ve written that the strongest critiques against AI boosterism come from Black, brown, poor, queer, and disabled scholars and activists. Can you talk about some examples of these critiques and why these groups specifically are so uniquely positioned to make them?

Alex Hanna:  So we wrote about that in the register of thinking about the ways in which systems, and here I want to say data-driven systems, not just large language models, just don’t work for Black and brown communities, queer and trans people, and then people like refugees and people on the move. The kind of pioneering work of Drs. Timnit Gebru and Joy Buolamwini in their paper Gender Shades talks about facial analysis systems, specifically the way that facial analysis systems do very poorly on darker-skinned women, and that there’s a huge delta between darker-skinned women and lighter-skinned men. Sasha Costanza-Chock talks about how tools like TSA scanners do very poorly on trans people, typically flagging genitals or chest areas as anomalies. And then there’s the kind of disparities in how systems talk about women. There have been a few papers talking about the ways in which different tools, in this case a word embedding space, make associations between people and occupations. So, man is to doctor as woman is to… typically, the completion is nurse. So it bakes in those presuppositions.

Alex Hanna:  All of this stuff effectively happens in large language models [laughter] and happens in image generation models as well. There’s some great research by the Bloomberg data team that shows that if you input something like a nurse or a housekeeper, it typically outputs a phenotypically darker-skinned woman. If you type in CEO, a white man. And so those kinds of elements are the bias element of it.
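For readers who want to see the “man is to doctor, woman is to nurse” pattern mechanically, here is a hypothetical sketch of analogy completion in a word embedding space. The vectors below are hand-made toys so the arithmetic is easy to follow; in real embedding models (word2vec, GloVe, and the like), the vectors, and any biased associations they encode, are learned from large text corpora rather than written by hand.

```python
import numpy as np

# Contrived vectors purely to illustrate the analogy mechanism.
# In real systems these are learned from data, and the gendered
# associations come from the training text, not hand-coding.
vectors = {
    "man":    np.array([ 1.0, 0.0, 0.0]),
    "woman":  np.array([-1.0, 0.0, 0.0]),
    "doctor": np.array([ 1.0, 1.0, 1.0]),
    "nurse":  np.array([-1.0, 1.0, 0.5]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Classic analogy completion: doctor - man + woman ≈ ?
query = vectors["doctor"] - vectors["man"] + vectors["woman"]
ranked = sorted(vectors, key=lambda w: cosine(query, vectors[w]), reverse=True)
print(ranked[0])  # "nurse", the association baked into these toy vectors
```

The same vector arithmetic that famously completes “king minus man plus woman is roughly queen” will just as readily reproduce occupational stereotypes when those associations are present in the underlying data.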

Emily M. Bender:  Ruha Benjamin sums it up really nicely in this beautiful essay called The New Artificial Intelligentsia, which appeared in the LA Review of Books in 2024. And she’s talking about these ideas of transhumanism and merging with the machines. She says this zealous desire to transcend humanity ignores the fact that we have not all had the chance to be fully human. My interpretation of what she’s saying is that the people that society does not accord full humanity to have a very different experience of technology, both in the ways, as Alex is saying, it’s being used on them, in the ways it doesn’t work well for them, and just in the way that it intrudes on their lives. And so people who have the privilege of not experiencing any of that tend to be less sensitized to what’s going on and to have a less informed perspective.

Morgan Sung: And this less-informed perspective encourages AI boosters, who continue to fuel the hype machine. This means investing in and launching new products at a breakneck pace, often overlooking the real-world impact. The MIT Technology Review recently reported that generating one 5-second AI video uses about 3.4 million joules, the equivalent of running a microwave for over an hour. At scale this amount of energy consumption is devastating for the environment. And running all of this comes at a steep price for AI companies, too. 
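As a rough check on that comparison, here is the arithmetic; the microwave’s wattage is an assumption, while the 3.4-million-joule figure is the one cited above.

```python
# Back-of-the-envelope check of the energy comparison above.
video_energy_joules = 3.4e6   # reported energy to generate one 5-second AI video
microwave_watts = 800         # assumed power draw of a typical household microwave

seconds = video_energy_joules / microwave_watts
print(f"~{seconds / 60:.0f} minutes of microwave time")  # ~71 minutes at 800 W
```

At a higher assumed wattage, say 1,000 W, the figure drops to just under an hour, so “over an hour” holds for typical lower-powered microwaves.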

Like we talked about earlier, OpenAI’s Sora app is proving to be wildly expensive, with more users generating videos than actually watching them. And after the copyright fiasco and subsequent new guardrails, it seems like some initial adopters are already moving on. Can the hype machine sustain this kind of frenzied investment with such limited return? 

Okay, we’re opening one last tab. Is the height machine breaking? 

Do you think the AI hype bubble is going to burst? I mean, like, are there economic critiques? We’ve heard the social ones, but is there anything pointing to the AI hype bubble possibly at least deflating?

Alex Hanna:  Yeah, well, the problem is that there’s so much capital expenditure going into building things like data centers, and they’re going into these massive data center build-outs, where, you know, the projections for how much OpenAI, Microsoft, Google, Amazon, and Meta are spending on this are astronomical. I mean, hundreds of billions of dollars, some of the largest technological infrastructure projects that we’ve ever seen.

At the same time, OpenAI, again, the company that has the most queries to ChatGPT and the most people using its products, is making revenue on the order of maybe $10 billion a year. So it’s just orders of magnitude less. And the kind of metaphor that’s being used is, well, we have to build the railroads first, and then once the railroads get going, we can put rail cars on them. But that metaphor doesn’t work at all. People are already using the product. And, you know, companies are already saying, we’re not getting a lot of value out of this. You know, there was something that came out of MIT which said 95% of companies just haven’t really gained value from quote unquote AI. So what’s happening? This is very bubble shaped, you know, and I don’t know how the story ends, but it’s very alarming that these four to seven companies are propping up the US and world economy right now. So what happens when the bubble deflates or bursts? It’s not going to be good.

Morgan Sung: Like you said, um, you finished this book in September 2024. The AI industry has only grown since then. What have you learned about the state of the AI hype machine from the reception to your book? 

Emily M. Bender:  I would say what I’ve learned the most about is the resilience of people and the importance of connection and community. So the antidote to the hype is a variety of things. One is ridicule as praxis, as we say in the book, and also solidarity and labor movements, but also just sort of connection. And one form of that connection is that there are a lot of people who feel isolated in a workplace or a social circle where everyone around them seems just completely gaga for this technology and they’re the odd one out. And so one of the joys of both our podcast and this book has been to find those people and be found by those people who say, oh, I’m so glad I’m not the only one. And then they can form community with other people who have the same reaction, and I think that that is super important.

Morgan Sung: One of the things we grapple with a lot within Close All Tabs is where to draw the line with AI use, you know. And again, that’s complicated. What is AI? For example, we don’t use ChatGPT, but we use an AI transcription tool for our interviews. Are there conditions under which using large language models, AI tools, is reasonable or justified, appropriate? And then, what’s your message to the average listener who maybe uses ChatGPT in their daily life but is not necessarily an AI booster and not necessarily an AI doomer?

Emily M. Bender:  Yeah. Um, so to the first question, I would say I never call it an AI transcription tool. I would say automatic transcription, right? And that is a use case where, you know, you want to look at the labor conditions of the people who produced it, where the training data came from. And it’s also a use case where you are well positioned to check the output and see if it’s working well for you, right? You’ve got something that has been recorded, you’ve got an automatically produced transcript, and you’re presumably going through and correcting it. And if it is wrong all the time, or if you have one that is particularly bad for non-Anglo names, for example, you might start looking for something that’s better. So that is a case of automation that I think can be okay. You still want to look into who produced it. Are there privacy implications? Can I use this tool without uploading my data to somebody else? And so on. But there are reasonable uses and reasonable ways to produce automatic transcription.

If we’re talking about chat bots of the form of ChatGPT, I don’t see reasonable use cases there. And partially we know that the labor and environmental costs are extraordinarily high, that this is not produced ethically. But even setting that aside, every time you turn to ChatGPT for information, you’re cutting yourself off from important sense-making. 

One of the examples I like to use: if you think about an old-fashioned search engine that gave you back, you know, the 10 blue links, and you’ve got a medical query, what might come back in those links is a link to, you know, something like the Mayo Clinic, and then your regional university medical center, so in the Bay Area, you know, UCSF. And you might get a link to Dr. Oz’s page, and you might get a link to a discussion forum where people with the same medical questions are talking to each other. And you can then look at those and understand the information that’s there based on what you know about the Mayo Clinic and UCSF and Dr. Oz and discussion forums. But that also helps you continue to update what you know about those kinds of sites.

Whereas if you asked a chatbot and you got back something that was just sort of some papier-mâché made up out of some combination of what’s in those sites, you not only don’t know how to contextualize what you’ve seen, but you’re also cut off from the ability to continue to understand the information environment. And then, very importantly, if you think about that discussion forum, any given, you know, sentence from that discussion forum interpreted as information, you’re going to want to take with a big grain of salt. But the chance to connect with people who are going through the same medical journey is priceless. And the scholar Chris Gilliard describes these technologies as technologies of isolation. And I think it’s really important to think, anytime you might turn to a chatbot: what would you have done three years ago? What would you have done when ChatGPT was not in your world, and what are you missing out on by not doing that now? The connections that you would make with people, the ongoing maintenance of relationships, the building of community, the deeper sense of what’s going on in the world around you, all of these are precious and, I think, not to be thrown away for the semblance of convenience.

And then I think the final thing that I would say is: look out for, identify, and reject the inevitability narrative. So the tech companies would like us to believe that AI is the future, it’s definitely coming, and even if you don’t like it, you have to resign yourself to it. And you’ll get people saying, well, it’s here to stay, we have to learn to live with it. And I refuse that. I say that is also a bid to steal our agency, because the future is not written.

Morgan Sung: Those are all my questions. Thank you so much for joining us.

Emily M. Bender: Yeah, thank you. 

Alex Hanna: It was a pleasure. 

Morgan Sung: Let’s close all of these tabs. 

Close All Tabs is a production of KQED Studios and is reported and hosted by me, Morgan Sung. This episode was produced by Chris Egusa and edited by Jen Chien. Close All Tabs’ producer is Maya Cueva. Chris Egusa is our senior editor. Additional editing by Chris Hambrick and Jen Chien, who is KQED’s director of podcasts. Original music, including our theme song and credits, by Chris Egusa. Additional music by APM. Brendan Willard is our audio engineer.

Audience engagement support from Maha Sanad. Katie Sprenger is our podcast operations manager and Ethan Toven-Lindsey is our editor in chief. Some members of the KQED podcast team are represented by the Screen Actors Guild, American Federation of Television and Radio Artists, San Francisco, Northern California Local. 

This episode’s keyboard sounds were submitted by my dad, Casey Sung, and recorded on his white and blue Epomaker Aula F99 keyboard with Greywood V3 switches and Cherry Profile PBT keycaps. 

Okay, and I know it’s a podcast cliche, but if you like these deep dives and want us to keep making more, it would really help us out if you could rate and review us on Spotify, Apple Podcasts, or wherever you listen to the show. Follow us on Instagram at CloseAllTabsPod, or TikTok at Close All Tabs. Thanks for listening.

 

