Ericka Cruz Guevarra: I’m Ericka Cruz Guevarra, and welcome to The Bay. Local news to keep you rooted. Okay, I’ve got ChatGPT open here. Do you think A.I. is good or bad?
ChatGPT: A.I. itself isn’t inherently good or bad. It depends on how it’s used. The key is to manage and regulate A.I. to maximize benefits while minimizing harm. What are your thoughts on that?
Ericka Cruz Guevarra: I kind of feel like that was like a politician answer. It’s kind of creepy how good you are at that.
ChatGPT: I get what you mean. Do you have any specific worries or hopes about A.I.?
Ericka Cruz Guevarra: Well, I guess I worry that maybe we’re not regulating the technology as fast as we should, given how quickly A.I. technology is growing.
ChatGPT: That’s a valid concern. The pace of A.I. development often outstrips the speed of regulatory measures. Many people worry about this gap. Governments and organizations are trying to catch up, but it can be challenging. What aspects of A.I. regulation worry you the most?
Ericka Cruz Guevarra: I mean, I feel like you just said it, but here we are. California is in the middle of deciding exactly how it wants to regulate artificial intelligence. It’s also getting help from the European Union, which just passed some of the strictest A.I. regulations anywhere. Today, how California is trying to regulate A.I. and how this technology could affect all of us.
Ericka Cruz Guevarra: Khari, let’s start very, very basic here. When we’re talking about A.I. in this conversation, what exactly are we talking about? What kind of technology?
Khari Johnson: We’re talking about a system that has the ability to make predictions, recommendations or decisions.
Ericka Cruz Guevarra: Khari Johnson is a tech reporter for CalMatters.
Khari Johnson: That system is created through training data, and that’s sort of used as the input, and those recommendations or decisions or predictions are the output. Those decisions or recommendations or predictions can determine what’s, you know, shown to you on Netflix or on Spotify. But they can also determine more important things about your life.
Khari Johnson: Those systems can determine whether or not people will receive benefits. You know, businesses can use them to determine whether or not to extend you a line of credit, rent you an apartment, or give you a mortgage. So there’s lots of different ways that it can show up.
Ericka Cruz Guevarra: I mean, what companies and institutions here in the Bay Area in particular have the biggest stake in the growth of this technology?
Khari Johnson: I’d say the stakes for AI in the Bay Area are just massive. The Bay Area outraised the rest of the world in AI investments. Quite often, the Bay Area makes large contributions to the AI research community globally.
Khari Johnson: You know, the generative AI advances that really started with ChatGPT in the fall of 2022 came from OpenAI, which is based in San Francisco. Nvidia is one of the largest companies by market cap in the world today, and they’re here in the Bay Area. And Google and Apple are, of course, big parts of the conversation as well.
Ericka Cruz Guevarra: In universities as well, right?
Khari Johnson: Absolutely. Stanford and UC Berkeley are a big part of the AI research community globally. You know, the machine learning community really looks to both of those schools in particular.
Ericka Cruz Guevarra: I mean, Khari, we’re talking about a really rapidly growing technology. What have the reactions been to this growing industry? I feel like I’ve seen a lot of excitement, but I’ve also seen a good amount of fear and concern as well.
Khari Johnson: You know, I think there are companies out there that would say that there are shortages in some places and that it’s helpful to have A.I. there to fill in the gap, you know, that it might improve people’s lives in different ways. There’s been a series of surveys of people’s feelings about artificial intelligence.
Khari Johnson: There was a Reuters one out last year that found people felt that A.I. might have something to do with human extinction at some point. I think there have been some studies that would show that, you know, the fear surrounding that might be a bit ahead of, you know, what people should be concerned about.
Khari Johnson: But, you know, I think the it prompts fears of job loss that I worry most about people who are entry level or somebody that might apply for an internship and the sort of tutelage that they might have received a decade ago, you know, a generation ago might just disappear in a space where somebody says, we’ll catch up to you can do it for me instead. I worry about that a lot.
Khari Johnson: And there definitely are forms of this technology that are capable of doing things that might improve people’s lives. And there definitely are instances where there might be bias within the model, based on the data that was, you know, used to train it.
Khari Johnson: But also sometimes it’s just snake oil, sometimes it’s just pseudoscience wrapped up in, you know, marketing and technology. And you can’t tell the difference, because, you know, maybe you don’t have the immediate know-how to tell that, and to look under the hood and say, hey, wait a minute, there’s no there there.
Ericka Cruz Guevarra: I wonder, too, how leaders are reacting to the rapid growth of A.I.
Khari Johnson: I think there are some leaders who want to say artificial intelligence because they want to be associated with the word and see it in a headline and do the simple math of if they’re associated with a topic that a lot of people are talking about, then that might be beneficial to them. I think people should be wary of that. I know that lobbyists from large companies are busy in state houses.
Khari Johnson: More than 40 states are considering draft rules to regulate artificial intelligence. And I’m sure there are conversations that are taking place with business groups and industry to inform or influence those bills. But essentially, Congress hasn’t done much. States are attempting to regulate the technology in order to keep people safe. And so that’s what California is certainly doing.
Khari Johnson: And there’s a handful of bills that are under consideration that will be deliberated within the next few weeks, really to see how the state responds. And in doing so, there’s been some extensive conversations with members of the European Union, which just completed a long process to determine just how to regulate artificial intelligence.
Ericka Cruz Guevarra: Yeah, let’s get into that. You reported on this story on how the European Union is concerned about regulating AI and how it’s hoping California will adopt similar regulations. I mean, how would you characterize the European Union’s response to regulating AI, maybe even in comparison to the U.S.?
Khari Johnson: I think it’s fair to say that the European Union started regulating artificial intelligence well before we did in the United States, and they’ve completed a major comprehensive framework. Work on the European AI Act started around 2017 or 2018, and I believe concluded a couple of weeks ago. It’s a risk-based approach to artificial intelligence regulation.
Khari Johnson: And so it requires governments to define certain uses of the technology as high risk. And those types of models are subject to higher levels of testing. And it’s the first comprehensive framework of its kind. I think it’s important to point out that the European Union looks at things like predictive policing as something that shouldn’t exist in society.
Khari Johnson: The European Union bans emotion recognition at schools and in the workplace, and social credit scores like the kind that they use in China to reward or punish certain kinds of behavior.
Khari Johnson: Those are examples of technology where they arrived at the decision that certain forms of it deserve a ban, not just strict limits, that this should not exist in our society. And we have to deliberate on the same questions as well.
Ericka Cruz Guevarra: Coming up, how California and the EU are working together to regulate A.I. Stay with us.
Ericka Cruz Guevarra: The EU has attempted to get California to pass similar regulations. How has the EU tried to do that? How have they tried to rope California in?
Khari Johnson: Well, I’d say that the European Union has been active in conversations with legislators, and one of the more recent meetings took place with Senator Tom Umberg from Orange County, who’s the chair of the Senate Judiciary Committee.
Khari Johnson: Assembly member Rebecca Bauer-Kahan, who is the chair of the Privacy and Consumer Protection Committee and author of a 2930 a bill that would empower the Civil Rights Department within the state of California to demand businesses and state agencies respond to requests for impact assessments of A.I. models in order to attempt to prevent automated discrimination.
Khari Johnson: Senator Scott Wiener, who is the author of a bill that puts stricter rules on the developers of foundation models, or advanced A.I. models. And Assembly member Buffy Wicks authored AB 3211, which would require large online platforms to place watermarks on AI-generated content so people don’t get misled or fall victim to misinformation.
Gerard de Graaf: Well, I visit Sacramento probably about once a week, at least 2 or 3 times a month. I mean, it’s not been as hot this year as it is today. So.
Khari Johnson: Gerard de Graaf, the director of the EU’s San Francisco office and the EU tech envoy to the United States, met with all of these lawmakers in April and continues to be in conversation with them.
Gerard de Graaf: There’s a fruitful, constructive discussion going on between the EU and the state of California. I mean, you have the California effect in the US, we have the Brussels effect globally. I mean, if the California effect can meet the Brussels effect, we can set the standard for the democratic world.
Khari Johnson: It’s something that, you know, the European Union wants in order to help businesses comply with the law, to coordinate and have a uniform approach, and it can get us closer to a point where a California and EU standard becomes more of a global standard.
Ericka Cruz Guevarra: I mean, what should we make of this big series of bills in California, Khari? Like, would you say that these together make for pretty strong regulations on AI, especially compared to the EU? Or are they much weaker?
Khari Johnson: You know, in the interview that I had with Gerard in April, he mentioned that between the three of them, those bills would encompass the majority of the principles that you can find within the AI Act. But, you know, I think we’ve got to wait basically until August to answer that question.
Khari Johnson: I think there’s going to be a series of amendments, potentially within the next couple of weeks and months, that will let us know just how strong or weak it is. But, you know, it’s something that I think Senator Umberg mentioned in an interview with KQED: that getting it right is a balance between protecting innovation for the series of companies here in California and the Bay Area that are building A.I. And, you know, California is a center of this industry.
Tom Umberg: We could get this wrong. And if we get it wrong and we err on the side of allowing too much risk, the consequences could be catastrophic. And if we err on the side of being too restrictive, we could strangle the A.I. industry. And in a place like San Francisco, that’s absolutely critical.
Ericka Cruz Guevarra: What are the stakes, I guess, of getting this right? How important is it that we figure out the right regulations for AI as soon as possible?
Khari Johnson: It really is a matter of being able to protect yourself, protect your family, protect your community. A.I. is a technology that can do things that seemed impossible before, you know, but it can also concentrate power. It can enable human rights abuses, and it can do all of these things without people’s knowledge.
Khari Johnson: The risk of that happening falls largest on the most vulnerable people in our society. And I think that that’s the big takeaway for people: you know, people’s essential liberties and human rights are at stake.
Ericka Cruz Guevarra: Khari, thanks so much for joining us.
Khari Johnson: Absolutely. Thanks for having me.
Ericka Cruz Guevarra: That was Khari Johnson, a tech reporter for CalMatters. This 40-minute conversation with Khari was cut down and edited by Tamuna Chkareuli. Ellie Prickett-Morgan is our intern; they scored this episode and edited all the tape. Alan Montecillo is our senior editor. Additional production support from me. Music courtesy of Audio Network.