
San Francisco Blackouts Raise Concerns about PG&E and Robotaxis


A view of traffic from above as more than 130,000 PG&E customers were without power in San Francisco, California, United States on December 20, 2025. (Tayfun Coskun/Anadolu via Getty Images)

Airdate: Wednesday, January 7 at 9 AM

A blackout that left one-third of San Francisco customers without power – some for up to three days – was one of six outages that plagued PG&E throughout the holidays. Disabled Waymos blocked streets. The Nutcracker was cancelled. Restaurants and businesses were closed. Customers and politicians are demanding answers and calling for the end of PG&E’s monopoly. We’ll talk about the blackout and what it can tell us about the reliability of our power sources and Waymo’s vulnerabilities, and we’ll hear how it affected you.

Guests:

Joe Eskenazi, managing editor and columnist, Mission Local

Jeffrey Tumlin, former Director of Transportation, San Francisco Municipal Transportation Agency (SFMTA)

Brad Templeton, entrepreneur, writer; Templeton is the chairman emeritus of the Electronic Frontier Foundation and previously worked at Waymo

Bilal Mahmood, supervisor, District 5, Board of Supervisors San Francisco


This partial transcript was computer-generated. While our team has reviewed it, there may be errors.

Alexis Madrigal: Welcome to Forum. I’m Alexis Madrigal. Last month, there was a blackout in San Francisco seen around the world. While losing power is bad, it’s also pretty common. What was unusual was that there were countless Waymos stranded across the city.

The incident made global news, and it highlighted two realities about San Francisco. The city is vulnerable both because of old infrastructure maintained by PG&E and also because of a novel technical system — the autonomous vehicle fleet deployed into the city by Waymo.

The incident raised a lot of questions about how prepared the city is for a real, serious emergency — or, as one Carnegie Mellon computer science professor who works on autonomous vehicles put it: if the response to a blackout goes wrong, regulators are derelict if they don’t respond by requiring some sort of proof that the earthquake scenario will be handled properly.

Up first this morning to discuss what happened, let’s bring in Joe Eskenazi, managing editor and columnist at Mission Local. Welcome, Joe.

Joe Eskenazi: Yeah. Welcome. Happy New Year.

Alexis Madrigal: Thank you. So the holidays do feel like a long time ago already, but tell us what happened. There was a series of blackouts, right? It wasn’t just this one — though this one made the headlines.

Joe Eskenazi: The analogy that I probably use too much is the old Sesame Street baker who reliably fell down the stairs and dropped all the stuff he was carrying. And it did feel like that after a while, because there was, you know, a concatenation of blackouts.

The big one — or, to use another analogy, the blackout and then its aftershock blackouts. The big one came December 20th and was not fully remedied for at least two days. That knocked out power to about one-third of San Francisco, ostensibly due to a substation fire. But there were several after that, and it got to the point where the outages felt almost periodic: a planned blackout superseded by an unplanned blackout a couple of days later.

Alexis Madrigal: I mean, what’s going on? When I was looking into that substation that had the fire, it looked like there had actually been problems there a bunch of times over the last few decades. But it seems like it must be more widespread than that, given that they weren’t able to get things under control.

Joe Eskenazi: You know, I was a college radio guy myself, and oftentimes the problem came down to one button not being pushed. An alarming amount of damage can be done by just one small mistake. The famous New York City blackout of 1965 was caused by one device failing at a Niagara Falls power plant many miles away, which overloaded a wire and caused a chain reaction.

So it wasn’t quite as dramatic as all that — knocking power out to the entire Northeast and Canada — but it doesn’t take much. And when fire is involved with electronics, it’s not great.

Alexis Madrigal: Yeah, there were also these kinds of knock-on effects for people in San Francisco. They had to bring in huge generators, which then made all this noise locally in the neighborhood. When were things actually back to normal? Did it take until the new year?

Joe Eskenazi: For some people, it did take a great deal of time. And the term “back to normal” is broad, because if you lost all your food, or if you lost business because your shop couldn’t process credit cards and lost its supply of perishables during—

Alexis Madrigal: —the most important retail week of the year, perhaps.

Joe Eskenazi: Yeah. It’s not easy to be a San Francisco small businessperson — and that doesn’t help.

Alexis Madrigal: One of the consequences, somewhat unanticipated — certainly by the company itself — was these Waymos that got stuck. It wasn’t initially clear exactly why that happened. We found out subsequently what was going on there. Well, first of all, maybe they should put a damper on talk of flying cars. But—

Joe Eskenazi: So we’ve all seen videos of Waymos misbehaving. I’ve written a lot about Waymo, and I will defer to Jeffrey Tumlin to my right on matters of transit. But I’m willing to concede, even if the data is incomplete, that a computer will drive better than a human being who has human failings and checks their text messages and listens to ball games and does all those sorts of things.

However, we’ve seen patterns where there are problems with the technology. And oftentimes the discussion becomes a dichotomy of: “They’re safer than humans — what are you gonna do?” versus, “There are problems we could fix, and they could be better still.” And the regulatory environment leaves much to be desired.

Alexis Madrigal: Yeah. I mean, the thing I’ve always thought about is that even if they end up being safer in the ultimate sense, there will be new types of mistakes — because we’re used to the mistakes humans make, but theirs will be—

Joe Eskenazi: —different. They’re certainly different. And I can tell you, as someone who rides a bike around the city, this machine will not chase you down and yell and scream and hold grudges. But on the other hand, it does not react like a person, and you cannot make eye contact with it.

And as you’ve seen in the case of running over a cat or a dog — or in the Cruise case several years ago, a person — there is imperfection in knowing when there are objects under the car.

Alexis Madrigal: Yeah. And I think that just goes to it. The lidars allow them to have a greater and more precise view farther away, but there isn’t a sensor under the car. There’s nothing under the car to sense anything.

Let’s bring in Jeffrey Tumlin, former director of the San Francisco Municipal Transportation Agency — the SFMTA. Welcome, Jeff.

Jeffrey Tumlin: Thank you. Always glad to be on KQED.

Alexis Madrigal: Glad to have you back. Were you surprised by what happened to the Waymos? Had this ever come up in your discussions with them? You’ve been a critic of Waymo at times — what happens when the power goes out?

Jeffrey Tumlin: This was not a surprise. This is a set of questions the city has been asking all autonomous vehicle companies for the last six years — particularly in an earthquake scenario when not only the power goes out, but the cellphone grid goes down as well, roads are blocked with rubble, gas lines have broken, the city is on fire, and we need to get emergency response vehicles quickly to all points of the city in order to put out those fires.

What do Waymo vehicles do? And the answer to date has been: we come to a complete stop wherever we happen to be, and we wait for rescue. And that has profound unintended negative consequences.

Alexis Madrigal: Yeah. I mean, in most cases, for an individual vehicle, that’s clearly the right thing for them to do. But at a system level, with the whole fleet across the whole city, that’s where these second-order consequences come in.

Jeffrey Tumlin: Right. And it gets to the question of scaling. So first of all, for humans, the expectation is you go to the minimum-risk condition — which includes not only the risk to you and your passengers, but also to other roadway users. So when something bad happens or you get confused on the road, you don’t just come to a stop in the middle of an intersection.

Alexis Madrigal: Yeah. Hypothetically, the right thing to do — the goal — is to get out of the way.

Jeffrey Tumlin: For a variety of reasons that Brad Templeton can speak to better than I can. But in part because in the AV world, we have defined safety as collision avoidance. And if your goal is merely collision avoidance, coming to a complete stop wherever you happen to be is the optimal safety solution — because of how we’ve defined safety.

And this is one of the things that I’ve been talking about for six years as a concern. Safety is very complicated. When you look at shared scooters or trains or buses, we have dozens of definitions of safety that we care a lot about. We collect data on them, and we share that data transparently with the public.

Alexis Madrigal: Brad Templeton, let’s bring you in since you have been name-checked — entrepreneur and writer, formerly worked at Waymo. Welcome, Brad.

Brad Templeton: Good to be here. And I’m sorry — I just missed the last two minutes because of my wonderful hotel Wi-Fi.

Alexis Madrigal: Yeah — well, hopefully we’ll keep you for the next two minutes at least. The question is, Brad: were you surprised by what happened to the Waymos in San Francisco? Did you think this scenario would have had a solution, or had been contemplated by Waymo?

Brad Templeton: I was surprised. I think many people were surprised, because we know that Waymo actually does try. They have teams of people who look to say, “What’s the worst that could happen in various circumstances?” They build simulations of bad things going on on the roads and in the city.

And obviously they didn’t do enough in this circumstance, and they were surprised by the number of requests they would get from their vehicles — and they ran out of people to handle special situations.

My understanding is that Waymos treat a traffic light that’s malfunctioning as a four-way stop — which is what the law actually requires. And from the very beginning — I worked at Waymo a long time ago now — one of the first realizations was that you can’t just drive timidly as your first instinct when you’re focused on collision avoidance. You have to assert yourself at a four-way stop.

So I think they did that as they normally would. Now, you know who doesn’t know the rules of four-way stops? It turns out we humans don’t know the rules at four-way stops and dead traffic lights in the same way.

And what happened, as far as I can piece together, is that after a while, the human drivers just started doing a lot of weird things — going through the lights, not really following the rules you’d expect. And this did what, again, you’d want — the vehicles say, “The humans have all gone crazy. I’d better phone home to mama.”

And the problem is that they all phoned home to mama at the same time — and they got, “Sorry. Your call is very important to us. There are 23 other cars waiting in line.”

Alexis Madrigal: But Brad, isn’t this just part of the system? If the nature of the beast is that when they get in trouble they phone home, then isn’t it baked into their emergency response that they’re going to surge beyond the capacity of the system?

Brad Templeton: So that is a mistake they made. They didn’t realize they would surge in a circumstance like this.

Here’s the really good news about this — and people think, “Wow, this is a good story.” The good news is that robots don’t make the same mistake twice — or at least their programmers, once they learn a lesson, fix it.

And so we had this happen in a circumstance where, as far as I can tell, nobody was really in any danger. Traffic was gummed up a little bit — but I think we can tolerate a little bit of that to learn really important lessons, so that when something like the Big One comes, they won’t be surprised by this.

They’re going to make mistakes tomorrow, and they’re going to make mistakes for the rest of their existence. And it’s not the mistakes you judge them on — it’s how they adapt to them, how they improve, and whether they’re on a positive trend toward the goals that we have for transportation in our cities.

Alexis Madrigal: Well, maybe we judge them on both things, actually.

We’re talking about recent blackouts in San Francisco — including one that left a third of the city without power and disabled some Waymos. We’re joined by Brad Templeton, entrepreneur and writer; Jeffrey Tumlin, former director of the SFMTA; and Joe Eskenazi, managing editor and columnist with Mission Local.

We want to hear from you too. What was your experience of this? Did it change your mind or make you think differently about PG&E or the Waymo fleets in our cities?

You can give us a call. The number is 866-733-6786. How do you think PG&E did restoring power? 866-733-6786. Email forum@kqed.org. And of course, find us on social media — Bluesky, Instagram, etc. We’re KQED Forum.

I’m Alexis Madrigal. Stay tuned for more right after the break.

