On the first day of Google's annual conference for developers, the company showed off a robot with a voice so convincingly human that it was able to call a salon and book a haircut – never revealing that it wasn't a real person making the call.
The new technology is called Duplex, and its aim is to carry out natural-sounding conversations so that Google's virtual assistant can accomplish tasks over the phone on users' behalf. But the demo showed a product so real-seeming that it's also raising concerns about AI that can purposely fool humans into thinking they're interacting with a real person.
In a blog post describing the technology, Google says Duplex "is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement." Duplex is slated to start rolling out this summer within Google Assistant, where it will make appointments and reservations or check business hours – tasks that sometimes require a phone call.
Google says Duplex can only carry out natural conversations after being deeply trained within a specific subject, and that it can't carry out general conversations. (But that didn't stop people from imagining the uncomfortable conversations they'd offload. "Hey Google: Tell Marcus he's the father," Casey Johnston joked at The Outline. "Hey Google: Tell my landlord I'll send the rent uhhh next week.")
While Google wowed developers with the realness of the bot's speech, many observers immediately took issue with how the technology apparently tricked the human on the line.
"Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding 'ummm' and 'aaah' to deceive the human on the other end with the room cheering it... horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing," tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.
"As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay," she added.
Entrepreneur and writer Anil Dash agreed: "This stuff is really, really basic, but: any interaction with technology or the products of tech companies must exist within a context of informed consent. Something like #GoogleDuplex fails this test, _by design_. That's an unfixable flaw."
The engineers who designed Duplex address the issue only vaguely in the company blog post. "It's important to us that users and businesses have a good experience with this service, and transparency is a key part of that," they write. "We want to be clear about the intent of the call so businesses understand the context. We'll be experimenting with the right approach over the coming months."
Google vice president of engineering Yossi Matias told CNET that the software will likely tell the person on the phone that they're talking to a virtual assistant — though that didn't happen in the conversations showcased on Tuesday.
The concept known as the "uncanny valley" holds that adults often find it creepy to interact with a robot that seems very nearly – but not quite – human. But Google's new technology pushes that question of creepiness to a new level: What if we don't even know it's a robot we're talking to?
Copyright 2018 NPR. To see more, visit http://www.npr.org/.