While America Dithers, Europe Gets Busy Crafting Artificial Intelligence Regulations

European regulators are working to build international consensus for what they call “human-centric AI.” Regulators in the U.S.? Not so much. (Getty Images)

Who’s responsible when a driverless car runs over a pedestrian? Or a social media platform steers housing ads away from African Americans? These real-world outcomes driven by artificial intelligence are cropping up before politicians and regulators can get up to speed. At least, here in the U.S.

The European Commission isn’t rolling out any laws governing artificial intelligence right away, but it likely will eventually.

"As is the case with all technologies, AI raises a number of concerns that need to be tackled. If we are not vigilant, the use of AI may lead to undesirable outcomes — intended or not," wrote Pekka Ala-Pietilä, the chair of the EU’s High-Level Expert Group on Artificial Intelligence, in a recent statement on the European Commission website.

The group collected ideas from 50-plus experts and issued a report last December. This week, after receiving more than 500 comments in response, the group issued ethics guidelines for trustworthy artificial intelligence. Here are the "key requirements" for assessing AI:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

"One thing that jumped out at me was the sixth principle, which is societal and environmental well-being," said assistant professor Daniel Susser, who specializes in technology and ethics at Penn State. He’s excited that Europeans are starting to ask the tough questions about what we want companies like Google and Amazon to be responsible for as they dream up potential futures for us.

"Regulating, especially regulating technology, is generally thought of as an attempt to articulate what companies should not do," Susser said. "We also want to articulate a positive notion of what sort of world we want technology to be helping us build — and I think that will need to [hashed] out more specifically, but I’m happy to see it in there."

Next steps include a pilot program, set to roll out this summer, to develop draft rules. Whether the Europeans inspire U.S. politicians and regulators to follow suit is unclear. Americans have been singularly slow to consider a wide range of regulatory issues around technology. Facebook CEO Mark Zuckerberg has repeatedly apologized to federal lawmakers for various failings of the social media giant, but there are no laws to force change from his company and others in Silicon Valley.

Meanwhile, after a 14-year-old in Britain died by suicide, reportedly influenced by what she saw online, UK regulators proposed new laws to slap penalties on social media companies and technology firms if they fail to protect users from harmful content.

Experts say American companies driving much of the world’s AI won’t be able to ignore what happens in Europe.

Experts also say that having no regulation at all is not an acceptable option. Private industry has encountered blowback over its attempts at self-regulation. As Yoshua Bengio, a computer scientist and winner of the 2018 Turing Award for his work on deep learning, told the journal Nature:

Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t. Companies that follow ethical guidelines would be disadvantaged with respect to the companies that do not. It’s like driving. Whether it’s on the left or the right side, everybody needs to drive in the same way; otherwise, we’re in trouble.
