Techno-utopianism, an ideology based on the premise that advances in science and technology will eventually bring about a utopia, has been a part of Silicon Valley’s DNA since the hippies from the LSD-fueled Acid Test days got into computers and online networking.
Stanford professor Fred Turner has dedicated a whole book, "From Counterculture to Cyberculture," to tracing how the area's pro-technology ideology grew out of the counterculture and back-to-the-land movements of the late 1960s. Since then, internet companies have benefited from a belief that technological advancements are inherently positive for society, and that technology itself is neither good nor bad, just a tool.
Today, some of those who have worked in the tech industry are challenging this idea and criticizing the tools they helped create. Former Google ethicist Tristan Harris and technology pioneers like Jaron Lanier are calling attention to problems like social media echo chambers, digital addiction, personal data collection and the spread of false information.
Guillaume Chaslot is one of the latest former employees to speak out. Chaslot began working for YouTube in 2010. These days, he says, he's trying to help people understand how the website recommends videos. Chaslot claims the site's algorithm often ends up promoting videos filled with falsehoods and conspiracy theories.
A Bug in the System
Before becoming a programmer, Chaslot studied physics. In college he decided to switch to computer science, believing that advances in artificial intelligence could have positive impacts on humanity.
“I went into computer science and did a Ph.D. on the most complex topic I could find," he says. "Which was the game of Go.”
A decade ago, Chaslot says, computers were not sophisticated enough to master the Asian board game. Today, computers can crush even the best human opponents.
Chaslot graduated and got a job at YouTube. In 2010, he began working on algorithms that recommended videos, and at first the experience, he says, was great. He was paid well and worked alongside very smart co-workers. But he also noticed something deeply unsettling.
YouTube, Chaslot found, was filled with videos that supported theories about everything from the earth being flat to vaccines causing illness. Not only was the website a home for this kind of content, Chaslot said, but the algorithms he was working on -- and the advertising model driving those algorithms -- were actually promoting the material.
This was an awakening for Chaslot. He began to feel that the collective brainpower that had drawn him to YouTube was contributing to the proliferation of flat-earth and anti-vaccine videos. So he decided to work on ways to solve the problem.
“I was very optimistic,” Chaslot says. “I was like, 'I am just going to propose solutions.' They would see they could work.”
He says he told his manager that he wanted to develop algorithms that shook people out of filter bubbles and echo chambers of misinformation. But Chaslot says his manager told him it wasn’t a good idea to work on that. He decided to pursue the project anyway. Shortly after that, Chaslot says, he was fired.
In Search of Transparency
After Chaslot was fired, he worked for a few smaller tech companies, but he couldn't let the issue at YouTube go. So he eventually started AlgoTransparency. It's a website that shows people which videos YouTube's algorithms recommend most often. It has sections dedicated to topics like mass shootings, elections and science -- topics that Chaslot says often attract problematic videos.
According to his analysis, “Is the earth flat or round?” is one of the searches most favored by the recommendation algorithm. Chaslot said he thinks these kinds of conspiracy videos resonate with some people because they feel like they are using YouTube to access some secret truth the powers-that-be aren’t disclosing.
“If you think that everybody else is lying, you are going to spend all your time on YouTube," he says. "For the algorithm it’s a super-cool win.”
Spending so much time on YouTube is a win for the company, Chaslot says. The more time you spend on the platform, the more advertising revenue is made for the site and those who create the videos -- some of whom may not even believe what they’re making.
“So the incentive is not to produce better-quality content, content that will help people,” Chaslot says, “but content that will help them spend more time online.”
Rewiring the Algorithm
A YouTube spokesperson says the recommendation algorithm has changed since Chaslot left. Now it’s less about maximizing the time people spend on the platform, and instead focuses on how “satisfied” people are by measuring a range of things, including likes, dislikes and shares.
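The shift the spokesperson describes can be sketched in the abstract. The functions, field names and weights below are entirely hypothetical, invented for illustration; YouTube's actual ranking system is not public, and nothing here reflects its real code.

```python
# Hypothetical sketch contrasting the two ranking objectives the article
# describes. All field names and weights are invented for illustration.

def score_by_watch_time(video):
    """Old-style objective: rank purely by expected minutes watched."""
    return video["expected_watch_minutes"]

def score_by_satisfaction(video):
    """Newer-style objective: blend explicit feedback signals.

    The weights are made up; a real system would learn them from data.
    """
    return (2.0 * video["likes"]
            - 1.5 * video["dislikes"]
            + 3.0 * video["shares"])

videos = [
    {"title": "conspiracy clip", "expected_watch_minutes": 40,
     "likes": 120, "dislikes": 400, "shares": 10},
    {"title": "science explainer", "expected_watch_minutes": 12,
     "likes": 900, "dislikes": 30, "shares": 250},
]

# The same catalog ranks differently under each objective.
top_by_time = max(videos, key=score_by_watch_time)["title"]
top_by_satisfaction = max(videos, key=score_by_satisfaction)["title"]
```

Under the watch-time objective, a long but widely disliked video can win; weighting likes, dislikes and shares lets the shorter, better-received one rise instead.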
For viewers, though, it's still a mystery. Chaslot says they don't know how YouTube analyzes their behavior and uses it to target advertising and content. Chaslot wants to make those inner workings more transparent to the general public.
“Working on YouTube’s algorithm, I realized there are a lot of things that people don’t understand about artificial intelligence,” he says.
AI is all over the internet, Chaslot says, influencing your behavior, manipulating you in ways you don’t realize. It's dictating everything from the posts you see on Facebook and Twitter to the videos recommended on YouTube.
Chaslot is now preparing to analyze videos uploaded around the midterm elections, when YouTube is bound to be filled with political propaganda, videos smearing candidates and all manner of narrators spreading dubious “facts.”