On June 17, software engineers at Google posted images on the company’s research blog that sent the internet into a tizzy. When fed images of pure static, Google computers trained on image recognition output psychedelic bonanzas of swirling color, floating pagodas -- and hybrid creatures part fish, part pig and part corgi.
As of today, the original post has 2,324 comments, ranging in mood from the contemplative (“Beyond the eye candy, there is actually something deeply interesting in this line of work”) to the panic-stricken (“Should I be scared?”).
The research team released the code used to create these visualizations in a follow-up post on July 1 so others could play with what they’re calling "DeepDream" -- computers dreaming of and rendering all manner of electric sheep.
It wasn’t long before a number of sites popped up to let us non-coders simply upload images and wait for the deeply disturbing output. One of those, Dream Deeply, even lets you choose between “a nap” and “a night’s sleep.” The choice essentially determines how many times an image runs through the system's feedback loop, becoming more and more divorced from its original content with each pass. Now there’s even a Reddit thread dedicated to the project.
So what’s really going on here?
Google’s engineers created the visualization tool to try to understand what’s happening inside their artificial neural networks -- roughly speaking, computers that mimic the way tangles of neurons fire in the human brain.
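The core trick behind the visualizations is gradient ascent on the input pixels: start from static, then repeatedly nudge each pixel in whatever direction makes a chosen neuron fire harder. The toy sketch below illustrates that loop in miniature -- a single random convolution filter stands in for a neuron in a trained network, and the filter, step size and image size are all illustrative assumptions, not Google's actual DeepDream code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "neuron": a single 3x3 convolution filter with random weights.
# In DeepDream this would be a unit deep inside a trained image-recognition net.
filt = rng.standard_normal((3, 3))

def activation(img):
    # Mean response of the filter slid across the image (valid convolution).
    h, w = img.shape
    total = 0.0
    for i in range(h - 2):
        for j in range(w - 2):
            total += np.sum(img[i:i + 3, j:j + 3] * filt)
    return total / ((h - 2) * (w - 2))

def activation_grad(img):
    # Analytic gradient of the mean response with respect to each pixel:
    # every window a pixel belongs to contributes the matching filter weight.
    h, w = img.shape
    grad = np.zeros_like(img)
    for i in range(h - 2):
        for j in range(w - 2):
            grad[i:i + 3, j:j + 3] += filt
    return grad / ((h - 2) * (w - 2))

def dream(img, steps=20, lr=0.1):
    # Gradient ASCENT: push the pixels toward whatever excites the "neuron".
    img = img.copy()
    for _ in range(steps):
        g = activation_grad(img)
        img += lr * g / (np.abs(g).mean() + 1e-8)  # normalized step
    return img

static = rng.standard_normal((16, 16))  # "pure static" starting image
dreamed = dream(static)
```

After the loop, `activation(dreamed)` is higher than `activation(static)` -- the image has drifted toward whatever pattern the filter "wants" to see, which is exactly how static becomes pagodas and corgis at scale. More steps (a "full night's sleep") simply means more iterations of this loop.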
So what's with all the dogs, bugs and eyeballs? When they over-analyze an input, the networks find the images they were trained to identify. Some networks, like MIT's, have been trained on places. Others, animals. The most popular sites offering uploading and processing to the public right now seem only to understand dogs, pagodas, eyeballs, slugs and more dogs.
Most people uploading their images to sites like deepdreamr, Psychic VR Lab or Dream Deeply opt for selfies, which unfailingly yield grotesque portraits with a few too many eyeballs. For those who upload landscapes, DeepDream seems to want to fill bucolic, uninhabited spaces with creatures, structures and vehicles. The neural network doesn't lack "understanding," per se, but these images demonstrate a system running rampant, doing what it was trained to do to a ridiculous degree.
Humans are all subject to pareidolia -- finding significance in random images or sounds, like seeing faces in trees and animals in clouds. Here, humans have trained neural networks to look for significance in what is, to them, a field of random, multi-hued information.
What deeper truths or predictions can we infer from this bewildering glimpse into an artificial mind?
To find out what the future holds in a computer's eyes, I uploaded a still from the 2013 sci-fi blockbuster Star Trek Into Darkness to Dream Deeply and told it to get a "full night’s sleep." For those of you out of the loop, this is San Francisco in 2259 (aka Starfleet headquarters).
While the city is filled with majestic golden-domed towers, apparently we haven’t abandoned our cars. Instead, we've chosen to merge with them to become four-wheeled, blue-suited creatures that will surely give me nightmares for weeks to come.