That reads a bit like the ads they used to run in old comic books. But while "mind-reading" may be a little hyperbolic, the technology it describes is indeed on the wondrous side. Plus it made us tear up.
Brain Computer Music Interfacing is the subject of a short film (see above) that will debut at the Peninsula Arts Contemporary Music Festival next weekend at Plymouth University in England.
The violinist referenced in the Telegraph article is one of four severely disabled musicians who took part in a remarkable performance last year. Each of the participants has locked-in syndrome, which can fairly be described as the stuff of nightmares. (If you've read or seen "The Diving Bell and the Butterfly," you'll know what we mean.)
"They cannot talk, cannot move at all," says Eduardo Miranda, who heads the Interdisciplinary Centre for Computer Music Research at Plymouth University. "Some of them move their head a little bit, others communicate only through blinking eyes. That’s the thing that motivated us to do this project, to allow them to regain some sort of communication again through music."
That was accomplished through a device, developed by Miranda and his assistant, Joel Eaton, that reads the electrical signals produced by the visual cortex in each musician's brain. These signals are produced when the musicians pick one square to focus on from a group of four, each filled with flashing lights. Each square appears next to a different musical phrase, and because each flashes at a different frequency, it triggers a distinct brain signal.
"If you have a light flashing at 20 Hz, another at 24 Hz, we can detect [which one you're looking at] in the brain signal," Miranda says. In this way, each disabled musician chooses a snippet of music written by Miranda. A second group of musicians then sees these selections on their own computer screens and plays them aloud. The title of the entire piece, composed by Miranda, is "Activating Memory."
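The frequency-tagging idea Miranda describes can be sketched in a few lines: each on-screen square flickers at its own rate, the viewer's visual cortex oscillates at the rate of the square being watched, and a frequency analysis of the brain signal reveals which one it was. The sketch below is illustrative only; the sampling rate, flicker frequencies, and simulated signal are assumptions, not details of the actual Plymouth system.

```python
import numpy as np

FS = 256           # sampling rate in Hz (assumed)
DURATION = 2.0     # seconds of brain signal analyzed per choice (assumed)
CANDIDATES = [15.0, 20.0, 24.0, 30.0]  # one flicker rate per square (illustrative)

def detect_choice(signal, fs=FS, candidates=CANDIDATES):
    """Return the candidate frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Power at the frequency bin nearest each candidate flicker rate
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

# Simulate a viewer fixating on the 20 Hz square: a 20 Hz oscillation
# buried in random noise stands in for the recorded brain signal.
t = np.arange(0, DURATION, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 20.0 * t) + 0.5 * rng.standard_normal(t.size)

print(detect_choice(eeg))  # → 20.0
```

Because each square's flicker rate lands in a different frequency bin, even a noisy signal yields an unambiguous answer, which is what makes the scheme workable for users who can only move their eyes.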
The groups of choices presented to the disabled musicians changed on average about every 10 seconds, Miranda says. With four musicians each selecting from four phrases, there are 4⁴ = 256 possible combinations of music each time a new set is presented. Every time the piece is played, says Miranda, it will sound different.
"These combinations will produce sometimes dissonance, sometimes consonance," he says. "That is part of the game, so to speak."
After the performance, Steve Thomas, one of the musicians with locked-in syndrome, communicated through a synthetic voice activated by blinking his eyes. "It was great to hear the musician play the phrase I selected," he said. "I tried to select music that was harmonious with the others. It's very cool."
Miranda, who is a composer by training but also took the time to get a Ph.D. in artificial intelligence, said the next step in the development of Brain Computer Music Interfacing could be detecting the brain signatures that correspond to different moods or emotions.
"Is it possible to detect when you are feeling sad or happy?" he wonders. "That would be a fantastic tool for therapies to treat people who may suffer from depression. If the system is monitoring your brain and detects that [someone] is getting into a depressive state, let’s play some music that will make that change."
Whoa, there -- my iTunes already shuffles me up "Happy" way too much. But other uses could be on the horizon.
"At the end of the day, the technology allows people to make choices, switch things on and off," Miranda says. "You could use it to drive a wheelchair -- instead of having four musical phrases, you could say, 'go forward, backwards, left, right.'"
Go forward, indeed.