
Are Computers Becoming Better at Composing Music than Humans?


The possibilities for new sounds are virtually limitless with the latest technology. (Photo: Courtesy of Google)

Artificial intelligence is all the rage these days in Silicon Valley – and no wonder. There appears to be no end to the possible applications. Some say AI is simply freeing humans from the boring tasks, so we can pursue activities that bring us joy. But what if AI is better at those things, too? Like, writing music?

For starters, we’re way past the advent of computer-composed music. That hurdle was cleared back in 1957, when professors Lejaren Hiller and Leonard Isaacson at the University of Illinois at Urbana-Champaign programmed the “Illiac Suite for String Quartet” on the ILLIAC I computer.

Another big moment in computer music history: 1996, when Brian Eno’s album “Generative Music 1” was released on floppy disk, an old form of data storage familiar to Baby Boomers.

Here’s Eno back in the day talking about it on the now-defunct BBC Radio 3 program Mixing It. “To explain this simply, in the computer there’s a little synthesizer, basically. What I do is provide sets of rules that tell the computer how to make that sound card work,” Eno says.


The music his programming generated was different every time the program was run, but the code essentially dictated the output.
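
To make that concrete, here is a minimal sketch of rule-based generative music in the spirit Eno describes. It is not his actual system; the scale, loop timings and voice count are invented for illustration. A few voices each repeat one note at slightly irregular intervals, so every run yields a different piece even though the rules fully determine its character.

```python
import random

# Illustrative rule-based generative sketch (not Eno's actual rules or software):
# several looping voices drift in and out of phase, so the output changes every
# run while the code still dictates its overall character.

SCALE = [60, 62, 64, 67, 69, 72]  # C major pentatonic plus the octave, as MIDI notes

def make_voice(note, min_gap, max_gap, length_beats=64):
    """Generate (start_beat, note) events: one note repeating at irregular intervals."""
    events, beat = [], 0.0
    while beat < length_beats:
        events.append((round(beat, 2), note))
        beat += random.uniform(min_gap, max_gap)  # the rule: loop, but never in lockstep
    return events

def generate_piece(n_voices=4):
    piece = []
    for _ in range(n_voices):
        piece.extend(make_voice(random.choice(SCALE), min_gap=3.0, max_gap=9.0))
    return sorted(piece)  # chronological list of (beat, MIDI note)

if __name__ == "__main__":
    for beat, note in generate_piece():
        print(f"beat {beat:6.2f}  ->  note {note}")
```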

Today, scientists at lots of tech companies are working on something a little more sophisticated: neural networks that develop their own rules from the material they’re fed.

Research scientist Doug Eck runs a group at Google called Magenta. “I think that what we’re doing that’s different from previous attempts to apply technology and computation to art is really caring about machine learning, specifically. Deep neural networks. Recurrent neural networks. Reinforcement learning. I guess the best way to put it is: it’s easier to help a machine learn to solve a problem with data than to try to build the solution in.”

Translation: they’re crafting software that loosely imitates how your brain works.

It’s amazing how much we take for granted when enjoying just about any composition. Musicians — and scientists — will tell you there’s a shockingly long list of things your brain is responding to: rhythm, tonality and repetition, but also the way the melody develops, so it’s not exactly the same thing you heard a few bars before. Can you write code that mimics all that? Scientists are getting there.

Software engineer Dan Abolafia says, “We’re starting to give these neural networks memory, to be able to remember what it did in a piece of music and have more intention about how it wants to build on that — which is surprisingly not something we were able to do not too long ago.”
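
A rough sketch of that “memory”: a small LSTM-based model predicts the next note from everything it has seen so far, and its recurrent state is what lets earlier material shape later choices. The vocabulary size, dimensions and fake training melody below are illustrative assumptions, not Magenta’s actual configuration.

```python
import torch
import torch.nn as nn

# Toy next-note model with memory: an LSTM carries a hidden state across time steps,
# so the prediction at each step can depend on notes seen many steps earlier.
# (Sizes and data here are placeholders, not Magenta's real setup.)

class MelodyRNN(nn.Module):
    def __init__(self, vocab_size=130, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, notes, state=None):
        x = self.embed(notes)             # (batch, time, embed_dim)
        out, state = self.lstm(x, state)  # `state` is the network's memory
        return self.head(out), state      # logits over the next note at each step

model = MelodyRNN()
melody = torch.randint(0, 130, (1, 32))  # stand-in for a real encoded melody
logits, _ = model(melody[:, :-1])        # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(logits.transpose(1, 2), melody[:, 1:])
loss.backward()
```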

AI is getting better all the time. Performance RNN, for instance, is a recurrent neural network from the Google Magenta team designed to model polyphonic music with expressive timing and dynamics. (Photo: Courtesy of Google)

He adds, “The second, much harder goal is to give the computer an ear, so to speak. To hear the music and decide if it sounds good or not. That’s something that people are starting to do with a technique called reinforcement learning.”
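
Here is a hedged sketch of what that can look like in practice: a toy note generator is sampled, a hand-written reward function stands in for the “ear,” and a REINFORCE-style update nudges the generator toward melodies that score better. The reward rules below are simplified inventions for illustration, not the ones Google’s researchers use.

```python
import torch

# Reinforcement-learning sketch of a crude musical "ear" (illustrative rules only):
# reward notes that stay in key, penalize immediate repetition, and push the
# generator's probabilities toward melodies that earn higher reward.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of C major

def reward(melody):
    r = 0.0
    for i, note in enumerate(melody):
        r += 1.0 if note % 12 in C_MAJOR else -1.0  # stay in key
        if i > 0 and note == melody[i - 1]:
            r -= 0.5                                # avoid droning on one note
    return r

# Toy generator: independent logits over 24 pitches for each of 16 time steps.
logits = torch.zeros(16, 24, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    melody = dist.sample()                     # sample one 16-note melody
    R = reward(melody.tolist())                # the "ear" scores it
    loss = -(dist.log_prob(melody).sum() * R)  # REINFORCE: favor well-scored melodies
    opt.zero_grad()
    loss.backward()
    opt.step()
```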

So where is this all going? The team at Google won’t tell you they’re trying to replace human composers so much as give them a new set of tools.

Eck explains, “We’re talking about building tools for creative people. So at the end of the day, I don’t have this vision that someone is going to take Magenta-generated music and just kind of sit in a chair with two big speakers, and say ‘OK, I’m going to listen to this.’ I want the models to generate interesting music, and interesting art, so that you can try to do some new things.”

Think about the advent of photography. It didn’t end visual art. Instead, it shifted what artists wanted to say and how. Google has made its code open to the public, to encourage people like music producer Andrew Huang to play with it (and also promote it).

Among other things, Huang got jazzed about the capacity of Google’s software to merge totally unrelated sounds to create new ones. “What if we take this baby goat and combine that with that 3D printer? Oops, just summoned Satan,” he says of the result.

Huang crowd-sourced a bunch of raw sounds from his social media fans and sent them to Google along with some drum samples. Google merged them into a batch of new sounds and sent those back to Huang, who arranged them into a compelling composition.
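
Under the hood, that kind of merging typically happens in an intermediate representation rather than in the raw audio: each sound is encoded, the two encodings are interpolated, and the blend is decoded back into a waveform. Google’s system learns that representation with a neural autoencoder; the sketch below uses a plain spectral transform as a crude, runnable stand-in, with synthetic test tones in place of the goat and the 3D printer.

```python
import numpy as np

# Crude illustration of "merging" two sounds: encode, interpolate, decode.
# A system like Google's uses a learned neural embedding; here the stand-in
# "encoder" is just the complex spectrum, which is enough to show the shape
# of the idea without any trained model.

def encode(audio: np.ndarray) -> np.ndarray:
    return np.fft.rfft(audio)              # stand-in for a learned encoder

def decode(latent: np.ndarray, length: int) -> np.ndarray:
    return np.fft.irfft(latent, n=length)  # stand-in for a learned decoder

def blend(audio_a: np.ndarray, audio_b: np.ndarray, mix: float = 0.5) -> np.ndarray:
    """Interpolate between two equal-length sounds in the intermediate space."""
    z = (1.0 - mix) * encode(audio_a) + mix * encode(audio_b)
    return decode(z, length=len(audio_a))

# Usage: two one-second test tones at 16 kHz, blended 50/50.
t = np.linspace(0, 1, 16000, endpoint=False)
goat_ish = np.sin(2 * np.pi * 220 * t)     # placeholder for the baby goat sample
printer_ish = np.sin(2 * np.pi * 317 * t)  # placeholder for the 3D printer sample
hybrid = blend(goat_ish, printer_ish)
```

With this crude spectral stand-in the result is just a mix of the two tones; the strange hybrids come from doing the same interpolation in a representation a neural network has learned.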

It’s not bad. It’s not quite my cup of tea, either, but a lot of what makes music exciting to me is messy, idiosyncratic and specific to time and place. Then again, wait a few years, and it’s possible AI will be able to replicate that, too.
