CROSSROADS 2.0: MUSIC AND ARTIFICIAL INTELLIGENCE
September / October, 2017
Will AI lift popular music to new heights—or perfect the life right out of it?
AARON CARNES
Jesse Engel adjusts his laptop with the fervor of an obsessed conductor rifling through a score. He’s tall, mid-30s, with a scruffy beard and dark hair dangling over his shoulders. With his skinny jeans and the kaleidoscopic design on his white T-shirt, he looks more like a touring bassist than a Google research scientist.
We’re on the fifth floor of Google’s San Francisco office, inside a well-equipped music studio, to learn about Project Magenta, an open-source endeavor that uses artificial intelligence and machine learning to create tools for artists. Magenta offers code for users to tinker with, including artificial neural networks designed to resemble the neurons of the human brain and pretrained to perform various tasks. It also offers a community for creative types and techies to share their own experiments.
At a trio of keyboards (the musical kind) sits Douglas Eck, a senior staff research scientist at Google.
Somewhat preppy-looking and 48 years old, he’s the guy who conceived of Project Magenta in 2015.
It was launched into the public sphere in June 2016.
We’re about to have a jam session. This wasn’t planned; Eck and Engel were going to demo some software for me and my friend and bandmate Adam Davis. But Eck seems intent on keeping things random. How better to understand Magenta than to jam with it?
I’m behind an electronic drum set. Davis grabs a Fender Strat. Eck sits behind the keys, eager to start playing. On the theremin (an electronic instrument you play by waving your hands over it like a wizard) is Kory Mathewson, a summer intern who spends his free time performing improv comedy with robots. This, Eck tells me with a grin, was a factor in Mathewson’s hire.
That leaves one last member of our band: a program called A.I. Duet, which uses Magenta technology. You play a melody on an attached keyboard, and the program recognizes overall patterns based on note selection, rhythm, syncopation and music it has heard previously. It then generates new lines, almost call-and-response style. The in-house version we’re using can control the randomness of the response depending on how high you set the “temperature.” Engel selects bass guitar for the sound and starts improvising melodies. Once he gets a response he likes, he locks it in. He picks up a black Gibson Les Paul guitar, and the jam begins.
What we come up with is okay, but I’m more impressed with the bass line coming out of A.I. Duet. If we were an actual band, I’m certain we’d jam with it again. Musicians are often strapped for fresh ideas, and this program seems perfect for spitting out an unending supply of them.
“It’s like having another musician in the room,” Davis says. He too has a go on Duet. As he settles into the instrument, he starts to color outside the lines, to the clear delight of our bandmates. (“We’re really looking forward to seeing how people break this,” Engel says.) He cranks the temperature all the way up, producing manic, scary melodies that barely resemble the ones he inputs. He plays “Twinkle, Twinkle, Little Star” just to see what happens. At full temperature, it sounds like a Christmas song having a bad trip.
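The “temperature” knob Engel and Davis are turning corresponds to a standard trick in neural music generation: before the model picks its next note, its raw scores are divided by a temperature value, so low settings favor the safest choices and high settings give unlikely notes a real chance. Here is a minimal Python sketch of that idea—an illustration of the general technique, not Magenta’s actual code, with invented toy scores.

```python
import numpy as np

def sample_next_note(logits, temperature=1.0, rng=None):
    """Pick the next pitch from a model's raw scores (logits).

    temperature < 1.0 sharpens the distribution (safer, more predictable notes);
    temperature > 1.0 flattens it, so improbable notes start slipping through.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy example: scores for the 12 pitches C4..B4 produced by some melody model.
logits = [2.0, 0.1, 0.3, 1.5, 0.2, 0.1, 0.4, 1.8, 0.1, 0.2, 0.1, 0.3]
print(sample_next_note(logits, temperature=0.5))  # almost always a "safe" note
print(sample_next_note(logits, temperature=2.0))  # the cranked-up, "bad trip" setting
```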
“Technology is incredibly important for art,” Eck says. “Think of strumming this guitar chord with your guitar unplugged. Now plug it into this awesome amp. You rely a lot on technology to get the job done.”
What they’re doing, he adds, is akin to building a better guitar pedal—even the electric guitar itself. That’s the hope, at least. Right now, they’re not sure how the technology will impact music or if it will be used as they intend. I’m reminded of Auto-Tune, the recording software whose original function was to correct pitchy vocal tracks for bland pop music; rappers like T-Pain and Kanye West then flipped it on its head, maxing out its properties and creating a strange new sound.
Whatever happens with AI music-generation technology, Eck has a feeling it will be monumental. “We’ll follow the musicians,” he says.
“If someone picks up on something we’re doing and does something awesome with it, we’ll probably be like, ‘Oh, let’s do more of that!’ ” Listening to Eck, I can tell his brain is geared to geek out on algorithms, but it’s a passion for music that guides him to make tools for musicians of all stripes. When I ask him about his own music, I’m surprised to learn he didn’t spend his salad days hunched over banks of synthesizers; he shyly tells me of his experiences strumming a guitar at half-full coffee shops.
The marriage of artificial intelligence and music is not new, nor is the debate over the proper amount of human input. In the 1950s, avant-garde composers Lejaren Hiller and Leonard Isaacson used a computer to write “Illiac Suite.” The score consists of statistically generated notes following certain musical principles—harmonic interdependence, for example—according to rules based on the composers’ knowledge of traditional music.
Until recently, AI-assisted music stayed primarily in academia. Now Google, Sony, IBM and other companies are getting involved in major research projects. And start-ups like Jukedeck, which uses AI to generate jingles and background music for videos, and Brain.fm, which creates functional AI-generated music (designed to help you relax, focus, sleep, etc.), are cashing in on the technology.
The various groups are seeing some similar results, but each has its own agenda. Google is working on tools for artists and designing open-ended, experimental programs with the goal of fueling creativity. Jukedeck has focused its resources on ease of use, marketing to businesses looking to save money on video soundtracks. Jukedeck co-founder Patrick Stobbs tells me the company hopes to expand its customer base to include anyone who’s interested in making music: It wants to use the technology to help would-be musicians write songs without having to learn to play an instrument.
“Much like Instagram has made it easier to create great photographs, we see Jukedeck as a creative tool that lets many more people make music easily, and with more power,” he says.
A handful of music software apps using similar AI technology popped up almost a decade ago, with mostly cringe-worthy results. Microsoft’s Songsmith generates Casio-keyboard-esque accompaniments to a cappella vocal tracks. The program inspired countless hilarious videos featuring, for example, Freddie Mercury’s searing performance of “We Will Rock You” set to what sounds like vaguely Latin Muzak.
Much has changed since then, though technology and taste continue to clash. Last year the world got a glimpse of what AI-generated music will sound like when it rolls out in the near future. Producer Alex Da Kid collaborated with IBM’s Watson supercomputer to create the catchy emotional ballad “Not Easy.” Watson consumed five years of cultural data, including news headlines, internet searches, movie synopses and social media content, to analyze trends and people’s emotions around them. It also processed more than 26,000 recent popular songs to find common themes and musical patterns. Using this information, the supercomputer determined the “emotional fingerprint” of recent history and the musical components that elicited strong emotional responses from listeners. Alex then used this data to choose the theme of heartbreak, as well as musical phrases and lyrical fragments that he liked.
With a Grammy nominee at the helm, the collaboration produced a song intended for as wide an audience as possible. “Not Easy” reached number four on the iTunes Hot Tracks chart within 48 hours while managing to be completely forgettable, save for the story behind it. In trying to take the emotional temperature of millions of people, it seemed to express...nothing. The Auto-Tune comparison arises again: In the hands of those aiming to perfect popular music, AI technology could very well achieve its ultimate success by digitizing the life out of it.
Sony Computer Science Laboratories in Paris, with funding from the European Research Council, began a five-year project creating AI-assisted pop in the summer of 2012. The results are somewhere between “Not Easy” and Magenta. Sony CSL director François Pachet, a computer scientist and jazz and pop musician, has been involved with music and AI since the 1990s. He and his team
have been working more directly with artists than Google has, and their objective—to create innovative pop-based music with indie artists using a set of algorithms called Flow Machines—emphasizes output.
One Flow Machines-assisted song, “Daddy’s Car,” went viral last September. A collaboration with French composer Benoît Carré, it sounds strikingly similar to mid-1960s Beatles—perhaps because for the song’s creation, Flow Machines was fed an exclusive diet of Fab Four tracks.
“ ‘Daddy’s Car’ was a bit like pastiche,” says Sony CSL communications officer Fiammetta Ghedini. “It’s a style exercise. You have an expectation about what would happen if the Beatles were together again.”
Flow Machines’ understanding of “style” is data-dependent—meaning you train it on specific music and it predicts the kinds of choices someone composing in that style might make. In other words, it understands the rules of music based solely on the chosen data set and the constraints the programmer sets up. The program gave Carré several Beatles-esque melodies and chord suggestions from which to assemble the song.
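Flow Machines’ internals aren’t public, but the data-dependent approach Ghedini describes can be illustrated with a toy Markov chain: count which chord tends to follow which in a chosen training set, then propose continuations drawn from those counts. The sketch below is a stand-in for that general idea, not Sony’s algorithm, and the “corpus” of progressions is invented.

```python
import random
from collections import Counter, defaultdict

def train_style(progressions):
    """Count chord-to-chord transitions in a training corpus."""
    transitions = defaultdict(Counter)
    for prog in progressions:
        for a, b in zip(prog, prog[1:]):
            transitions[a][b] += 1
    return transitions

def suggest(transitions, start, length=8, rng=random.Random(0)):
    """Walk the transition table to propose a progression in the learned 'style'."""
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:
            break
        chords, weights = zip(*options.items())
        out.append(rng.choices(chords, weights=weights)[0])
    return out

# Toy "corpus": a few I-vi-IV-V-flavored progressions standing in for Beatles tracks.
corpus = [["C", "Am", "F", "G", "C"],
          ["C", "F", "G", "C", "Am", "F", "G"],
          ["Am", "F", "C", "G", "Am"]]
model = train_style(corpus)
print(suggest(model, start="C"))
```

Train it on nothing but one band’s catalog, as the team did for “Daddy’s Car,” and the suggestions can only ever sound like that band; swap the data set and the “style” swaps with it.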
To help me understand the artistic potential of Flow Machines, Ghedini directs me to a different, less popular song Carré wrote with the software, called “Mr. Shadow.” Its data set consisted of 429 songs by “classic American songwriters” including Cole Porter, George Gershwin, Richard Rodgers and Vernon Duke. I listen to it several times in a row; the only description I can give is “unsettling avant pop.” The chord sequences are foreboding, the melody simultaneously dreamy and menacing. One YouTuber comments: “Perfect music for when terminators kill humans with Gatling guns in a couple of decades.” I don’t know if I enjoy the song, but I feel I’m getting a glimpse of a different breed of music, and I want more.
A 2016 Paris concert featuring several artists who have collaborated with Flow Machines, much of it viewable on YouTube, provides that glimpse. The outputs are all artistically fascinating and unique, and vastly more interesting than Alex Da Kid’s song. Carré hopes to release an album of his music with Flow Machines later this year. Other artists may follow suit.
Google has been working with artists on a limited level. Eck prefers that his tools be applied to experimental music rather than toward churning out hit singles assembly-line style or producing functional background music, even though those are potentially big markets. (Jukedeck has made great strides toward creating an unlimited library of royalty-free video tracks, which will be good for production budgets but bad for composers.) He hopes to affect the artistic direction of music, and the best way to do that, he believes, is to give tools to cutting-edge artists who may not appeal to mainstream tastes.
“We should be actively trying to make music not everyone likes,” Eck tells me. “If it reaches a certain audience, even if it turns off another, that’s a win. There’s always that novelty effect you get with something new and crazy. Do they come back next week and go, ‘Oh, there’s another song from that AI. I want to hear that one too’?” Here too pop music provides an analogue: Just as megastars from Madonna to Beyoncé have drawn inspiration from the fringes, so will tomorrow’s pop royalty weave the innovations of AI-assisted underground artists into their MOR chart toppers.
Eduardo Reck Miranda, a U.K.-based professor who’s been involved in the field since the mid-1980s, has a special place in his heart for the weird crevices in music that AI can illuminate.
Originally from Brazil, Miranda was a composer first but was drawn to the technology as a means to get his music performed. His enthusiasm for the topic is infectious. He tells me about his composition “Symphony of Minds Listening,” for which he developed AI software to remix Beethoven’s Symphony No. 7, using brain scans to deconstruct how people listen to the music.
“Can machines actually do it? That was the question we were asking,” he says. “But then, when we realized machines can actually do it, and look at how boring the music is, the question shifted: How can those machines do something useful?”
Much of the technology at the root of this debate predates Miranda’s, Eck’s and Pachet’s experiments. And much of the credit for that technology belongs to David Cope—a modern-day mystic and a man who may understand the relationship between AI and the creative brain better than anyone else.
For some 30 years, Cope was a faculty member at the University of California, Santa Cruz. Now 76, the soft-spoken ex-professor continues to compose music and pursue other artistic endeavors, mostly using AI.
As I walk up the stairs inside Cope’s cozy Santa Cruz home, he issues a warning: “Get ready for a real wacko moment in your life.” Without further explanation, he opens his office door to reveal a room that, in direct contrast to the rest of the house, is in total disarray. Hanging from the ceiling are dozens of wind chimes. Books and random objects are tossed about everywhere, as though they’ve just survived a tornado. In the center of the room is an exercise bike.
The chaos, he tells me, is intentional: It enables him to look at any two objects and try to find a relationship between them. Like his music-generation software, the disorder is a creative prompt that harnesses the surprising power of randomness. It says a lot about the fundamentals of how this all works.
Cope has been writing algorithms since he was a teenager. He has penned approximately 100 books, with sections on computer-generated music showing up as early as 1977. In 1980 he began a long-term project called Experiments in Musical Intelligence, later shortened to EMI and then to Emmy to avoid copyright issues with the record label. He started developing a new program, Emily Howell, in the 1990s. He doesn’t usually admit this, but he gave it a human name in part to goad the naysayers who claim machines will put human composers out of work. He allows himself a little laugh at that. “They do what we tell them to do,” he says. “They have no self-awareness. They have no consciousness. I define AI as the study of using computers to understand the human brain and how it works. That’s all it is.”
He asks me to consider what composers do when they write. I’m not sure.
“We steal,” he says. “Meaning that the most important part of your algorithm isn’t an algorithm at all. It’s a bunch of music.” I interpret this as the patchwork of melodies, moods and styles that artists stitch together in the process of making their own work. Cope realized that computers needed to operate the same way.
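Cope has described this kind of stealing more formally as recombination: slice a corpus into small fragments, then splice them back together wherever they fit. The toy sketch below illustrates that principle only—it is not Cope’s EMI code. Here the fragments are four-note runs, a fragment may follow another only if it starts on the pitch the previous one ended on, and the pitch sequences standing in for a composer’s back catalog are invented.

```python
import random

def recombine(pieces, length=12, rng=random.Random(1)):
    """Stitch a new melody from fragments of existing ones.

    A fragment may follow another only if it starts on the pitch the
    previous fragment ended on -- a crude stand-in for the musical
    'grammar' a corpus-based system extracts from its source material.
    """
    fragments = [p[i:i + 4] for p in pieces for i in range(0, len(p) - 3, 2)]
    melody = list(rng.choice(fragments))
    while len(melody) < length:
        candidates = [f for f in fragments if f[0] == melody[-1]]
        if not candidates:
            break
        melody.extend(rng.choice(candidates)[1:])
    return melody

# Toy corpus: MIDI pitch sequences standing in for a composer's earlier pieces.
pieces = [[60, 62, 64, 65, 67, 65, 64, 62],
          [62, 64, 65, 67, 69, 67, 65, 64],
          [65, 64, 62, 60, 62, 64, 65, 67]]
print(recombine(pieces))
```

The output is only ever a reshuffling of what went in—which is exactly Cope’s point about where a composer’s “new” ideas come from.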
In 1988 he created a data set using hundreds of examples of his own music. Within approximately 24 hours, Emmy had composed “Cradle Falling,” which Cope maintains is one of the best pieces of music he’s written. Listening to this composition and other Emmy selections, he says, proved his theory that composers steal: “It brought out things I didn’t realize were so explicit in my music, which is those composers who affect me most.”
I recognize that what Google and Sony are doing with deep learning is a direct extension of Cope’s data-dependent approach. Miranda agrees, but he adds that the approach is incomplete, which is in part why his creative process has always been experimental.
“A single piece of AI software is not able to fully embody how we make music,” Miranda says. “The best we can do is to narrow the problem to specific aspects.”
Which leads to another confounding quirk inherent in creative generative models: AI music has no clear metric of success. Language-translation software has made huge leaps in recent years because of machine-learning technology. We can all agree on how successful it is because the results are objective. But how do we agree on the value of music produced by AI software—chart success? Grassroots popularity? Critical acclaim? Eck and his team at Google define it by sustained interest from artists. When musicians continue to use these tools after the initial novelty wears off because they believe the results they’re getting are good—regardless of what audiences think—that’s when AI will take its permanent place in the vast, cluttered kingdom of music.
After we leave Google, Davis and I discuss the experience. “I liked seeing the edge of where things start to break down,” he says. “At certain points it would go off the rails. It wouldn’t know what to do from the information it was given. That created really interesting music.”
Jason Freidenfelds, a senior communications manager at Google, believes that AI’s impact on music will exceed any one technological advancement.
“It may be as big a deal as the original shift from people making music primarily with their own bodies to crafting instruments that had their own unique properties and sounds,” he says. “AI may rival that leap from simple objects to complex instruments. Now the instruments themselves will have intelligence. They won’t just produce unique sounds but unique ideas for new timbres or melodies that musicians can then riff on.”
Later, Freidenfelds sends me two new Magenta-generated piano tracks, made with no human input whatsoever. Their complexity and nuance shock me.
Many artists and music fans are legitimately scared that AI technology will put honest folks out of work while dragging creativity to the lowest common denominator. But Davis’s urge to mess with the tools and find the cracks is exactly what innovators like Eck, Engel, Miranda and Pachet are counting on. Computers, after all, make errors, just like us. And the weird, imperfect and unpredictable human instinct is the engine that will give AI the power to redefine music.