Wonderland Podcast with Steven Johnson: Episode 3

Strange Loops and Circuit Benders

Or, how new music comes from broken machines

14 min read

A mysterious stranger in San Francisco’s Union Square helps jumpstart a sonic revolution with a malfunctioning tape recorder. The story of how experimental sounds become part of the musical mainstream, with special guests Brian Eno, Alex Ross, Caroline Shaw, Carla Scaletti, and Antenes.

Read Steven Johnson’s background for Episode 3. The team would like to give additional thanks to Mike Rugnetta and John Dimatos.

Hosted by Steven Johnson
Produced by Kristen Taylor
Audio engineering & music editing by Jason Oberholtzer
Theme music by Steven Johnson

Listen to the previous episode: 32 Dots Per Spaceship (Or, the Video Game That Changed Tech History)
Listen to the next episode: Airplanes, Zoos, and Infinite Chickens (Or, Why Do Humans Like to Play?)

Subscribe now

iTunes // Google Play Music // TuneIn // Stitcher // RSS
For Alexa, say “Play ‘Wonderland Podcast with Steven Johnson.’”

Transcript

CAROLINE SHAW: Instead of, you know, four colors, or seven colors, or 10 colors in the voice, there are 100, there are 500, there are 1,000 little tiny gradations that are so beautiful.

ANTENES: Why do people explore ham radio? Why do people circuit bend? Why do people walk up to an antenna and, and look at it and, and wonder, “Oh, what’s happening there?”

BRIAN ENO: We’re investigating ways of being and ways of seeing and ways of thinking about things.

ALEX ROSS: That’s making music sound more like the chaotic and unstable world in which we live.

CARLA SCALETTI: Music is a path, more of an abstract path. When you follow it, by the end you’re changed.

STEVEN JOHNSON: I’m Steven Johnson and this is “Wonderland.” “Wonderland” is brought to you by Microsoft, and also by Riverhead Books, publisher of my new book, Wonderland: How Play Made the Modern World.

It’s the fall of 1964 in San Francisco’s Union Square. There’s the usual bustle of a popular urban center: shoppers out browsing the latest fashions, crowds gathered around street performers, sidewalk preachers chanting about the apocalypse. And in the middle of all that tumult, a young man quietly scans the environment with a microphone attached to a portable tape recorder. He’s not doing surveillance. He’s picking up sounds in the hope that he can turn them into a new kind of music. Months later, he’s back in his home studio, revisiting his recordings from that day. Only, something goes wrong with the playback. It’s a mistake, a malfunction. But that malfunction ends up opening a new world of sonic possibilities. More than 50 years later, we’re still hearing echoes of that mistake.

This is a story about how new sounds come into the world and eventually find their way into songs that we love and the music that moves us.

[CLIP OF CARLY RAE JEPSEN’S “WHEN I NEEDED YOU”]

STEVEN JOHNSON: Listen to today’s Top 40, whether you’re a fan of pop music or not, and one of the things that’s striking is that it’s filled with sounds you would never have heard just 20 or 30 years ago. The melodies and the chord progressions actually aren’t all that different, but the textures and the timbres are.

[CLIP OF KANYE WEST’S “BEAUTIFUL MORNING FT. FUTURE”]

STEVEN JOHNSON: I’ve always been fascinated by that kind of sonic innovation. We talk about musicians playing their instruments, but there’s also a different, and I think equally important, tradition of musicians playing with their instruments, coaxing new sounds out of them, or even taking machines that weren’t originally designed to be musical at all and finding something intriguing in the sounds that they make.

ANTENES: When I’m pushing or gesturing on a joystick, the nuances of how I’m moving translate so well.

STEVEN JOHNSON: Yes, she said joystick.

ANTENES: Hi, I’m Antenes, and I am an electronics artist, instrument builder, DJ, and electronic music producer.

STEVEN JOHNSON: Antenes builds her own instruments, including the one she’s talking about, which uses a video game joystick as its controller.

ANTENES: To me, I was interested in the idea of being a musician, but also a technician and an operator, all at the same time.

STEVEN JOHNSON: Operator is the right word, since one of Antenes’s creations uses a telephone switchboard as its interface.

ANTENES: Repurposing obsolete telephone equipment, telephone switchboards, etc, and turning them into a working modular system.

STEVEN JOHNSON: Modular systems, or modular synthesizers, actually look a good deal like an old-style telephone switchboard, with patch cords connecting different jacks. Moving the wires from jack to jack, like an operator connecting long-distance calls, determines the sound that’s going to come out of the machine.

ANTENES: The machines in an analog system have so many capabilities of being modulated by other sources that the combinations that you’re able to put together when making a sound can become so complex that even you lose sight of what that patch is doing.
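
[A rough way to picture what Antenes is describing: in a modular patch, one module’s output steers another module’s behavior. The Python sketch below is purely illustrative, not her system; a slow “LFO” sine wave bends the pitch of an audible oscillator, the simplest version of one source modulating another.]

    import numpy as np

    sample_rate = 44100
    t = np.arange(0, 2.0, 1.0 / sample_rate)            # two seconds of time

    lfo = np.sin(2 * np.pi * 0.5 * t)                    # a slow 0.5 Hz "modulator" module
    freq = 220 + 30 * lfo                                 # it sweeps the audible oscillator's pitch around 220 Hz
    phase = 2 * np.pi * np.cumsum(freq) / sample_rate    # integrate frequency to get phase
    audio = np.sin(phase)                                 # the patched-together result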

STEVEN JOHNSON: Some of the methods she uses to explore sound possibilities involve techniques that alter the very nature of what the machine is. Techniques like circuit bending.

ANTENES: You just kind of take two wires while the instrument is playing and you poke around in the circuits, and when you find an interesting short in it that creates an interesting musical expression or result, then you just solder wires to it and you put a switch on it so that you can switch on and off that little discovery that you made.

STEVEN JOHNSON: Now with all this complex bending of circuitry and electrical engineering, you might be tempted to think that the relationship between technology and human artistic endeavor is a relatively modern one. But this is an old story. A very old story.

ALEX ROSS: Bach, Mozart, Beethoven, all the rest of them, were, were constantly working with machines, with devices, with, you know, taking an interest in the latest technological modifications. Very often of keyboard instruments, you know, Beethoven always wanted a bigger, louder, piano.

STEVEN JOHNSON: That’s New Yorker music critic Alex Ross, author of one of the great books about 20th-century music, The Rest Is Noise.

ALEX ROSS: But, you know, also there was this urge to incorporate the noise of the city, the noise of war. You see, I think, right in the first couple of decades of the 20th century, or even at the end of the 19th century, an interest in sliding sounds, glissando, which I think is obviously related to the phenomenon of the siren on the city street, as well as, I think, during the First World War, the sounds of falling shells, just this, these kinds of unpitched screeches, or, you know, a pitch that’s sliding. You know, some of these sounds had existed, but they became so prominent somehow in the kind of, you know, soundscape of early 20th century life that composers really wanted to incorporate them.

Varèse, in his great pieces of the early 1920s, mid-1920s, used siren sounds, glissando sounds, including actual sirens as, as part of the, the orchestra. And that points toward the mid-20th century avant-garde, where the entire orchestra, in works of Xenakis, is sliding gradually from, from sort of, across a very wide spectrum of pitches. And then, you know, electronic sounds, creation of electronic instruments, a lot of that just sort of, you know, swamp the field completely.

STEVEN JOHNSON: One of the key figures in that 20th-century explosion of new sounds and new machines is Brian Eno, whose musical innovations managed to influence everyone from experimental artists like Basinski to chart-topping bands like U2 and Coldplay. A few months ago, I dropped by his Notting Hill studio to talk about machines and music.

It’s interesting with music that you have something that is so ethereal and so abstract. And yet, in a way, of all the art forms, I think it’s been the most, particularly recently, driven by technological changes. So you have this really interesting mix of like people actually doing engineering, you know, and the physical, material, and the scientific understanding of the physics of sound and things like that, are driving this art form that’s, at the same time, the most abstract, the most removed from kind of everyday life.

BRIAN ENO: Yes. The circle is completed because as soon as a new instrument is made, people start composing new music for it. People start thinking of musical possibilities that weren’t thinkable before. So you know, when, when the Steinway, with the third pedal, came out in the, I don’t know, early part of 20th century, suddenly composers started writing, Debussy and so on, started writing music that you couldn’t have played before. I mean we notice this every day actually I should say now, with software-based instruments. We have a revolution equivalent to the grand piano about every day.

STEVEN JOHNSON: So your first electronic instrument was the VCS3, is that right?

BRIAN ENO: Yeah.

STEVEN JOHNSON: And do you remember what it was like experiencing it for the first time? How, was it an immediate sense of, “Oh, everything has changed, this is a completely new landscape”? Or did it take a while for you to realize the possibilities of it?

BRIAN ENO: Well, I had, I had a little bit of form beforehand because I had made a very simple synthesizer for myself out of two signal generators. You know what those are? They’re just test devices for testing equipment, but you go [makes sounds]. But the thing about the VCS that was really interesting was that you could take another electric instrument, and plug it in, and then start to do things with that sound. That was really the thing that hooked me. It wasn’t playing on it, you know, in the sense of playing like a keyboard thing, it was actually being able to take another instrument and do with it the kinds of things that you could only do in recording studios, but do it live with the instrument. So that, that was what thrilled me, that you could take a, and also I love the idea of an instrument that was played by two people. You know, somebody playing electric guitar, then it comes into my thing and I’m doing something with the electric guitar. So on stage we could start to get a sound that really nobody had ever heard done on stage before.

STEVEN JOHNSON: Sometimes exploring the possibilities of a new musical tool is about deliberately causing it to break. Starting in the 1950s, guitarists playing through tube amplifiers noticed that they could make an intriguing new kind of sound by overdriving the amp: a crunchy layer of noise on top of the notes generated by strumming the strings of the guitar itself. Now this was, technically speaking, the sound of the amplifier malfunctioning, distorting the sound it had been designed to reproduce. To most ears it sounded like the equipment was broken, but a small group of musicians began to hear something appealing in the sound. And before long, engineers were building guitar effects boxes that deliberately created the buzz of distortion, launching the signature sound of ’60s rock and roll.
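
[To picture what overdriving does to a signal: the waveform gets pushed past the range the circuit can reproduce, so its peaks flatten out, and that clipping is the “crunch.” A minimal, purely illustrative Python sketch, not a model of any real amplifier:]

    import numpy as np

    sample_rate = 44100
    t = np.arange(0, 1.0, 1.0 / sample_rate)
    clean = np.sin(2 * np.pi * 110 * t)              # a clean 110 Hz tone

    gain = 8.0                                        # "overdrive": boost far past the usable range
    distorted = np.clip(gain * clean, -1.0, 1.0)      # hard clipping flattens the peaks into a buzzy, square-ish wave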

The same thing happened right around that period with feedback. Feedback is one of those sounds that is entirely a creature of the electronic age; it couldn’t exist in any form until the invention of microphones and speakers, roughly a century ago. Sound engineers would go to great lengths to eliminate feedback from recordings and concert settings, positioning microphones so that they didn’t pick up the signal from the speakers and trigger that infinite-loop screech. Yet once again, one person’s malfunction turned out to be another person’s music. The Beatles famously stumbled across the musical potential of feedback while recording their early hit “I Feel Fine.” John Lennon accidentally leaned his Gibson guitar against an amp, and the amp suddenly began to wail. They took that broken sound and put it at the very opening of “I Feel Fine,” the first known appearance of feedback on a pop recording.

[CLIP OF THE BEATLES’ “I FEEL FINE”]

STEVEN JOHNSON: Within a few years, artists like Jimi Hendrix and Led Zeppelin, and later punk experimentalists like Sonic Youth, embraced the sound in their recordings and performances.

We all know that innovations often come out of serendipity and happy accidents, but there’s something about the history of music that draws on malfunctioning machines for inspiration. Sometimes the way a new technology or instrument breaks is almost as interesting as the way it works. And that takes us back to that mysterious figure in Union Square in 1964.

ALEX ROSS: So it’s a wonderful story because it, it is one of technology dictating, machines dictating, to an aware and perceptive creative artist, a whole new way of thinking.

STEVEN JOHNSON: The creative artist in question was Steve Reich, who would go on to become one of the most influential composers of the last 40 years.

ALEX ROSS: He’d been doing field recordings in San Francisco in the early and mid-60s, and, and making tape pieces out of voices that he’d recorded, which is very much kind of what was going on in that period in music, especially avant-garde music. So he was in Union Square in San Francisco and recording a preacher named Brother Walter who was essentially preaching about the end of the world, and he had this, this fragment of him saying, “It’s gonna rain.”

STEVEN JOHNSON: Several months later, Reich was experimenting with a tape loop of Brother Walter’s voice. He set up two tape machines to play the same clip simultaneously, but it turned out one of the machines was playing the sound slightly slower than the other one. The effect was what composers call a canon.

ALEX ROSS: And this was a very particular kind of canon where, you know, the one strand was slowly speeding ahead of the other, but, but he heard these, these constantly shifting patterns, as, you know, the voices, you know, went out of alignment, and sort of this increasing disparity. And then he took that idea and applied it to instruments.

STEVEN JOHNSON: Technically speaking, the machine was failing Reich, failing to reproduce the sound accurately. And yet he found something arresting in the sound of that failure.
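
[A rough sketch of the phasing effect Reich discovered: two copies of the same loop, one played back fractionally slower, drift steadily in and out of alignment. The Python below is illustrative only; the loop content and the speeds are invented stand-ins.]

    import numpy as np

    sample_rate = 44100
    t = np.arange(0, 0.5, 1.0 / sample_rate)
    loop = np.sin(2 * np.pi * 330 * t) * np.exp(-6 * t)   # a half-second stand-in for the recorded phrase

    def play_loop(loop, speed, seconds, sample_rate=44100):
        # Repeat the loop at a given playback speed, like a tape machine's read head circling the loop.
        n = int(seconds * sample_rate)
        positions = (np.arange(n) * speed) % len(loop)
        return loop[positions.astype(int)]

    machine_a = play_loop(loop, 1.000, 30)    # first tape machine at nominal speed
    machine_b = play_loop(loop, 0.997, 30)    # second machine, fractionally slow
    phased = machine_a + machine_b            # the two copies slowly slip out of phase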

[CLIP FROM STEVE REICH’S “IT’S GONNA RAIN”]

STEVEN JOHNSON: Reich used the phasing loops of Brother Walter in an early composition called “It’s Gonna Rain.” He then applied the same technique to recurring patterns of musical notes, using pianos, organs, and string instruments, in influential minimalist pieces like “Piano Phase” and “Music for 18 Musicians.”

[CLIP FROM STEVE REICH’S “MUSIC FOR 18 MUSICIANS”]

STEVEN JOHNSON: Steve Reich’s tape loop projects inspired a whole generation of experimental composers, who then went on to do even more adventurous work with electronic instruments.

CARLA SCALETTI: One night, it was pretty late at night, and I was working in the studio with tape, which at that time meant that we took razorblades and tape and cut up pieces of the tape and could rearrange the order of events.

STEVEN JOHNSON: That’s Carla Scaletti, harpist, composer, and music technologist.

CARLA SCALETTI: And one moment, maybe because it was so late at night, I just suddenly had this epiphany that I’m holding sound in my hands and I can rearrange time just like, or similar to the way that a filmmaker can rearrange time by cutting up film and editing it and rearranging it. And it was the first time I, I felt, I really deeply felt in my gut that electronic music was something radically different from all the music that had come before.

STEVEN JOHNSON: Steve Reich’s tape experiments wouldn’t just influence the world of avant-garde classical composition. Brian Eno began experimenting with tape loops in his early ambient recordings in the 1970s.

BRIAN ENO: The important thing there was to do with the possibilities of tape recorders, which actually was the basis of a lot of the things I did. That was why I was so interested in the recording studio, because as soon as music’s on tape, it, it’s not ephemeral anymore. It’s suddenly plastic. You know, it’s malleable. You can start to do things with it. Now, that sounds so obvious now, but it wasn’t very, very obvious to most people in the 70s even, when people would say, when you told them how you were making records, they’d say, “That’s cheating, isn’t it? Anyone could do that.”

STEVEN JOHNSON: And then it becomes sampling, right? Then it becomes a central part of the vocabulary.

BRIAN ENO: Cheating is actually all I ever do.

STEVEN JOHNSON: Collaborating with David Byrne, Eno laced their 1981 record, My Life in the Bush of Ghosts, with rhythmic loops of speech.

[CLIP FROM BRIAN ENO AND DAVID BYRNE’S “HELP ME SOMEBODY”]

STEVEN JOHNSON: That record had a profound impact on the early rap producer Hank Shocklee, who helped create the layered, pulsing vocal samples of early Public Enemy records like “It Takes a Nation of Millions to Hold Us Back” and “Fear of a Black Planet,” arguably two of the most influential records of the 1980s.

[CLIP FROM PUBLIC ENEMY’S “IT TAKES A NATION OF MILLIONS TO HOLD US BACK”]

STEVEN JOHNSON: Today, you can hear audio descendants of Reich’s rhythmic spoken word tape loops on songs by Kendrick Lamar and Kanye West.

[CLIP FROM KANYE WEST’S “RUNAWAY”]

STEVEN JOHNSON: And, of course, digital technology now allows us to stretch and contort our voices in all sorts of new ways, like reverb, an effect as old as the human voice itself, and a concept explored by Carla Scaletti in her piece “Conductus.” Scaletti conceived the piece specifically for a German church, and used digital software to capture and manipulate the way the space itself was altering the sound of the singers. The reverb of the church becomes its own instrument.

CARLA SCALETTI: Franz had warned me that the reverberation was really intense in this church and that it could be difficult to write music for it that wouldn’t get lost in all this reverberation. So that’s when I decided to make reverberation another voice of the piece, or make it a central performer of the piece, because I had this, this vague notion that in the reverberation, if we would excite the reverberation in that church, we might hear echoes and reverberations of all the other words and all the other sounds that have been made in that church.

So every space has a kind of a unique signature called its impulse response. If you walk into a space and just clap your hands, you’ll hear its unique impulse response, and in fact I’ve heard about people who are blind being able to do echolocation by making clicks and listening to that impulse response, and actually learning something about the space, like the size and whether there’s something in front of them. So I wanted to excite that impulse response, and that’s why I decided to put the singers in tap shoes. So every time they take a step, they’re exciting that unique impulse response.
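
[Scaletti’s “unique signature” is the idea behind convolution reverb: record a room’s impulse response, the clap she describes, and convolve any dry sound with it to place that sound inside the room. A minimal sketch, with synthetic stand-ins for both signals:]

    import numpy as np
    from scipy.signal import fftconvolve

    sample_rate = 44100
    t = np.arange(0, 1.0, 1.0 / sample_rate)
    dry = np.sin(2 * np.pi * 440 * t)                               # a dry one-second tone

    # Stand-in impulse response: decaying noise, roughly what a clap in a reverberant church looks like.
    decay = np.exp(-3 * np.arange(2 * sample_rate) / sample_rate)
    impulse_response = np.random.randn(2 * sample_rate) * decay

    wet = fftconvolve(dry, impulse_response)                        # the tone as it would sound inside that space
    wet = wet / np.max(np.abs(wet))                                 # normalize so it doesn't clip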

STEVEN JOHNSON: But in the end you don’t even need external technology, not tape loops or digital synthesizers, to explore the sonic possibilities of the human voice.

CAROLINE SHAW: [Sings note] I can sing that same note in a head voice [sings note] or more like an opera singer [sings lower note]. They’re totally different colors, but it’s the same notes, the same voice.

STEVEN JOHNSON: That’s Caroline Shaw.

CAROLINE SHAW: My name is Caroline Shaw. I’m a musician.

STEVEN JOHNSON: A violinist, singer, and composer, Shaw is best known for compositions that push the boundaries of the human voice in all of its strange and beautiful possibilities.

CAROLINE SHAW: It started out just by being interested in the sound of vocal fry, which is this, “Uh” sound, you know? I call it the sort of like, late night bar white girl sound. “Oh my god.” But if that just emerges into what we think of as sort of just singing, [sings note], where’s the line between those two things, where does it break, and how do you project that kind of energy into a piece of music?

I think I’ve always, [phone beeps] oh sorry, been interested in linguistics, so listening to, to different sounds of phonemes, morphemes, dissecting language in that way since I was like in high school. [Enunciates different vowel sounds]. I think I became obsessed with thinking about the cavity of the voice as this cathedral, or concert hall, that can be shaped in different ways that creates different, sort of, natural resonant frequencies inside. And the sort of simplest demonstration of that is going from [sings low “oh” note], almost like saying the Russian “L”, like “bolago,” but if you sustain that [sings longer “oh” note] it’s this beautiful otherworldly color. And then the same thing, [sings “oh”, “ah”, “ah” at different notes]. If you really harness those, and don’t let them just pass by in language like we do if we’re singing a song or speaking words, but really sit on them, I find I’m, like, obsessed with it.

[CLIP FROM CAROLINE SHAW’S “PARTITA”]

STEVEN JOHNSON: Like Eno before her, Caroline Shaw’s sonic explorations have crossed over into the popular domain. She’s recently collaborated on several tracks with Kanye West.

CAROLINE SHAW: I’ve done a little bit of production for things with Kanye where it’s combining the sound of the voice and really, you know, highlighting the flesh of it, and the softness of it, in the 2k, 3k region that really creates that particular vocal timbre, but combining it with a certain synthesizer sound or a vocoder, where you can create, you know, an entire choir, an epic synthetic choir, that you couldn’t otherwise. That is a really, that’s what I’m interested in right now actually.

[CLIP OF KANYE WEST’S “SAY YOU WILL”]

STEVEN JOHNSON: This is the way innovation happens often in music. Someone pushes a machine, or an instrument, or their own voice to make a new kind of sound. And initially it just perplexes most of us, or just sounds like noise, and then it sounds interesting and provocative in an experimental sort of way. And then slowly the popular ear comes around to it, thanks to connectors like Brian Eno or Hank Shocklee or Caroline Shaw. But sometimes sounds stay at the margins. They remain stubbornly experimental. I asked Alex Ross why that happens.

ALEX ROSS: Yeah, that’s a really interesting question. And you’re right that, you know, more extreme forms of noise, in a weird way, are more easily assimilated, while a much more elementary combination of tones, in the form of a dissonant chord, still causes so much mental trouble for people. There seems to be some level of dissonance past which, you know, you just enter into a non-commercial area.

STEVEN JOHNSON: But even if those atonal chords haven’t found their way into the Top 40, when you look back at the history, from Brother Walter chanting, to the howls of Jimi Hendrix’s guitars, to Caroline Shaw’s obsession with vocal fry, it is kind of amazing how our ears can be taught to find music in the most unlikely sounds, if we have the right guides. I think back to Antenes, experimenting with her electronic switchboards. I’m sure, for some of us, it doesn’t quite sound like music to our ears yet, but give it time.

[END CREDITS]

[EPISODE ENDS]



How We Get To Next was a magazine that explored the future of science, technology, and culture from 2014 to 2019. Wonderland is a ten-part podcast series from Steven Johnson about the past and future of play and innovation. Featuring conversations about creativity and invention with leading contemporary scientists, programmers, musicians, and more, the show is brought to you by Microsoft, and by Riverhead Books.
