Monday, December 27, 2010

The Basics of Sound Design II

Let's recap.

Of all the Sound Designer's tasks, the most time-consuming creative one is finding and/or creating the sound effects and transitional cues that will accompany, enhance, and frame the action of the play.

How much sound can do, and how it will do it, depends on the script, the particulars of the production, and how much you can talk the director into it (!).



When we talk about technology, the central question is one of reproduction. How are the sounds to be cued, created, and sent out into the space? A useful distinction divides the possible sounds into three broad groups. The first is practical sounds: crash boxes, starter pistols, percussion players, and other mechanical devices that make sound live. The second is pre-recorded sounds, stored on electronic media and played back. The last is hybrids: cues that may be implemented electronically, but are triggered by something other than a traditional "called" cue.

Working from the inside out, the traditional "tape" cue lives on media like CD or hard disc (or, at a pinch, an iPod). Each individual sound effect or operator action is given a cue letter, and those letters are placed in the Stage Manager's book. When the correct moment comes up in the play, the Stage Manager "calls" the cue over headset: "Sound Cue H, Go!" The operator then pushes whatever button makes that sound play (or fade, or stop, or whatever is called for by that cue).

In a typical theater you have a playback deck and a mixing board connected to several amps, each routed to various speaker options. You can thus send different sound cues at different volumes to different speakers, changing their character; the pre-show music might go out the big house speakers, for instance, while the Victrola cue goes out a speaker hidden inside the prop itself.

In many theaters this is still CD players connected to a mixing board, but increasingly computer-based systems are taking over. The system I use now is a Mac laptop running QLab, a shareware sound playback program; this in turn connects to an eight-output FireWire audio interface, letting me program which set of speakers each cue is sent to -- and even change that routing during playback with another cue, so complex surround effects can be built right into the programming.

The other advantage is that it does not require technical skills on the part of the operator; they have only a single mouse-click to worry about, whether the cue is as simple as playing a single gunshot or as complex as fading the pre-show music, starting an intro, and cross-fading that intro into an on-stage Victrola that continues to "play" the same piece of music.



Mechanical effects are still alive in theater. Door slams and gunshots almost always sound better performed live -- the latter particularly when the gun is on stage and visible; a recorded bang never quite coordinates properly with the firing of a prop gun!

Sometimes the mechanical sounds will be a design decision. The Mystery of Edwin Drood, for instance, calls for an entire second orchestra of wind machines, coconuts, slapsticks, and other Victorian-era mechanical sound effects, performed in full sight of the audience.

A similar solution is to have the percussion player perform certain effects. The reasoning is that if an effect has to land in a specific musical place, a drummer or keyboard player is better positioned to get it on the beat than a distant sound operator who is in turn beholden to a Stage Manager. In addition, the stylized quality of percussion toys, as opposed to recorded effects, often suits the feel of certain productions better.

I mentioned in the previous entry the use of a live, and ear-splitting, factory whistle in Sweeney Todd. Actually, though, the last time I did the show, the live and very real air-powered whistle wasn't ear-shattering enough, so we amplified it -- making it a kind of hybrid sound.


In the category of hybrid effects fall all the sounds that include some electronic component, or pass through the sound system, but are not explicitly run as a Stage Manager-called cue. A live backstage microphone for the other end of a telephone call, for instance.

I find that doorbells are best when the actor at the door presses the bell themselves. The simple form of this is, well, a doorbell. But ordinary electric doorbells are often not loud enough, or clear enough, to work on stage. So you amplify the bell, or you use a pre-recorded cue that is triggered by the actor; they still press a button to ring the bell, but the resulting sound is different.

This is another place where QLab works so well for me: since I can trigger a cue via MIDI, I can control cues from any device that can output a MIDI event. And since I built my Arduino-based trigger-to-MIDI device, I'm able to create those events with something as simple as a doorbell button or as complex as a proximity sensor.
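
Just to make the idea concrete, here is a minimal sketch of the sort of thing such a trigger box does -- not my actual device, just an illustration with made-up pin and note assignments. An Arduino watches a doorbell button and, on a press, writes a MIDI Note On out its serial port at the standard MIDI baud rate; the playback software is then set to fire the bell cue on that note.

    // Hypothetical trigger-to-MIDI sketch: one button, one MIDI note.
    // Assumes a pushbutton wired between pin 2 and ground, and the
    // serial TX wired to a MIDI out circuit (or a serial-to-MIDI
    // bridge on the computer side).

    const int BUTTON_PIN = 2;       // doorbell button input
    const byte MIDI_NOTE = 60;      // note the playback software listens for
    const byte MIDI_CHANNEL = 0;    // MIDI channel 1 (0-indexed in the status byte)

    int lastState = HIGH;           // HIGH when not pressed (internal pull-up)

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      Serial.begin(31250);          // standard MIDI baud rate
    }

    void loop() {
      int state = digitalRead(BUTTON_PIN);
      if (state == LOW && lastState == HIGH) {      // button just pressed
        Serial.write((byte)(0x90 | MIDI_CHANNEL));  // Note On status byte
        Serial.write(MIDI_NOTE);
        Serial.write((byte)127);                    // full velocity
        delay(20);                                  // crude debounce
      }
      if (state == HIGH && lastState == LOW) {      // button released
        Serial.write((byte)(0x80 | MIDI_CHANNEL));  // Note Off
        Serial.write(MIDI_NOTE);
        Serial.write((byte)0);
        delay(20);
      }
      lastState = state;
    }

Swap the button for a relay, a mat switch, or a proximity sensor and the playback end never knows the difference; it just sees a MIDI note.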

New tools are opening up what used to be the closed boxes of "cues" or "sounds," allowing a designer to make changes in real time based on actions on stage. As a simple example, QLab will play as many simultaneous cues as your hardware can manage. Instead of being forced to think one sound = one event = one cue = one CD track, you can have entire sequences of sound built up from different simultaneous events. You can also indulge in more random-access sounds, instead of having every cue locked into the order of a CD playlist.

On a recent show, I had a train cue which combined the sound of the engine and track noise (one sound cue to start, one sound cue to fade down) with a hot-button cue to play the whistle. I could manually add in as many whistles as I wanted, whenever they "felt" right in the action, during the performance.

(We had an even more elaborate train in a later show; a selection of "chuff" sounds was assigned to a MIDI keyboard and the Music Director was able to perform a simplified train IN TEMPO over a musical number!)



That's the basics of reproduction, at least as far as this article will cover it. The next important bit of technology is paperwork.

In the previous article I mentioned the Show Book. This is a three-ring binder that contains a marked-up master script with all cues, as well as all the other paperwork you need to refer to in making the show happen.

Among this paperwork is your Cue Sheet. It gives each cue a designation, a description, a page number, and notes -- and, on older playback systems, details such as volume, speaker assignments, source, and processing.

These days, most of the volume and speaker assignment is programmed into the playback device itself (a computer), and the processing is similarly built into the cue.

I should note: my Cue Sheets also indicate non-called cues, like this:

D     Victrola (spot)            pg. 24    // VISUAL: Mary turns crank, out with Cue E
N/A   Gunshot (live)             pg. 26
E     Victrola Explodes (spot)             // IMMEDIATELY on gunshot!

The Cue Sheet is your communication and discussion sheet with the Director and Stage Manager; you give them copies to keep them informed of your plans, and date them so updates can be kept in order.

(Because of the ongoing nature of this dialog, my early cue sheets contain no information about playback, but copious notes on the concept of the cue: "D (Victrola) pg. 24; 'Dance of the Sugar Plum Fairies,' processed to sound like an old-time recording -- I don't think the Al Jolson will work here." As the show gets closer and the cues begin to appear in rehearsal, I cut out the long descriptions and the side discussion.)

The next important piece of paper is internal: your pull list. This is where you figure out what the actual sounds are (as opposed to the sound EFFECTS, which are built up from multiple sounds), and start scribbling ideas about where to find them. My word for these, borrowed from music-sampler terminology, is "partials."



The Workstation. These days, it's all on computer. My early sound designs I assembled on reel-to-reel, dubbing from tape to tape, from record to tape, sometimes adding layers from an old sampler workstation as well. Then came the computer -- and almost instantly, DAW software: Digital Audio Workstation software that allowed multi-track editing.

My primary composition tool now is Cubase, music sequencing software. With its plug-in architecture, digital audio processing tools, and multi-track editing, not to mention the seamless integration of MIDI layers, it is the tool in which I create all but the simplest effects.

But the first step is collection. I collect voraciously. I don't just pick "A" horse. I go through my libraries and pull every horse whinny that sounds even close to what I'm looking for. I import them all to the laptop's hard disc, establish a folder and file structure for the show, and then audition them over and over.

This is where the pull list comes in so handy. You pull from libraries, you purchase, you set up Voice-Over recording sessions, you set up foley sessions and set dates for location recording. As you collect material, you mark it off the list.

The great advantage of the laptop is that I can test sounds in the actual space. I can connect to the sound system and play them through the actual speakers the final cue will be played on. I can listen to what it will sound like to the audience. I can try them out live in early rehearsals, and I can try out ideas on the Director. I can also make changes very, very quickly.

For some shows, I am able to try out different ideas during rehearsal. Play one sound, and when it doesn't work, or the director doesn't like it, try a different one. This way I can work in ensemble to narrow down to the sound that best fits the actual production, rather than having to work at home and hope the final result will fit in.

(One show I came on at the last minute. I actually worked out of iTunes, taking advantage of the large library of period music I'd stuck on my laptop's hard drive -- throwing out idea after idea right there during the rehearsal, sitting at the tech table with the director. In a couple of days, we had roughed out the music cuts that would work for the show.)

One of the related tools the multi-track editor gives me is the ability to use a reference track. If there is a complex effect or sequence of sounds or musical underscore that has to go under a specific section of dialog, I can record that very dialog in the theater, with the actor's own in-scene timing, and import it into Cubase.

I did this for the final "Movie" scene of Play It Again, Sam, for instance. I recorded the scene between "Bogart" and "The Girl," and brought it into Cubase along with several tracks of music from a re-recording of Adolph Deutsch's original score for The Maltese Falcon. Then I assembled a careful cross-fade of various bits of the music to follow the contours of the mini-story being played out. The actor in this case was grand; he stayed within a couple of seconds of the original tempo throughout the entire run.

(In many cases you cannot trust the actor to remain that on-tempo. If you want to cut sound or music this close -- the technical term for it is "Mickey-Mousing" -- you need to build in devices that allow either the actor to follow, or the operator to adjust. I'll say more on this when I talk about composing for the stage.)

Again, the great advantage of doing this on a laptop is that you can tinker with the cue over live rehearsals, and even into Tech Week -- the multi-track sequencer, with all its built-in EQ and reverb and so forth, plays back with a single button press, so you can run the complex cue yourself until it is finally nailed down -- at which point you render it, stick it on a thumb drive, and drop it into the waiting slot in the programmed show.



I do small amounts of tinkering in Audacity, and a few other processing tools, but Cubase and a host of VST plug-ins cover most of the processing I need. Increasingly, I am spending time not just picking the right sound, but processing the sound to bring out its character. I find I need not just "the sound of a horse," but a one-second sound bite that screams "Horse!" to anyone who hears it. Placement helps. Picking the right samples helps. But sitting in the theater dialing in the EQ to bring out the character of the sound helps a great deal.

In many cases a little more flavoring is going on. Say I've got a voice on a phone. On the hard disc is the full session with the voice talent. The first task is to identify, trim, and label the takes. Then audition them. Then, when I have the take that works, I bring it into Cubase.

Process it with a compressor and EQ to mimic the narrow dynamic and frequency range of a telephone. Add noise to taste, from guitar-amp simulators. Go into the library of sound effects for dial noise, hang-up noise, and the like, and make a multi-track cue that combines all of these into a little story about a phone call.
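
For the curious, the core of that "telephone" EQ move is nothing more than band-limiting the voice to roughly 300-3400 Hz. In practice I do it with plug-ins inside Cubase; the sketch below -- a deliberately crude pair of one-pole filters, with names and numbers of my own choosing -- only illustrates the idea.

    #include <cmath>
    #include <vector>

    // Crude "telephone" band-limit: a one-pole high-pass near 300 Hz followed
    // by a one-pole low-pass near 3400 Hz. A real cue uses proper EQ and
    // compression plug-ins; this only shows the shape of the idea.
    std::vector<float> telephonize(const std::vector<float>& in, float sampleRate) {
        const float kPi     = 3.14159265f;
        const float lowCut  = 300.0f;    // strip the body and boom below the phone band
        const float highCut = 3400.0f;   // strip the "air" above the phone band

        // One-pole smoothing coefficients: a = 1 - exp(-2*pi*fc/fs)
        const float aLow  = 1.0f - std::exp(-2.0f * kPi * lowCut  / sampleRate);
        const float aHigh = 1.0f - std::exp(-2.0f * kPi * highCut / sampleRate);

        std::vector<float> out(in.size());
        float lowState  = 0.0f;  // running low-pass at 300 Hz (what we subtract)
        float highState = 0.0f;  // running low-pass at 3400 Hz (what we keep)

        for (std::size_t n = 0; n < in.size(); ++n) {
            lowState  += aLow  * (in[n] - lowState);    // content below ~300 Hz
            float hp   = in[n] - lowState;              // high-passed voice
            highState += aHigh * (hp - highState);      // then low-pass at ~3400 Hz
            out[n] = highState;
        }
        return out;
    }

    // Usage (assuming a mono buffer of samples): auto phoneVoice = telephonize(take, 44100.0f);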

Often the vocal recording will need minor surgery. Maybe he got suddenly louder at the end. Compression won't fix it; you have to go in manually and draw a volume envelope or record a fader move. Perhaps she popped some plosives into the mic. Zoom way in to sample resolution and trim the clips down. Maybe there are long pauses, or extra "ums" and "ehs" in there -- trim them up, perhaps going multi-track so you can work by trial and error instead of committing to a specific edit. Perhaps he even missed a word; with luck, you can cut in one, or even a single syllable, from a different take.
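
That "volume envelope" is less mysterious than it sounds: conceptually it is just a gain curve multiplied against the samples. The DAW does this for you when you draw with the mouse; the little function below, with purely illustrative names and numbers, spells out the same move.

    #include <vector>

    // Apply a hand-drawn linear gain ramp to a region of a clip, e.g. to
    // tame a take that gets suddenly louder near the end. In a DAW this is
    // the volume envelope you draw; here the arithmetic is spelled out.
    void applyGainRamp(std::vector<float>& samples,
                       std::size_t startSample, std::size_t endSample,
                       float startGain, float endGain) {
        if (endSample <= startSample || endSample > samples.size()) return;
        const std::size_t span = endSample - startSample;
        for (std::size_t n = startSample; n < endSample; ++n) {
            float t = static_cast<float>(n - startSample) / static_cast<float>(span);
            float gain = startGain + t * (endGain - startGain);  // linear interpolation
            samples[n] *= gain;
        }
    }

    // Example: ease the last two seconds of a 44.1 kHz mono take down to 60%:
    // applyGainRamp(take, take.size() - 2 * 44100, take.size(), 1.0f, 0.6f);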

Within a program like Cubase, everything is non-destructive; the edits happen at run-time. You can make a trim, then undo it. You can add reverb, then dial it down when it doesn't sound right in the theater. Unlike the old tape days, you can always go back, right back to the beginning if need be.



And there's another thing you can do. With certain kinds of complex sound effect, a number of different things might want to happen in some relationship with the action. Say a car passes right to left. Say a series of explosions occur. Say a nervous horse is pacing and whinnying during a dialog sequence. The trick is that instead of mousing each sound into place one by one off-line, and playing the result back to see if you placed them right, you assign the sounds to a sampler and then perform the sequence on a MIDI keyboard.
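
The sampler end of that trick is simply a keymap: each MIDI note is bound to a sound file, and striking the key fires the file. A bare-bones sketch of the mapping, with made-up file names:

    #include <iostream>
    #include <map>
    #include <string>

    // A toy sampler keymap: MIDI note number -> sound file. Playing C3, D3,
    // E3... on the keyboard "performs" the horse in real time instead of
    // placing each whinny by mouse. File names are purely illustrative.
    int main() {
        std::map<int, std::string> keymap = {
            {48, "horse_whinny_short.wav"},   // C3
            {50, "horse_whinny_long.wav"},    // D3
            {52, "horse_snort.wav"},          // E3
            {53, "hoof_shuffle_loop.wav"},    // F3
        };

        int note = 50;  // stand-in for an incoming MIDI Note On
        auto hit = keymap.find(note);
        if (hit != keymap.end()) {
            std::cout << "Trigger sample: " << hit->second << "\n";
        }
        return 0;
    }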

This is incredibly powerful. For the last few train effects I did, I created the playable train and literally played a leaving-the-station and an arriving-at-the-station on the keyboard. A second and third recording pass added whistle and bell, then it was into the editor to add track noise. And then overall processing to add a little Doppler shift, plus volume and pan curves.

Real-time performance is a powerful compositional tool.



The last of the basic tools is of course the most important: your own ears. You need to be listening constantly. What do things really sound like? What useful sounds are happening around you that you could capture and use? Each show will, and should, draw upon your ability to research, absorb, and understand: the distinctions between types of trains, what different winds are "saying" to you emotionally, what is and isn't in period, and what will or won't seem in period to the audience.

As in the sciences, you need to develop a deep mistrust of your own memory and senses. Don't think you know what something sounds like. Know what it sounds like; go back and listen, trying to clear your mind and hear what is really there. Then use that knowledge to construct something artificial that will fool the audience the same way you were once fooled.

And that's all I have to say tonight.
