I've written here before about the design process of theatrical sound. But I've also gone through a whole string of shows lately in which that clean and methodical process has been truncated and adapted.
You usually start with a Script; with the complete script, matching page numbers, the cuts made in rehearsal properly annotated. Except that reality for me, of late, has been multiple versions of the script floating around with different page numbers and different cuts -- in fact, large cuts and alterations never documented by Stage Management in any written form.
On at least one show, I put the "Script" aside and mixed the show cold, based on what actually happened on stage. That was how poor the script was as a guide to what would actually happen.
You also would like to start with a Character Chart; with a single document that lists which actor plays which of their various roles, so when the MERMAIDS come out to sing you know that two of them were SAILORS in a different scene and one is YOUNG CAPTAIN HOOK, and thus you know the correct microphone channels to turn up.
Well, directors don't get the need for this. More often than not, any list I get from a director arrives too late to do me any good, and as often as not it is wrong, too. The costumer is often a better source for that information, but on one show I went so far as to circulate a poll to the individual actors, asking them which numbers they sing in!
Because even when the data is theoretically right, it simply isn't on the Director's radar that three of the MERMAIDS don't sing on the first song, and on the second song two of them are in costume change and enter late. And both of those are absolute death for a mic mix; bring up the theoretical chorus from the paperwork and the audience is treated to dressing room chatter, backstage gossip, and someone swearing as they struggle to get into a costume.
The upshot is that the smart way to program your scenes is to wait until the cast is doing full runs in costume. And when each song begins, put on headphones and start working through the mics. Save the ones that are singing; cut the ones that aren't useful in that number, or that talked, or that made a noise, or that sang badly.
The theory for sound effects is you spot the script yourself, present a draft cue list to the Director, this gets discussed, you try out critical cues in rehearsal, then you sit down with either your good monitors or -- preferably -- the actual speakers you will be using and construct your cues. Then the tech rehearsals are just a matter of tweaking the levels and the placement in time.
Reality is that many Directors have too much on their plates to talk sound with you, and they've made so many (undocumented) changes to the script, you really have no idea what is going to be required until you see the cast doing it. So the script says there was supposed to be a WOLF. Turns out that didn't work for them so they changed it to a SNAKE, and since they had a lot of ensemble that needed more to do, they made it THREE SNAKES. And four hours before rehearsal you finally get an email from the Director in which (after apologizing for not having had time to read your draft cue list) they give their own demands, including the sound of THREE SNAKES menacing the heroes.
Okay, sure. You pull and purchase and mix up something that will go with a trio of ensemble members wriggling menacingly across the stage. And then you get into the actual tech, and you see three people in purple pants jumping up and down like pogo sticks in an upstage corner. That's your "Snakes." Throw away another four hours of work!
So it almost makes sense to construct the majority of cues as you see them and as the Director calls back across the seats "I need a helicopter sound here" (and you can see that what they need is something to go with the actor who is flapping his arms around and spinning in a circle).
Two things make this much faster. I've been using Dropbox and networking my machines, so essentially all my sound effects files are visible from wherever I am. A lot of the time I don't bother to pull and sort any more; I just audition in the space as I'm building.
And when there is a simple cut, volume change, or layer, I do it in QLab instead of creating a single sound file. It is pretty much as fast to take a thunder sound and add an auto-follow cue that fades the thunder after two seconds, as it is to import it into Audacity and delete half of it and add a fade-out and re-save. And when you do it in QLab, you can alter the result on the fly; make the thunder shorter or longer with a couple of keystrokes.
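For comparison, here is roughly what that Audacity round trip amounts to when you spell it out as code: a destructive trim-and-fade that bakes the change into a new file, which is exactly what the non-destructive QLab cue lets you avoid. This is only a sketch using the Python standard library; the generated sine wave stands in for a real thunder recording, and the file names are made up.

```python
import math
import struct
import wave

RATE = 44100  # CD-quality sample rate, mono 16-bit throughout

def write_sine(path, seconds, freq=110.0):
    """Make a stand-in 'thunder' file: a mono 16-bit sine wave."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        frames = b"".join(
            struct.pack("<h", int(16383 * math.sin(2 * math.pi * freq * t / RATE)))
            for t in range(int(seconds * RATE)))
        w.writeframes(frames)

def trim_and_fade(src, dst, keep_seconds, fade_seconds):
    """The destructive version of the QLab trick: cut the tail, fade out, re-save."""
    with wave.open(src, "rb") as r:
        params = r.getparams()
        total = int(keep_seconds * params.framerate)
        data = bytearray(r.readframes(total))  # keep only the first N seconds
    fade_frames = int(fade_seconds * params.framerate)
    for i in range(total - fade_frames, total):
        gain = (total - i) / fade_frames       # linear ramp down to silence
        sample = struct.unpack_from("<h", data, i * 2)[0]
        struct.pack_into("<h", data, i * 2, int(sample * gain))
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.writeframes(bytes(data))

if __name__ == "__main__":
    write_sine("thunder.wav", 4.0)
    trim_and_fade("thunder.wav", "thunder_short.wav", 2.0, 0.5)
```

To change the length again you have to run the whole thing over and save yet another file, whereas the QLab auto-follow version is a two-keystroke tweak. That is the point of the paragraph above.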
Lastly, of course, I assign sound effects to hotkeys on the laptop, or to keys on a MIDI keyboard, so I am not restricted to linear playback. I can play a cue, omit that cue if it doesn't work, use it in several places if it works well.
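That non-linear triggering doesn't have to live inside QLab either: QLab listens for OSC on UDP port 53000, and `/go` and `/cue/{number}/start` are part of its OSC dictionary, so a small script (or a MIDI-to-OSC bridge) can fire arbitrary cues. A minimal standard-library sketch follows; the key-to-cue mapping and the cue numbers ("thunder", "snakes") are hypothetical, whatever you named them in your workspace.

```python
import socket

QLAB = ("127.0.0.1", 53000)  # QLab's default incoming OSC port (UDP)

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then zero-padded to a 4-byte boundary
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str) -> bytes:
    """Encode an argument-free OSC message: padded address + ',' type-tag string."""
    return _pad(address.encode("ascii")) + _pad(b",")

# Hypothetical mapping of hotkeys (or MIDI notes) to cues in the workspace
KEYMAP = {
    "t": "/cue/thunder/start",
    "s": "/cue/snakes/start",
    "g": "/go",  # fall through to whatever is next in the cue list
}

def fire(key: str, sock=None) -> bytes:
    """Build the OSC packet for a key; pass a UDP socket to actually send it."""
    msg = osc_message(KEYMAP[key])
    if sock is not None:
        sock.sendto(msg, QLAB)
    return msg

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        fire("t", sock)  # trigger the thunder cue, out of cue-list order
```

The same cue can be fired from any key, or skipped entirely, which is exactly the freedom over linear playback described above.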
Oh, I should also have mentioned: I always throw a master volume fader onto the top layer of my mixing board, so during the show I can adjust the volume of the sound cues I am playing back. Because even when you do have time to tweak levels and get it all perfect, performance levels change, dynamics change, audience noise changes, and you need to adjust.
So the pretty theory is still there, but the practice -- especially in the short-tech, creatively-charged, younger-cast shows I've done lately -- is likely to be a lot more casual.