Friday, December 31, 2010

Arduino, ti amo!

Okay, so I signed on to two more shows, and worked all week. No time for 3D modeling. Did do a little electronics, though. Yes, I do get into a lot of different things!

This particular bit of electronics came out of my experiences composing and running sound for a show on short notice, and my discoveries of the opportunities new technology is making available. In particular, Adaptive Audio; the ability to modify music and sounds on the fly and respond directly to the actors on stage (instead of making the actors slave to the sound design).

I had a test project to try out. A proof of concept. I've done several shows now where an actor "fired" a prop gun, and the sound came from the sound system. And it never sounds right. It's just not possible for a sound technician to press "play" on a CD player (or whatever) exactly at the moment the actor squeezes the trigger.

(And, no, blank-firing guns aren't always the right effect. Having a sampled sound effect means you can tune the emotional effect of the sound; perhaps big and Hollywood, perhaps a period-sounding "crack" instead. With blanks, there's one result and one result only.)

So my thought was; put a radio transmitter in the gun. There's lots of things you could do with the received signal: but it seems most flexible to transform it into a MIDI event. Many pieces of theater gear already speak MIDI; light boards, samplers, some show control software (like for instance Qlab, the program I now run all my sound on.)

The actor pulls the trigger, that closes a switch; and sounds play, light effects start up, pyro is fired, whatever. Once you have a common-format command in a wire, all sorts of things can be commanded by it.

But how to hook it up? It isn't as simple as splicing the right kind of cable. MIDI messages are serial data streams with a very specific format.

Enter the microprocessor.

You can almost think of a microprocessor as a very large hammer. It is overkill for many jobs. But at five bucks a pop it can be cheaper than wiring up a bunch of logic chips (or relays...!) to do the same job.

Take something simple like a button de-bounce. What is that? Well, out in the real world a button has one or more metal leaves that are pushed into contact with each other. As they come together, they spark, jump, bounce back a little, etc. Each of those little jumps is like a little button push of its own. Wire up a counter to that button and it might count one, three, even a dozen times each time a human being put their thumb down and pushed.

So you can do little tricks with flipflops, (the old 7474 chip is nice for this), or use a pair of inverters and a capacitor to store a small delay (the 7404 hex inverter was cute for this). Or you can do something completely analog with an RC circuit (resistor and capacitor).

Or you can do it in code. The pseudo-code might be as simple as:

if (digitalRead(button) == HIGH)
   { pressed = 1 }
else
   { pressed = 0 }

Obviously even five bucks is too much to spend to do that for one button (oh, and an embedded processor still needs a clock, I/O, a power regulator, pull-up resistors, et al). But for six buttons? What about six buttons, and you have other tasks for it as well?

Such as logic tasks. Say you have a moon effect that should only happen after the sunset sequence has finished running. You can set this up, pseudo-code fashion:

if (command == TRUE and sunsetEffect == COMPLETE)
   { run moonEffect }

But it is in creating a specific output stream that a microprocessor really shines. Want a motor to spin? Find a wire and a 2-way switch. Want it to rotate three times clockwise, once counter-clockwise, turn slowly a half-turn, reverse, then gradually come to full speed? Write that into a micro.

MIDI messages certainly fit the bill. Just a refresher course here; MIDI stands for "Musical Instrument Digital Interface" and it was drafted in the early 80's by a consortium of musical instrument manufacturers. It is a cross-industry standard by which keyboards, sound modules, sequencers, and so forth can talk to each other.

Most of MIDI is basically music notation for computers. It is a way of saying things like "Play middle C, really loud, and hold it down for a full quarter note."

A MIDI message is almost always a status (command) byte, followed by one or more data bytes. In the case of noteOn, the status byte is "play a note on channel 15." The next byte is "play F# above middle C" and the last is "play it pianissimo."

Which in integer form is something like "159 66 23." For historical reasons MIDI is usually written out as hex -- which is a little easier to understand than binary -- but unlike, say, Assembly, there are no standard mnemonics. Well, mostly. NoteOn, for instance, is a standard enough notation most MIDI software libraries will use it as a function.

So this brings us back to the Arduino.

Back when I was a heavy electronics hobbyist the best bet for a CPU was something like a Z80. To make it work, you needed to solder up a whole board; shift registers, I/O, clocks, program ROM, etc. And when you programmed, you programmed in Assembly -- if you were lucky! You might have to burn an EEPROM, or throw micro-switches on a hardware controller, one program line at a time.

Back a few years ago the "BASIC Stamp" made an entrance. Along with PIC chips, these were whole families of basically pre-assembled micro-computer cores. Little PC boards with the microprocessor and support all wired up for you. And packaged with them were the cables, programming adaptors, and software that ran on a host computer -- that is, on your own home computer.

Robotics people really got into them. But there are many, many applications, from hobby to semi-pro, from industry to science, for this sort of cheap, embed-able computing power.

The latest down the pipe is an astoundingly simple-to-use open source project called Arduino. The Arduino boards, based around an ATmega8 or ATmega168, include serial, USB, or Bluetooth ports built in. The software environment runs on Mac, PC, and Linux. And a bootloader is pre-burned into the chip, meaning you can program it with nothing but a piece of cable.

I picked up a pre-assembled "Diecimila" from Adafruit for a little over thirty bucks. The free environment loaded effortlessly on my Mac G4. The board is about 3" by 2" in size, and is powered from the USB cable. Plug it into your computer, run the "programming program," stick some code in the window and press the "upload" button. Dead simple.

The programming language, "Wiring," is basically C, but with lots of predefined functions that take most of the sweat and guesswork out of it. I was able to pick up the language in a couple of days -- and I am NOT a programmer.

So I got my first Arduino. First task was to write my "Hello, World!" That actually happened within the first hour. Since I had nothing but a naked Arduino to start with, my "Hello, World!" just blinked the LED pre-wired to pin 13.

Next task was to expand the I/O possibilities. I had also bought from Adafruit a "Prototype Shield" in kit form. Basically, this is an empty PC board with headers (and copies of the status LED and reset button) that sits on top of the Arduino exactly covering it.

Soldered that up, read up on serial I/O with the host computer, and wrote a "Hello, World!" that printed those words on my G4 (into a window on the programming environment) when a button was pressed on the Arduino.

That was the first couple of days. A bit earlier I had bought a cute little 4-channel RF remote and PC-board receiver from eBay. The header pins fit perfectly into one of the spare header blocks on the Prototype Shield. Soldered up power leads, ran jumpers to the Arduino's input pins -- then had to figure out how to read the output of the receiver.

Finally figured that out. So now when I pressed a button on the keychain-sized remote, a status LED would light on the Arduino, and in a text window on my G4 I'd get a message like "Channel C Received."

That last was a bit of a hack. But let me remind you; I'm not a programmer. This is the first micro I've ever played with. And I was last really busy soldering up blinking LEDs somewhere back in the 80's. The stuff now is almost disturbingly easy. One more fall-out of the digital revolution.

Close of the week (okay, Sunday) and the last step. Wired a MIDI jack to my prototype shield, as per some schematics found online. Stuck that in the "MIDI in" port of my old Yamaha QY10 (a portable sound module and sequencer). Then spent quite a few delightful hours making stupid mistakes about MIDI. I thought I was already fluent, due to a long history of composing SystemExclusive messages for sound modules (don't ask!) but I still made some really basic mistakes.

Until, finally, my test program made a note play.

At which point it was a really quick hack to open up the RF Receiver program I had written a few days earlier, plug in the correct Serial.print commands, and....

I could now play piano notes from across the room.

Which by itself doesn't sound like much. But such is the nature of a proof-of-concept. The proof-of-concept here is that a thirty-dollar embedded microprocessor can be easily programmed to create a custom MIDI message or sequence of messages upon a specific hardware trigger.

So I could, right now, hot-glue the keychain remote into my plastic replica Luger, put a gunshot sample in my Boss "Dr Sample," and have a realistic-sounding stage gun that plays out the big theater speakers.

Or stick a larger button on the Arduino, tape it to the wall of the set, and set up a MIDI-fired hot-key cue in the software that's running sound on the current show. Enter a better doorbell sound; actor-triggered, played through the sound system, with tonality and volume completely under control of the designer.

But why stop there? See, the inputs aren't limited to on or off. I can interface with capacitive sensors, motion sensors, light sensors. I can set up parameters under which the chip will react to the sensors, and I can abstract a continuous flow of data. Say, a position sensor connected to a window. When the actor opens the window, sounds of street noise fade up with it. Or, say an ultrasonic sensor; when an actor gets close to the magical tree, tiny lights begin to twinkle and shimmer in interesting patterns.

Because output is of course not limited to MIDI messages. I can throw relays, light displays, run motors, even communicate to Max or Flash on a connected computer. People have attached these things to cameras and cell phones. Made self-contained musical instruments out of them, and wearable light shows.

I look forward to seeing what theater-related ideas I can come up with for mine.

This is an old entry from my previous journal. Since writing this I have made extensive use of the core technology whose development I described here, and am on version #3 of the hardware. Among the things I have done: checked a monitor system from the stage by triggering keyboard or sound samples from my keychain remote, stepped through and listened to all the sound cues in a show from out in the house, fired off sound effects from a remote button located in the orchestra pit or just over by the Stage Manager, and even created -- but have not yet used in production -- a MIDI-controlled video projector douser.

Thursday, December 30, 2010

Stupid Sound Tricks

In the process of designing and engineering theatrical sound, I've had to do several clever (and far more patched-together) things to help advance the needs of the show. Below is a list of a few of what I think are the more amusing tricks involving actors and microphones.

Telephone Voice ("Guys and Dolls") -- for a real-time telephone conversation, the person on the other end was played by an actor backstage on a microphone. Between him and the speakers was a guitar pedal; a Boss distortion box that provides the compression and crunch of an over-driven guitar amp (the typical sound of electric guitars). The box also has a simple EQ that allowed the telephone voice to be rolled off hard on both sides, mimicking the limited frequency range of POTS (Plain Old Telephone Service). With that box, I can dial the telephone effect all the way from "slightly tinny" to "heavily distorted." Although I have done plenty of pre-recorded lines, I prefer when possible to have a live actor who can react to the living timing of the action on stage.

Peter Pan plays Captain Hook ("Peter Pan") -- to be properly convincing, we pre-recorded the lines where Peter was disguising his voice (magically) to sound like Captain Hook and fool Hook's crew. The actress merely mouthed the words. To be really convincing, though, we first had our Peter Pan in the recording session playing Peter imitating Captain Hook, so the actor playing Captain Hook could perform the recordings and imitate the way she would play Peter pretending to be Hook as played by him..... This was the director's idea, by the way, and only underlines my deep preference to having the director on hand for any voice-over session. It takes a director to get a good vocal performance from the talent, and having the director of the actual show there allows them to tailor the recording closer to their needs for the show.

The Ghost Crew ("Rosencrantz and Guildenstern") -- the lights go down, and in the blackout transition into the next scene voices are heard calling out orders, as if the stage is the deck of a ship and the crew is spaced around it. And, actually, they were. When I did the recording session, I demanded the actual stage. I set myself up with a stereo mic in the first row of the audience. When we played back the session through the house speakers, the room tone of the original recording session added to the stereo field placement of the performers to make a startlingly realistic re-creation of the original actors. When the sound cue played in that black-out, it really sounded like the actors were actually on stage. (I did a similar blackout cue which was done in-studio; I recorded adlibs from the actors, and did an impromptu foley session in the rehearsal hall -- wearing high heels on my hands, even! -- then edited it all together to produce a short script of actors hissing at each other, running across the stage, slamming doors, etc., in a brief scene of confusion. The result did not sound like it was actually happening on the stage in the dark, but it did sound like it had been recorded all in one shot SOMEWHERE.)

The Ghost of Jacob Marley ("A Christmas Carol") -- to give an extra-ghostly quality to his moans, in addition to a reverse-reverb effect, he had a delay unit on bypass. By hitting a footpedal the operator could "capture" the cry at the end of one of his lines and make it echo until the footpedal was released again. It made it possible to add long echoing tails to some of Marley's cries, without making the rest of his speech unintelligible. (Reverse-reverb algorithm was also one of my tricks for the monster in a different show -- on top of a chorus effect set to double his voice in a lower register and flange it a little. The intent was a slightly cheesy monster).

Yopp! ("Seussical") -- to get the effect of the yell that "reverberated across the universe" the actor was pre-recorded, and the sound programmed to pan around all the available speakers (both front of house and in the rear of the audience). The sound effect is triggered moments after the live actor yells, providing an effect of multiple reverberations from all directions. Some of our multiple cast gave consistent enough performances that the effect was transparent. Others, however, managed to record their "Yopps" a half or whole-step down from how they actually performed it, breaking the illusion somewhat.

Walkie-Talkie ("Rumors") -- to simulate an on-stage police officer getting called on the radio, we pre-recorded a voice and processed it into unintelligibility. Then we sent it to the external mic jack of a walkie-talkie in the booth. All the operator had to do was push-to-talk, and then play the cue, and the walkie-talkie worn by the actor crackled as if getting a call from Dispatch. It actually took a bit of experimentation to get a voice that sounded like it was telling the actor something, but from which few recognizable words could be extracted. The first thing we tried was improvising gibberish. None of our voice actors proved capable of doing this without it sounding like made-up gibberish. We next tried scripting gibberish. That also failed; the voice actors stumbled over the made-up words and their vocal cadences became unconvincing. Finally I wrote out the lines needed, abstracting from the script a telegraphic, terse, but still idiomatic set of English phrases. Then I started processing. Heavy EQ. Compression. Distortion algorithms (the same trick as the telephone voice above). But what really did the trick, and made the cues short enough to work within the timing of the on-stage dialog, was using Steinberg's "Time Bandit" to time-shift the lines to about 1/3 their original length. The end result was a comically distorted walkie-talkie voice in which you could still, barely, make out the words "Shots heard" and "221 Park Place."

Wednesday, December 29, 2010

Seeing Sound

I was hanging a mic on a drum kit today and I was aware once again that I was seeing sound.

Not literally, of course. That's called synesthesia (and is a fascinating subject on its own). No, this is more the ordinary quality of being a member of a visually-minded species that tends to organize information in visual/spatial terms.

I visualize the sound field. There is the pattern projected from musical instruments, speakers, and all sorts of other objects. There is the detection pattern of microphones. And there is the quality of the space; the room, and the air itself.

How to describe? The sound issuing from, say, a trumpet is a bit like a multi-colored squid. Most of the sound comes from the bell of the trumpet, but it is a narrow, blunted cone (this would be the semi-pointed mantle of the squid). The center of the cone is where the brightest sound is; as you move away from looking straight down the bell, the sound becomes less bright, with less highs. There are also little arms/tentacles coming off the trumpet at all angles; these are the bits of sound (also very directional) from valve noise, breath noise, and so forth. What a trumpet sounds like, then, depends on its angle to you and how far it is from you.

For some instruments this array of colors doesn't blend into the characteristic sound until you are some distance from the instrument. A flute vibrates itself, and sound also comes out of the holes, but the total sound forms several inches away. A violin even more so; from close-up you might hear the scratching of the bow, the distinct 800 Hz tone of the bridge, the booming sounds coming from the f-holes, or the more characteristic sound coming off the body; but it is from a foot or more away from the face that the violin begins to sound like a violin.

This, then, is the art of placing a microphone; to determine what qualities are needed for a particular band, instrument, number, and to place the mic to emphasize those. For a jazz-fusion group I might want the mic a foot from the face, looking flat on. For a Celtic Rock fiddle sound go right up close to the bow. For a string quartet, back off to as much as six feet away looking over the violinist's shoulder and down!

And there is more; as that complex multi-colored shape of sound issues into the environment, it reflects and is absorbed by what is around it. The higher frequency sounds will be stopped more easily, but will also reflect more easily (especially off a hard surface). The lower frequency sounds will pass around smaller obstructions like a ripple in a lake passing around a rock. So this multi-colored, vaguely conical fountain issues from the trumpet into the world and cascades off music stand, floor, the body of the trombone player in front in a spray of color and complexity.

I was placing this mic over a drum set, and it was like navigating a field of multi-colored spheroids. I knew the drummer, and I knew I didn't want too much ride cymbal in the mic, so I tried to move the mic past the sphere of sound where the ride was loudest, and point the mic away from ride and hi-hat. The snare has a character that changes depending on your angle to the head and how close to the rim you are positioned. The toms, also, have a booming body that emanates from the sides and bottom, but a crisp attack coming off the top. A difference of a few inches in the placement of the mic, or an angle of a few degrees, changes the quality of the sound.

And mics are the same way, but in a sort of reverse. Your workhorse mic is a cardioid; that means the sensitivity pattern is roughly heart-shaped (but three dimensional). The mic "hears" loudest directly in front, with the sound level tapering smoothly as you angle more and more away from dead on, until directly behind the mic it effectively drops to zero.

The mic also has a color. It hears different frequencies more efficiently. The old workhorse the SM57, for instance, has a pronounced "presence" peak; using this mic is the equivalent of taking the sliders around 8 kHz on a graphic equalizer and moving them up 6 dB. What this mic "hears" is bright, in-your-face, with a bit of sizzle. The SM58 (using the same capsule but a different filter design) has a gentler rise closer to 4-6 kHz.

Except this also interacts with the pattern. No cardioid "hears" all frequencies evenly. Like the trumpet, it is more sensitive to the high frequencies that are right in front of it; things to one side of the mic will sound "duller," with more low-frequency content.

And like the trumpet, it has spikes; little lobes out at all angles including behind the microphone where it hears certain frequencies strongly.

Even more, because of the design of the mic capsule, low frequency sound interacts in a very funny way. When you are close to the source of a low frequency sound, the pressure difference between the front and back of the capsule takes over from the pressure wave that ordinarily drives the capsule. Within a few inches of a sound source, frequencies below 100 Hz are boosted sharply in something known as "bass tip-up" (or more technically, "proximity effect.")

Moving from the small scale of the mic capsule to the large scale of the performance space, the entire room can be treated as a musical instrument. Just as every woodwind generates sound through a column of air within which travel pressure waves that are harmonic multiples of the length of the tube, a room itself has compression waves bouncing from wall to wall; standing waves at every harmonic multiple of the distance between the walls.

As the air temperature and humidity change, these characteristic frequencies change. And depending on materials in the walls, placement of walls in relationship to each other, and of course sound sources within the space, some of these modes may be driven strongly. Just as a trumpet player concentrates to make the column of air in the trumpet vibrate strongly at the desired note, the air in the room itself will begin to generate a strong tone.

All of these multiple and intersecting pressure waves interact, of course. Depending on frequency (worse, on the harmonic relationship!) and phase, they may constructively interact to make one frequency stand out, or destructively interact to make another quieter.

Take the simple case of a singer in front of a music stand. Her voice enters the microphone, but it also bounces off the stand and arrives out of phase at the same microphone. If the mic "sees" the stand too well, the two signals will mix, and some frequencies will be in phase and boosted, and others out of phase and squashed. The resulting sound is as if you took your graphic equalizer and randomly moved some of the sliders to the top and some to the bottom -- hence the descriptive name of this effect, "comb filtering."

To combat this, you tip the stand, or you angle the mic. If the side of the mic "sees" the stand, not only is the level lower -- and the destructive interaction less -- but the lack of sensitivity to higher-frequency sounds at the side of the mic means the comb filtering mostly happens out of the frequency range of her voice.

In other cases, you might chose to move the mic very close to an obstruction -- as I sometimes have by pointing a mic directly at the lid of a grand piano from under an inch away. The length of the pressure wave at 8 kHz is 4 centimeters; that means if the mic capsule is only two centimeters away from the reflective surface, destructive phase interference is well above the fundamental and up where it doesn't really hurt the sound.

So this is how I see the room within which I try to work the art and science (more art than science, and more guesswork and compromise than art!) of sound design. Loudspeakers beaming a range of frequencies into the room, with boomy sound issuing off the back end and gliding around obstructions and through doorways to fill the space; high frequency sounds squirted out in a line of sight, to hit turbulent air and be diffused, mid-range sounds to strike a wall and bounce back setting up a powerful room node at thirty cents flat of Concert A.

Into this room, voices and acoustic instruments also send their sounds, to reverberate and combine and mix. And microphones attempt to navigate the mess, with what they hear being selectively tailored then sent back into the confusion in hopes of enhancing certain needed elements in frequency or time domain.

It is a constant battle between the different needs of relative volume, placement (the sense of where a sound is coming from), frequency content, and intelligibility (the quality of vocal material that makes it possible for human ears to extract understandable words from speech or lyrics). In this mass of compromises, it might be necessary to make an upright bass too "loud" because the indirect sound is too low and too late to give the defined rhythm necessary to support the musical material -- so you mic it just to pop the crisp sound of the attack and define that all-important beat. Or you might end up putting it too loud because the dancers need to hear it to keep in time!

A final thought. The microphone has a huge disadvantage over the human ear, a disadvantage that is at the heart of the difficulty in making a mic'd instrument sound like a "live" instrument. And that is that the mic is in one place. There's only one of it, and it is fixed in position. When we are in an acoustic space ourselves, our ears frequently find themselves in a room node or some other place where frequencies are destructively combined. But we move. Every tiny unconscious motion of our heads shifts the position slightly, and changes the combination of waves mixing and entering at the ear. And we have two ears, as well, each hearing something different, and each making nice little calculations about distance and position that also help us to unconsciously compensate. We sum over time, and we compensate through years of instinct, so a live trumpet in a live space will usually sound like a trumpet. It is the microphone, which lacks any of this subjective adjustment and can only listen objectively, that finds in each set-up, each different night of performance, each note of each tune, some different and incomplete picture of the complete sound.

Tuesday, December 28, 2010


With the last microphone order I put in, I added a clearance-priced soprano uke "set" as an impulse buy. I have been thinking about adding a stringed instrument to my collection, but hadn't been actively looking (okay...sometimes I dreamed of an erhu...)

And I LOVE the instrument. Not so much the model I just picked out, although after sanding down the nut and letting the strings settle in it became quite playable (I've got some Aquila strings getting shipped from Hawaii...those should improve the intonation and tone.)

I'm already thinking of moving up to a nice Lanikai. But, really, I need to keep at practicing. I'm still working on getting the changes smoothly on the first strummed song I'm learning, and I'm even further from getting through the full song without a stumble on any of the finger-picking melodies I've been working on.

Discovered several sites for tablature, foremost being Ukulele Hunt (for just plain being a fun site) and Tropical Storm Hawaii (for being a giant clearing house of tabs). And I discovered reading tabs was easier than reading sheet music -- I almost sight-read through the first tab I tried (a single-note version of the theme from MASH).

So the last several days, since the UPS guy came by, I've been reading about ukulele, listening to ukulele, practicing ukulele. The lighter strings are a bonus to me over guitar; even with carpentry and rock climbing to toughen them my fingers refuse to callus properly. The small size of the soprano uke is a bit of a challenge, but then, I also play sopranino recorder (and I've tried a garklein, which I can actually finger with difficulty).

I can't express how much I love this instrument. The tone is charming and wonderful, with a real naturalness, a wide expressive range, and all those humanizing bits of noise (clicks and scratches and squeaks) that get left out of keyboard synthesizer playing. The instrument is flexible and challenging, with all manner of different tone production and harmonizing and musically useful ways you can use it. And it's extremely portable (and cheap, too, so you don't have to be scared toting it around with you.)

Me being the way I am, although I'm still stumbling on the F to Bb change, I'm already trying out hammer-ons, tremolo, the "ras" strum borrowed from Flamenco. But then I think that's a good way to learn; always have at least one thing you can barely do at all, and as you get it, it brings back technique you can use to improve the things you merely do badly.

Since I do have some improvisational background, I'm not restricted to following the tabs I find as written. I am freely changing to alternate fingering when it makes it easier for my current ability to handle chord changes...but also adding in flourishes when those fall within what I can do that I find musically useful.

Pretty soon I'll need to stop practicing all day, and get back to the paying work. But for the moment...!

Monday, December 27, 2010

The Basics of Sound Design II

Let's recap.

Among the tasks of the Sound Designer, the most time-consuming creative task is finding and/or creating the sound effects and transitional cues that will accompany, enhance, and frame the action of the play.

How much sound can do, and how it will do it, depends on the script, the particulars of the production, and how much you can talk the director into it (!)

When we talk about technology, the central question is one of reproduction. How are the sounds to be cued, created, and sent out into the space? A useful distinction divides the possible sounds into three broad groups: the first is practical sounds; crash boxes, starter pistols, percussion players, other mechanical devices which make sound. The second is pre-recorded sounds which are stored on electronic media and played back. The last are hybrids; cues which may be implemented electronically, but are triggered by something other than a traditional "called" cue.

Working from the inside out, the traditional "tape" cue is on media like CD or hard disc (and, at a pinch, iPod). Each individual sound effect or operator action is given a cue letter, and those letters are placed in the Stage Manager's book. When the correct moment comes up in the play, the Stage Manager will "call" the cue over headset; "Sound Cue H, Go!" The operator then pushes whatever button makes a sound play (or fade, or stop, or whatever is called for by that cue).

In a typical theater you have a deck or mixing board connected to several amps, each routed to various speaker options. You can thus send different sound cues at different volumes to different speakers, changing their character; the pre-show music might go out the big speakers, for instance, but the Victrola cue out a speaker hidden inside the prop itself.

In many theaters this is still CD players connected to a mixing board, but increasingly computer-based systems are taking over. The system I use now is a Mac laptop running QLab, shareware sound-playback software; this in turn connects to an eight-output FireWire audio interface, allowing me to program which set of speakers each cue is sent to -- and even change this during playback with another cue, so that complex surround-sound effects can be programmed in.

The other advantage is that it does not require technical skills on the part of the operator; they have only a single mouse-click to worry about, whether it is as simple as playing a single gunshot or as complex as fading the pre-show music, starting an Intro, and cross-fading the Intro into an on-stage Victrola that continues to "play" the same piece of music.

Mechanical effects are still alive in theater. Door slams and gun shots almost always sound better if performed live. For the latter, particularly if it is on stage and visible; tape sounds never quite coordinate properly with the firing of a prop gun!

Sometimes the mechanical sounds will be a design decision. For Mystery of Edwin Drood, for instance, an entire second orchestra of wind machines, coconuts, slap sticks and other Victorian-age mechanical sound effects is called for, performed in full sight of the audience.

A similar solution is to have the percussion player perform certain effects. The reasoning is that if they happen in a specific musical place, a drummer or keyboard player is better placed to get them on the beat than a distant sound operator who is in turn beholden to a Stage Manager. In addition, the stylized nature of percussion toys, as opposed to recorded effects, often lends itself better to the feel of certain productions.

I mentioned in the previous entry the use of a live, and ear-splitting, factory whistle in Sweeney Todd. Actually, though, the last time I did the show, the live and very real air-powered whistle wasn't ear-shattering enough, so we amplified it -- making it a kind of hybrid sound.

In the category of hybrid effects are all sounds that include some electronic component, or pass through the sound system, but are not explicitly cued as a Stage Manager called cue. A live backstage microphone for the other end of a telephone call, for instance.

I find that doorbells are best when the actor at the door presses the bell themselves. The simple form of this is, well, a doorbell. But the usual electric door bells are often not loud enough, and not clear enough, to work on stage. So you amplify it, or you use a pre-recorded cue that is triggered by the actor; they still press a button to ring the bell, but the resulting sound is different.

This is another place where QLab works so well for me; since I can trigger a cue via MIDI, I can control cues through any device that can output a MIDI event. And since I built my Arduino-based trigger-to-MIDI device, I'm able to create those events with something as simple as a doorbell button, or as complex as a proximity sensor.
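For anyone curious what those MIDI events actually look like on the wire: a Note On is just three bytes. Here is a minimal sketch in Python (the function name and the note assignment are mine, purely for illustration; the byte format itself is straight from the MIDI specification):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build the 3-byte MIDI Note On message: status 0x90+channel, note, velocity."""
    if not (0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127):
        raise ValueError("channel must be 0-15; note and velocity 0-127")
    return bytes([0x90 | channel, note, velocity])

# The doorbell button might be wired to middle C (note 60) on channel 1:
doorbell = note_on(0, 60, 127)
```

Since each trigger in the building can be assigned its own note number, the playback software can tell the doorbell from the prop gun from the proximity sensor, all arriving on one cable.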

New tools are opening up what used to be the closed boxes of "cues" or "sounds," allowing a designer to make changes in real time based on actions on stage. As a simple example of this, QLab will play as many simultaneous cues as your hardware can manage. Instead of being forced to think one sound = one event = one cue = one CD track, you can have entire sequences of sound built up from different simultaneous events. You can also indulge in more random-access sounds, instead of having all cues locked in the order of a CD playlist.

On a recent show, I had a train cue which combined the sound of the engine and track noise (one sound cue to start, one sound cue to fade down) with a hot-button cue to play the whistle. I could manually add in as many whistles as I wanted, whenever they "felt" right in the action, during the performance.

(We had an even more elaborate train during a later show; a selection of "chuff" sounds were assigned to a MIDI keyboard and the Music Director was able to perform a simplified train IN TEMPO over a musical number!)

That's the basics of reproduction, at least as far as this article will cover them. The next important bit of technology is paperwork.

In the previous article I mentioned the Show Book. This is a three-ring binder that contains a marked-up master script with all cues, as well as all the other paperwork you need to refer to in making the show happen.

In this paperwork is your Cue Sheet. This gives a designation for the cue, description, page number, notes -- and on older playback systems, details such as volume, speaker assignments, source, processing.

These days, most volume and assignment is programmed into the playback itself (a computer), and the processing is similarly built into the cue.

I should note; my Cue Sheets also indicate non-called cues, like this;

D Victrola (spot) pg. 24 ///VISUAL Mary turns crank, out with Cue E
N/A gunshot (live) pg. 26
E Victrola Explodes (spot) //IMMEDIATELY on gunshot!

The Cue Sheet is your communication and discussion sheet with the Director and Stage Manager; you give them copies to keep them informed of your plans, and date them so updates can be kept in order.

(Because of the ongoing nature of this dialog, my early cue sheets contain no information about playback, but copious notes on the concept of the cue; "D (Victrola) pg. 24; 'Dance of the Sugar Plum Fairies,' processed to sound like old-time recording -- I don't think the Al Jolson will work here." As the show gets closer and the cues begin to appear in rehearsal, I cut out the long descriptions and the side discussion.)

The next piece of important paper is internal; it is your pull list. This is where you figure out what the actual sounds are (as opposed to the sound EFFECTS, which are built up of multiple sounds), and start scribbling ideas about where to find them. My word for these, borrowed from music-sampler terminology, is "partials."

The Workstation. These days, it's all on computer. The early sound designs I assembled on reel-to-reel, dubbing from tape to tape, from record to tape, sometimes adding layers from an old sampler workstation as well. Then came the computer -- and almost instantly, DAW software; software that allowed you to do multi-track editing.

My primary composition tool now is Cubase, music sequencing software. With the plug-in architecture, the digital audio processing tools, and the multi-track editing, not to mention the seamless integration of MIDI layers, it is the tool in which I create all but the simplest effects.

But the first step is collection. I collect voraciously. I don't just pick "A" horse. I go through my libraries and pull every horse whinny that sounds even close to what I'm looking for. I import them all to the laptop's hard disc, establish a folder and file structure for the show, and then audition them over and over.

This is where the pull list comes in so handy. You pull from libraries, you purchase, you set up Voice-Over recording sessions, you set up foley sessions and set dates for location recording. As you collect material, you mark it off the list.

The great advantage of the laptop is that I can test sounds in the actual space. I can connect to the sound system and play them through the actual speakers the final cue will be played on. I can listen to what it will sound like to the audience. I can try them out live in early rehearsals, and I can try out ideas on the Director. I can also make changes very, very quickly.

For some shows, I am able to try out different ideas during rehearsal. Play one sound, and when it doesn't work, or the director doesn't like it, try a different one. This way I can work in ensemble to narrow down to the sound that best fits the actual production, rather than having to work at home and hope the final result will fit in.

(One show I came on at the last minute. I actually worked out of iTunes, taking advantage of the large library of period music I'd stuck on my laptop's hard drive -- throwing out idea after idea right there during the rehearsal, sitting at the tech table with the director. In a couple of days, we had roughed out the music cuts that would work for the show.)

One of the related tools the multi-track editor gives me is the ability to use a reference track. If there is a complex effect or sequence of sounds or musical underscore that has to go under a specific section of dialog, I can record that very dialog in the theater, with the actor's own in-scene timing, and import that into Cubase.

I did this for the final "Movie" scene of Play It Again, Sam, for instance. I recorded the scene between "Bogart" and "The Girl," and brought it into Cubase along with several tracks of music from a re-recording of Adolph Deutsch's original score for The Maltese Falcon. Then I carefully assembled a cross-fade of various bits of the music to follow the contours of the mini-story being played out. The actor in this case was grand; he stayed within a couple of seconds of the original tempo throughout the entire run.

(In many cases you cannot trust the actor to remain that on-tempo. If you want to cut sound or music this close -- the technical word for it is "Mickey-Mousing" -- you need to build in devices that allow either the actor to follow, or the operator to adjust. I'll say more on this when I talk about composing for the stage.)

Again, the great advantage of doing this on laptop is that you can tinker with the cue over live rehearsals, and even into Tech Week -- the multi-track sequencer, with all the built-in EQ and reverb and so forth, plays back with a single button press, so you can take the complex cue yourself until it is finally nailed down -- at which time you render, stick the result on a thumb drive, and drop it into the waiting slot in the programmed show.

I do small amounts of tinkering with Audacity, and a few other processing tools, but Cubase and a host of VST plug-ins cover most of the processing I need. Increasingly, I spend time not just picking the right sound, but processing the sound to bring out its character. I find I need not just "the sound of a horse," but a one-second sound bite that screams "Horse!" to anyone who hears it. Placement helps. Picking the right samples helps. But sitting in the theater dialing the EQ to bring out the character of the sound helps a great deal.

In many cases a little more flavoring is going on. Say I've got a voice on a phone. On the hard disc is the full session with the voice talent. First task is to identify, trim, and label the takes. Then audition them. Then, when I have the take that works, I bring it into Cubase.

Process with compressor and EQ to mimic the small dynamic and frequency range of a telephone. Add noise to taste, from guitar amp simulators. Go into the library of sound effects for dial noise, hang-up noise, and similar, and make a multi-track cue that combines all of these into a little story about a phone call.
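For the curious, the band-limiting half of that telephone trick can be sketched in a few lines of numpy. In practice I do it with EQ plug-ins inside Cubase; this toy version, with names of my own invention, just zeroes every frequency outside the roughly 300-3400 Hz band a phone line passes:

```python
import numpy as np

def telephone_band(signal: np.ndarray, sr: int, lo=300.0, hi=3400.0) -> np.ndarray:
    """Crude telephone EQ: zero all FFT bins outside the voice band."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A test tone: a 1 kHz "voice" component plus 100 Hz rumble the phone should strip.
sr = 8000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 100 * t)
filtered = telephone_band(voice, sr)
```

A real phone also compresses dynamics and adds line noise, which is why the recipe above layers a compressor and amp-simulator grit on top of the EQ.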

Often the vocal recording will need minor surgery. Maybe he got suddenly louder at the end. Compression won't fix it; you have to go in manually and draw a volume envelope or record a fader move. Perhaps she popped some plosives into the mic. Zoom way in to sample resolution, and trim the clips down. Maybe there are long pauses, or extra "ums" and "ehs" in there -- trim up, perhaps going multi-track so you can work trial and error instead of committing to a specific edit. Perhaps he even missed a word; with luck, you can cut in one, or even a single syllable, from a different take.
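That volume-envelope fix is nothing mystical; it is breakpoint gain automation, linearly interpolated between the points you draw. A toy numpy sketch (the function name is mine, not any DAW's API):

```python
import numpy as np

def ride_fader(samples: np.ndarray, breakpoints) -> np.ndarray:
    """Apply a drawn gain envelope: breakpoints are (sample_index, gain)
    pairs, linearly interpolated across the whole clip."""
    xs, gains = zip(*breakpoints)
    envelope = np.interp(np.arange(len(samples)), xs, gains)
    return samples * envelope

# Tame a take that got suddenly louder at the end: full gain until sample 600,
# then ramp down to half gain by the final sample.
take = np.ones(1000)
tamed = ride_fader(take, [(0, 1.0), (600, 1.0), (999, 0.5)])
```

The same three-line idea covers fade-ins, fade-outs, and the "run it lower and lower" notes a nervous director hands you during tech.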

Within a program like Cubase, everything is run-time. You can make a trim, then un-do it. You can add reverb, then dial it down when it doesn't sound right in the theater. Unlike the old tape days, you can always go back, right back to the beginning if need be.

And there's another thing you can do. With certain kinds of complex sound effects, a number of different things might want to happen in some relationship with the action. Say a car passes right to left. Say a series of explosions occur. Say a nervous horse is pacing and whinnying during a dialog sequence. The trick is that instead of mousing each sound one by one in off-line mode, and playing them back to see if you placed them right, you assign them to a sampler, and then you perform the sequence on a MIDI keyboard.

This is incredibly powerful. The last few train effects I did, I created the playable train and literally played a leaving-the-station and arriving-at-the-station on the keyboard. A second and third recording pass for whistle and bell, then into the editor to add track noise. And then overall processing to add a little doppler shift, and overall volume and pan curves.
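The overall pan curve on a pass-by is worth sketching. A constant-power (sine/cosine) pan law keeps the apparent loudness steady while the image moves; written so the position can change per sample, a slow sweep becomes the train crossing the stage. This is a generic sketch, not the pan law of any particular editor:

```python
import numpy as np

def pan_stereo(mono: np.ndarray, position) -> np.ndarray:
    """Constant-power pan. position: 0.0 = hard left, 1.0 = hard right;
    accepts a scalar, or a per-sample array for a moving source."""
    theta = np.asarray(position) * (np.pi / 2)
    return np.stack([mono * np.cos(theta), mono * np.sin(theta)])

# Sweep a one-second sound from hard left to hard right across the field.
sr = 1000
mono = np.ones(sr)
sweep = pan_stereo(mono, np.linspace(0.0, 1.0, sr))
```

The sine/cosine curves sum to constant power at every position, which is why the source doesn't seem to get quieter as it crosses the center.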

Real-time performance is a powerful compositional tool.

The last of the basic tools is of course the most important. And that is your own ears. You need to be listening constantly. What do things really sound like? What are useful sounds happening around you that you could capture and use? Each show will and should draw upon your ability to research and absorb and understand the distinctions between types of trains, between what different winds are "saying" to you emotionally, what is and isn't in period, or what will or won't seem in period to the audience.

As in the sciences, you need to develop a deep mistrust of your own memory and senses. Don't think you know what something sounds like. Know what it sounds like; go back and listen, trying to clear your mind and hear what is really there. Then take your knowledge to construct something artificial that will fool the audience the same way you were once fooled.

And that's all I have to say tonight.

Sunday, December 26, 2010

Everything I Need to Know About Life I Learned from "High School Musical"

You may have heard something of this Disney product. It's the 800-pound Gorilla of youth theater these days. Every theater program ends up doing it -- even though the directors, music directors, and all the experienced theater students hate the beast, the younger kids want nothing else and cry and pull out of the classes until the program caves in and lets them do that show. It's Disney Crack for preteens who have grown out of Disney Princesses.

Anyhow, in the spirit of several popular books, here's the list of what I've learned about the theater and about the world from Disney's High School Musical.

10: Bring your leading man (or leading lady) to auditions. The gang at the table isn't there to work up a cast; they want you to come already paired up.

9: It's required to sing a song from the show at open auditions. Bad luck for you if you don't know it already. On the other hand, for callbacks you can sing anything you like. It's fine to bring your own backing tracks to auditions, too. The supplied pianist is just for the poor people who didn't think of getting their friend to make something up in MIDI.

8: We don't do scenes at callbacks or auditions either. What, did you think this was theater or something? Put those monologues aside -- just sing.

7: Voice training is for sissies. It's okay to practice your basketball game, though.

6: In fact, there's nothing in theater that actually requires training. Raw talent can substitute. A good face and a song in your heart trumps four years of studying acting, memorizing scenes, learning dance, training your voice, etc.

5: Actually, nothing requires training. Okay, maybe basketball. No need to actually study anything. The people who are good at a subject are boring. Good people, the right kind of people, just have to have heart. Then they'll do better at something than those who have made it their life.

4: Nerds are weird people who know all sorts of science and history, AND like to read all the time. No, there's no connection. The books are just a nerd thing, like the glasses with the tape on them. Nerds come into the world already knowing the speeches of Churchill, the poetry of Dickinson, the length of all the major American rivers and the entire Periodic Table. No, they don't specialize, either.

3: Clubs, in fact, are for people who already know the subject. No-one ever comes to high school to learn something new. No-one joins a Chess Club wanting to learn chess. No, everything is status quo. Clubs, and classes, are just handy boxes for people who are already good at a subject (but see above!)

2: But what did I say about the specific skills of the theater craft? There aren't any. Anyone with a good face and an open heart will make a great actor. All the other departments; lead stitchers, charge artists, sound engineers, directors, scenic designers, electricians, musicians, choreographers, all wanted to be actors but failed. So the fresh-faced kid off the street could do a better job designing the lighting plot of a Broadway Show than the embittered forty-year-old professional designer, too -- it's even easier than being the leading man (or leading lady).

1: Never give up your dreams. In fact, never give up anything. Sacrifices are for wimps. Just procrastinate and sigh and cry and eventually the world will figure out a way for you to have your cake and eat it, too. There is never a need to make a choice, especially a tough choice. Just do everything you want to do, and make the rest of the world get out of your way.

The Basics of Sound Design

This is going to be a brief (I hope!) essay on the general process from being hired as a Sound Designer to Opening Night.

In previous essays, I've said that a Sound Designer can and usually should be worried about the total sound environment of the play; about how the orchestra sounds, about vocal reinforcement of the actors, and even about support services like backstage monitor, paging, and communications systems, a lobby feed for late-comers, recording the show for the theater's archives, and a hearing-enhancement system for hearing-impaired patrons. This essay, however, will concentrate on just the core: sound effects and the basics of musical cues.

It always starts with the script. In the movie business, you've got the original script, and then there's the shooting script. In the comic book world, outline, then pages. In the theater there is just The Script. Everyone gets the same cuts marked, and prints it out so the page numbers are identical. You may read the play in a booklet, or off a pdf, but when you go into rehearsals you have a printed script on 8 1/2 x 11" paper and the same page numbers as everyone else.

The script is the click track, the SMPTE code, the index, of the entire production. Every event that happens over the course of the performance will relate directly back to it; "And we dim the lights on 'MCKINNLEY: And so it goes,' near the top of Page Four."

(When I'm working a musical, I often request a score as well. This is almost always a reduction of the orchestration for rehearsal piano, with lots of little notes about what the actual orchestration is at that moment, and includes all the vocal material arranged in staffs above the piano part. The location system is rehearsal numbers, followed by bar numbers. A good musical will have rehearsal numbers at every major change of singers; #13 when THE CAT begins "It's Possible," #13-A when JOJO and the FISH CHORUS join in.)

This is what the professionals do; stick the script in a large three-hole binder, put in dividers and clip in notes, minutes from meetings, contact lists, cast lists, schedules, and every other bit of paperwork from the show that will fit comfortably in a single binder. This becomes, then, your "Show Book." Stick a lobby card in the front of the binder, because if you are designing for a living you will be doing more than one show at the same time -- with one Show Book for each production.

You take that Show Book to every meeting, every rehearsal. That copy of the script is your master copy; you flag the pages for every cue, you highlight, underline, pencil -- whatever you have to do to indicate what is happening, on the page and on the WORD of dialog or MOMENT of action when it happens.

So you read the script. Several times. The first just to experience it and to get general impressions. How does it flow? Is it fast, slow, funny, sad? The second time through, you start paying attention to how sound is used and how it might be used.

It really depends on the show and the script. Some scripts will indicate "bell rings," often in italics like the blocking notes. Shakespeare, of course, says practically nothing ("...dies.") Some will go to the other extreme; Oklahoma! indicates EXACTLY what animals are heard prior to "Oh, What a Beautiful Morning," including the notation; "...a dog barks two times then stops, contented" ?!

But most scripts won't list all of even the bread-and-butter sounds; the doorbells and phone rings. You have to go over the script carefully, picking up where a phone is described in dialog, "Aren't you going to answer that, Phil?" or where it seems obvious there should have been a doorbell, "Come on in, Dolly! And is this your new husband?"

It's not hard-and-fast, but I break down sounds into roughly three groups; Spot Effects, Background Ambiance, and Transitional.

Within the spot effects falls a sub-group I call "Rehearsal" sounds; they are sounds the actors need to directly react to. If you attend an early rehearsal, you'll see exchanges like this; (John) "Is Rob going to be okay with Sue leaving him..." (Assistant Director) "Bang!" (John) "Christ! That was a gunshot! Rob? Rob?"

Other spot effects just "happen." They are part of the reality of the stage; someone is running a vacuum cleaner, or flushing a toilet, but no action or dialog depends on them. Thus, they can be introduced later in the rehearsal process.

If the spot sound is, say, a tune coming from a radio or phonograph the actors have to dance to, to sing along with, or otherwise interact with, the real thing needs to be in rehearsal no later than two weeks prior to Tech. The actors will need time to work with it, to adjust their performance to it.

There's no hard-and-fast rule on whether a sound needs to be in rehearsal, and how soon. Underscore music, probably. Ambient sound effects, probably not. When in doubt, ask the director.

So you read the script, and started to form ideas. Then you met the director and got their impressions. And you saw what the rest of the design team was doing. Are they going for realistic period piece, or highly stylized? What period exactly, and how precise or generic is the date and location? What is the energy, the mood, the brightness...all these qualities that make up the "feel" and "style" of a play.

So you come up with ways in which sound will enhance the production. And you sit down with the director. And here comes a potential show-stopper (for you, not for the show.)

Many directors don't "think" sound. They go through blocking, readings, rehearsal, and so forth, and in their minds a phone might ring now and then, but that's it for sound. Sound is added on, in their minds. It's like chocolate sprinkles on a donut.

For these directors, you will have a hard fight suggesting that there should be a little birdsong in the morning, some crickets at night. And they won't grasp at all your desire to make the phone ring have an eerily vocal quality to it to telegraph the news of Victoria's ghostly re-appearance, or the subwoofer rumble at the threshold of hearing, making the approach of the soldiers even more frightening.

Those are the directors that will get talked into "letting you try" a sound, which then through rehearsal (without regards to what it would sound like in final performance with a full house absorbing sound, and the actors projecting at full performance volume), gets run lower and lower and cut shorter and shorter until it no longer makes any sense as a cue.

Fortunately, more directors are seeing that sound can enhance the production in the same ways that creative use of lighting can. However, lighting has an advantage over us sound designers, in that directors respect it without thinking they understand it. They will give lighting designers room to achieve effects without overly questioning how it is supposed to work or whether it is "believable." They rarely give sound designers the same benefit.

Again I will repeat something from the previous essay; everyone in theater THINKS they know two things: their own job, and sound.

Spot Effects, as I said, are basically those things that happen at a specific instant, and usually relate pretty directly to the action on stage. The prototypical spot is the telephone ring. Another is, say, the sound of a car pulling up outside. The actors don't react until the doorbell rings, so the timing is up to you. But you still place that car effect at a specific moment in the play to make it work within the rest of the action and the flow of the story.

Ambiances are sounds that are ongoing through a scene. They are there to set the reality of a place, a time, or set a mood. Birdsong, traffic, the sound of the river, the engines of the ship. Often, the best way to use ambiances is to start them louder and more "active" (more elements, more rapid changes) when the scene begins, and then take them down to a softer level. This mimics how human perception works; when there is a constant background sound you tend to tune it out. If you open with a detailed picture of the river lapping against the quay and the slap of rigging on the ships moored nearby, you can after a minute or so bring it down to just the soft rushing of the river and not compete with the dialog of the ongoing scene.

Combining this with spot effects is the best of both worlds; add a spot effect of river gulls and play that in the long pause as Sir Thomas thinks about his next words. So you've brought the river back into the minds of the audience, but you haven't distracted them from the important dialog.

Ambiances do not have to be realistic. For my design of Agamemnon, in keeping with the director's image of a post-apocalyptic world, and the producer's desire to have energy and movement over the sometimes dialog-heavy scenes, I had ambiances of some sort of brutal industrial ruin of a landscape; wind, the chatter of a Geiger counter, metallic clicks and clanks, raspy synthesizer pads, and so forth.

Transitional Effects are those that happen outside of the action; scene changes, entr'actes, and so forth. This can also include the use of pre-show material.

When I did Mister Roberts, I had a mental picture of an onion in regards to the various sounds on that show. The outer layer was the reality of the theater and the audience. We had some lobby displays of period material, but they were museum-like; our perspective was that of the time and place the audience existed in. Within the auditorium, I was in period, but not specific to a time or place; my pre-show selection was music and vocal material presented as if we were listening to short-wave broadcasts. When the lights went up on stage, all the sounds there were within the reality of the ship; they were "Source," also known as diegetic; if we heard music, it was because "Sparks" had turned on the short-wave and hooked it to the ship's PA.

To my mind, then, what happens between scenes is always slightly "outside" the world and the moment of the play. That is why in even the most realistic play, music can still appear in the transitions between scenes.

Often, the transitional material is music, but it can be sound just as easily. In How To Succeed In Business one scene transition is a phone call from Mr. Biggley to Personnel. In Sweeney Todd, the first thing heard is a factory steam whistle.

So you've marked up your script, talked (fought?) with your director, and worked up what you think the sounds should be. Now the best planning tool is a cue sheet. Stick some letters on them, so you don't have to be going "No, the second telephone ring during scene two" but can just refer to "Cue C."

After several years of experimentation I've developed a scheme of letters in groups to organize cues; if a train whistle is heard three times during a scene, and a creek is trickling away underneath the entire time, I'll identify them as;

D Creek (AMB) pg. 23
E1 Train Whistle (SPOT) pg. 23
E2 " " pg. 24
E3 " " pg. 25
F Gunshot (SPOT) pg. 30

In the final cue sheet these will have target lines, when to fade them, and so forth as well.

A different list is your master "pull" list. In that list, the train whistle only appears once; you will be simply playing it three times. So that list looks more like:

Train Crossing (construct from partials):
    track noise (Vintage? 101 Trains?)
    bell (play from GPO)
    (train whistle)
Train Whistle (Vintage, track 15, process for distant sound)
Normal Phone (use the one from "Business")
"Crazy" Phone ("Business" phone, ring mod, maybe combine with scream?)

Take note of the "Train Crossing" cue above. In some cases, the cue will be the same as the pull. In others, the final cue as played will be a combination of other sounds. Instead of looking for the perfect "Background sounds of Civil War Battle" you look for cannon, shouts, rifles, bugles, horses, etc. and then slowly build a multi-track cue that combines them in an interesting way.

Assembling a sound from partials gives you much more control over period, specificity, distance, business. When you've assembled a cue from bits and pieces, you can start it thick and then, once it is established, thin it out by taking out elements. And because you are constructing it like a song, you can make it any arbitrary length without it sounding like it is looping.
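Under the hood, "building a multi-track cue" is just summing gain-scaled layers at chosen offsets -- which is exactly what the editor's track view does for you. A toy numpy version (all names hypothetical):

```python
import numpy as np

def mix_partials(layers, total_len: int) -> np.ndarray:
    """Sum (offset, gain, samples) layers into one cue buffer, like
    dropping clips onto tracks in a multi-track editor."""
    cue = np.zeros(total_len)
    for offset, gain, samples in layers:
        end = min(total_len, offset + len(samples))
        cue[offset:end] += gain * samples[:end - offset]
    return cue

# Battle ambiance from partials: cannon up front, rifles entering later, softer.
cannon = np.ones(400)
rifles = np.ones(300)
battle = mix_partials([(0, 1.0, cannon), (500, 0.4, rifles)], 1000)
```

Thinning the cue after it is established is then just a matter of dropping layers from the list, or riding their gains down.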

Some of the most fun cues are these combination sounds. Sometimes these will be what I call "Story-telling" cues. Recently I had to throw together the arrival of the General's Motorcade for a production of Kiss Me, Kate. As I constructed it mentally, the motorcade approaches from Stage Left; a car, with two motorcycles in front. One of the bikes shuts off before the car pulls in, but the other wheels around to find a better parking spot and idles for a bit before shutting down. Once the car stops, the chauffeur gets out, closes his door, then opens the door for the General. So that is the sequence those various elements happened in. The audience never saw any of it. The final cue was a single cue; a single "sound effect." But a little story was being told.

It is times like this that sound becomes another actor, truly. Up until quite recently, however, this was an actor who was not very flexible. He always read his lines exactly the same. Now technology is finally catching up to where sound can breathe with the performance; music cues can have a slightly faster tempo, or take a vamp or fermata; all sorts of things can evolve organically as the performance plays out. It takes, however, an operator with good musical ears!

So you've got the pull list. Some sounds come from libraries. After a while you'll have a few old favorites that just work so well you keep turning to them. Then there's the sounds you collected or built for a previous production that you can recycle. Then there's stuff you don't own. You can shop online these days -- Sounddogs dot com is a GREAT resource for royalty-free sound effects. And there's making or recording your own.

The technological options are always increasing. Back in the tape days you could do a lot to alter a sound to taste. These days, software is so easy and so powerful that the majority of sounds you use will have been edited and processed in some fashion. More on that in a moment, but the reason to bring it up now is that many sounds may be something else entirely, altered to fit.

As an example, in Seussical! an elephant-bird has to hatch from a large egg. I hard-boiled an egg, peeled it in front of a sensitive microphone, then changed the pitch to make it sound bigger and followed it with a processed bird tweet combined with a re-pitched elephant's trumpet.
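That re-pitching trick can be faked crudely in code by resampling: play the samples back slower and the sound drops in pitch (and stretches in time). Here's a toy sketch in plain Python, my own illustration and not what any particular audio editor actually does internally (real pitch-shifters preserve duration; this one doesn't):

```python
def repitch(samples, speed):
    """Resample a take by linear interpolation.

    speed < 1.0 plays the sound back slower, lowering the pitch and
    stretching the duration ("making it sound bigger"); speed > 1.0
    does the opposite. samples is a list of floats.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # Blend the two nearest samples for a smoother result.
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out
```

At a speed of 0.5, a one-second eggshell crackle becomes a two-second, octave-lower crunch.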

You have to develop a sound designer's ear. One part of it is splitting what other people might think of as a single "sound" into the elements that make up that sound. What is "the sound of" a motorboat? There's the engine, there's the exhaust burbling away under the water, there's water slapping against the hull in the bow wave, there's some creaking from the fiberglass hull. When you split up a sound like this, you can figure out how to assemble sounds you already have, or can record yourself, into the sound you are looking for. By changing the ratio of the elements you change the character. And by leaving out elements, focusing only on the most characteristic ones, you make the sound simpler and easier for the audience to identify.

You "sell" sounds the same way. Rain never sounds right as a single sound. Combine a couple of different rain effects; some rain falling on foliage, some water running in a gutter, a little distant rolling thunder. A gunshot never sounds Hollywood by itself; mix in a bit of a cannon at low volume to beef it up, add a whip for a slightly snappier attack, stick some reverb on it for ambiance. And a train rarely sounds like a train...until you add a train whistle.

The most frequent recording task you will have as a sound designer is voice-overs. Whether it is the voice of Florence Ziegfeld interrupting the action during The Will Rogers Follies or the voices heard over the intercom in a key scene of Moonlight and Magnolias, or the diary entries read by Miss Frank herself in the scene transitions of The Diary of Anne Frank, you will find yourself recording voice talent.

My preference is to have them live whenever possible. There is so much an actor on a microphone can do that pre-recording them is a shame. But in many plays, the vocal character never appears on stage. So it can be a great opportunity for an actor, or a friend of the theater, to come in for a single afternoon and still be part of the show. But you must involve the director, because even more than a sound effect, such a voice performance becomes another actor in the play.

And this essay is long enough for one day. Next, I'll detail the tools I am currently using, and how I use them.

Saturday, December 25, 2010

Six Simple Rules for getting a good voice-over

Whether you are recording voice-overs for theater or for a video game, there are a handful of basic techniques and tricks that can help you get a better sound from your voice actors.

1) Get an actor. A voice actor is preferable. Random friends and people off the street may have an interesting sound that catches your ear (I really want to record the guy that sold me my previous car -- he has this wonderful combination of Russian accent and street hustler smarm that makes you want to ask if he can smuggle a nuke for me out of the former Soviet Union.) But they don't have training; many of them will choke up on mic, few can do a cold reading off a script, and fewer still can repeat or adjust their vocal performance.

An actor has the chops to pick up the script, fall into character, and read, and do it again and again on cue. And a trained voice actor? They can do incredible things. I worked with a guy once, recording a radio announcer introducing a radio soap opera. I asked him, "Can you do the same thing, but with a little more Latin Lover and just a touch of condescension, and speed it up a little?" And he did!

An actor will also show up on time, prepared, and will stay for the scheduled length of the session. Friends have a tendency to show up late, and get quickly distracted. Amateur actors are just fine, and they are all over: in the schools, in community theater programs. It isn't hard to find them, and if you pick one up through your social network you might not even have to pay them.

2) Get a director. Just as an actor is a professional at saying the lines, a director is a professional at knowing how best to say the lines. A director will find the best drama in the lines. A director will inspire the actor, pushing them, working with them, responding to them. As you are huddled back by your mixer with your headphones on, the director becomes your point man, holding the talent's hand and keeping them focused.

If you are creating material for a specific play, having the director of that play in the room is practically mandatory. If the actor is in the show and pre-recording a line for some technical reason, the director needs to be there the same way she would be for any rehearsal; to make sure the character and performance stay within her intentions as a director. If the actor is not in the show, a key vocal performance still becomes a character on stage. The director, again, chooses to direct this unseen and pre-recorded character the same way she would direct any live actor.

3) Print the lines. Double-spaced, generous margins, no smaller than 10 point. An experienced actor will mark up their script with the necessary breaths, with accents they want, with the pronunciations required by you and the director. Use multiple sheets of paper as necessary so that each separate "shot" you mean to record is on a fresh sheet. Do this even if you end up with only six words on a single sheet of paper.

4) Make the talent comfortable. Give them time to warm up, make sure they have water, enough light, etc. Encourage them. Give them feedback; react to the performance, talk back to them. When the lines come in context (if they are alternating parts of a conversation) have someone read the lines they are responding to.

5) Physicality. If the voice-over is of a person sitting at a desk, then let the talent sit. Otherwise, encourage them to stand. It shows up in the voice! If you are trying to get the sound of someone in action, have them move. Have them act it out, within the limits of good vocal production and the needs of the microphone.

There is a great story from the conductor of the orchestral and choral material which was made available to designers of the Lord of the Rings series of games. He couldn't get the dwarf song to sound right until he thought of asking the chorus to rock back and forth from one foot to another. As he put it, it transformed the sound from an okay choral performance to the doleful working song of the dwarves laboring away their endless hours in their mines deep under the earth.

I once recorded a brief snippet of sailors pulling up the sails on a clipper and I had the actors join me in miming the pulling of a rope as we grunted out "heave-ho!" in unison.

This extends even to the choice of microphones. If you want a man on a telephone, give him a mic he can hold and move in close on. If you want the sound of a person shouting across a chasm, stick the mic six feet away and MAKE them shout to you. This is one reason I love my large-diaphragm condenser, and my fishpole boom; they let me move the microphone out of the actor's eye line and force them to speak to ME, not to the mic. It makes a world of difference in the sound.

But I recorded all the ship's announcements for "Mr. Roberts" on a vintage "lollipop" mic I found on eBay and re-wired. To get sound out of it you had to grab the mic in one hand, lean over it, and speak LOUD. It was the perfect sound; even better, all that handling noise came through and sold the idea of a ship's announcement system being used by the crewmen.

6) Environment. You don't need an acoustically perfect space, but you do want to control it. Most of the utility rooms and empty rehearsal halls they will give you will have too many reflective surfaces. All that slap-back will show up in the final recording; no matter how much you try to hide it, the sound of a voice shouting across the empty desert will still sound like a voice shouting across a room.

I've made more than a few recordings in theater lobbies, as bad as most of them are. My secret weapon is costume storage rooms; all those racks of clothing are almost as good as foam acoustic panels. A large cluttered space, like a scene shop, will do the same; all the old props and tool cabinets and bench tools break up those reflections and hide the nature of the room.

When the stars are right you can use the room's acoustics to your advantage. The best use I have made of this is simulation of space; I've recorded actors on the very stage we are using for the production. Played back through the house speakers, the re-creation of a phantom actor is uncanny. The human mind is very, very good at picking up clues about the shape and size and wall treatment of a room from the subtle reflections of sound. When you have that actual room, you can use this. Consider, if you want to record an actor in a car, sticking them in a car to make the recording!

I've recorded material where I've made use of deep space. For "Rosencrantz and Guildenstern" I scattered my voice actors across the recording space and had them shouting back and forth. The recording then had the natural differences in level and the spatial cues of the physical placement of the actors.

You notice I haven't said anything about gear. Personally, I currently use a couple of condenser mics, small and large diaphragm, and some specialty mics. I run them, these days, through a FireWire interface that has phantom power, and I record directly to hard disk on my Powerbook...usually using Audacity. When I have a chance I run several different mics at one time; in case there is a problem with one, or in hopes one of the alternates will have a sound I like better.

I've recorded plenty of stuff off my Sony Minidisc, though. With the noise floor of live theater, it is good enough, and it is much more portable.

In my pack are also a couple of tripod boom stands, shock mounts for the condensers, and a fishpole boom. I don't usually use pop filters...I probably should invest in one for my kit, though. Lately I've become very enamored of the sound quality and flexibility of recording from a fishpole instead of a tripod or hand-held. It gets you out of the actor's eyeline and gives them room to move around, and the high position controls breath pops and script noises.

I don't start and stop recording. I start recording the moment the talent gets the script, and I keep running until they leave, only stopping long enough to save the file.

Last things in the kit? Headphones, and if possible, a powered monitor. Although no one, including actors, likes how they sound on a mic, voice talent often asks anyway if they can hear "how it came out." Directors, when you have one, tend to demand it. So having a playback system on you is a nice gift for them and will let them finish off the voice session with a feeling of a job well accomplished.

And then your job continues: taking the raw recordings, identifying the good takes, listening for problems, taming the wild levels, ducking the errant noises, clipping the extra-long spaces, and all that other editing that makes a good vocal session into a professional-sounding performance.
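Two of those cleanup chores, taming levels and clipping dead air, are mechanical enough to sketch in code. These are hypothetical helpers of my own devising, a stand-in for what you'd normally do by ear in an editor, not anyone's actual tool:

```python
def trim_silence(samples, threshold=0.02):
    """Clip near-silent lead-in and tail off a take.

    samples is a list of floats in the range -1.0..1.0; anything
    with magnitude at or below `threshold` counts as silence.
    """
    loud = [i for i, v in enumerate(samples) if abs(v) > threshold]
    if not loud:
        return []
    return samples[loud[0]:loud[-1] + 1]

def normalize(samples, peak=0.9):
    """Scale a take so its loudest sample lands at `peak` of full scale."""
    top = max(abs(v) for v in samples)
    if top == 0:
        return list(samples)
    return [v * peak / top for v in samples]
```

Run trim first, then normalize, and a quiet take with ten seconds of script rustling on either end becomes something you can actually line up in a cue.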

Friday, December 24, 2010

A Freight Train's A' Coming

I am going to try to have a blog entry every day.  Some may be very long.  For a while, some are going to be historical, transferred over from a previous journal.  Like this one:

There's way too much going on right now. I'm getting back from work with about enough strength left to eat something, then fall into slumber. In the morning, check emails and it's off to the races again.

Well, today maybe I'll snatch a few minutes to talk about the freight train we theater folk call "tech."

A play is a complex mechanism. A musical is even more so. So many different elements need to come together, so many interlocking parts. But this is almost never a contemplative process, of slow discovery and experimentation.

Instead it's a fully-loaded freight train. The wheels start rolling the day the season is planned -- even before the scripts are selected. And from that moment on, that train is in motion, heading for that Opening Night. There will be 2,000 bodies in seats, paying ticket holders, at 8:00 Friday the 21st, and even Casey can't stop that runaway train.

So at every step through the process you are aware of that countdown. There is never quite enough time. Always, you juggle what you want in the show against what you have time for. And, always, you are guessing. Experience and paperwork help. But basically it comes down to a series of judgment calls, projecting how long an idea is going to take to implement and checking it against that iron-bound schedule.

It is, as I said, a complex engine. By the time you hit Tech Week that train is a juggernaut. There are so many different people and so many different processes that are also on this "just in time" arc of completion, if one department fails the whole thing comes apart like a house of cards.

If the set ain't ready on focus date, the lights can't get in. The painters can't complete their work. Rescheduling lights means someone else needs to be kicked off -- but the choreographer needs those last four dance rehearsals promised to her, the orchestra needs that sit-sing, and the running crew is going to be there to start moving scenery on Tech Sunday, regardless of whether lights are ready or not.

If someone or something stumbles really badly the wheels come off the contraption completely. It is a place where only the really good theater people shine; the people who can brainstorm a new solution that gets the thing back on schedule (while still preserving some semblance of artistic integrity.)

Only very early on can you jump off the train, jog along your own footpath of discovery, and swing back aboard with whatever wildflowers of inspiration you may have picked. As you get closer to Tech Sunday, the speed of the train ensures that you stay with it, clinging on the sides, as it charges towards Opening Night like the Silver Streak heading for the Chicago and North Western Railway Terminal.

And that is perhaps the saddest aspect of the whole process.

At every stage (as it is with every project) you have to pick one of the many possible paths. As you get closer to completion, all those early choices will either support you or haunt you. But unlike a painting, you can't step back, scrape off the canvas, and try again. Opening Night is going to hit whether you are ready or not. And too often, that means you open with the stupid idea you had six weeks ago, before you even saw the blocking or met the costumer or heard the music, and not the great idea you got just as the crew was starting to hang the plot.

Like the movies, most of the people involved sign on in later parts of the project. Just to give a basic perspective, here's about how it goes;

Back a year prior or so, the theater needs to plan a season and get the brochures out. Theaters are largely dependent on season subscribers, and that program is what tempts them into signing for a season's worth of tickets in a year that hasn't even started yet. You might have the Producer, the Artistic Director, a hand-picked Director or two, and in some theaters a liaison to new playwrights, at these meetings.

When the shows are picked this small team then negotiates for the rights. And right here is the first big switch point that can throw a train, when it has just left the city and is on open track. Because there is always the possibility that some bus-and-truck show will come into one of the big union houses around you and pull your rights. Apparently, a production of "Gypsy" at a High School in Alameda threatens ticket sales to the massive remount coming out of Broadway and playing down in Union Square.

Anyhow. The show is picked and the top people look for directors and designers. There is usually some effort to construct a team; to look for people whose styles match, or whose personalities don't clash too badly. You hire a composer if you want a through-composed sound design; an engineer if it is more like a musical. You don't put a Kubrick and a Hockney in the same production, but team a neo-realist with a neo-realist instead.

Designers in the theater world are very busy folk. At my level, I get from $500 to $1400 for a design. My process from first meeting to Opening Night spans as much as six months. Which means, like the majority of us, I'm working on half-a-dozen shows at the same time. (For some reason, a good half of the lighting designers in this area are also Master Electricians for some other theater. Which is to say -- they also have a day job to make that rent!)

(Which also means, considering the relatively small number of people in this end of the industry, you run into the same guy on opposite ends of the ladder; one day you are a day hire hanging his plot, another day, he's hanging yours!)

That first two-to-six months is meetings. These are, when things go well, where the basic show gets conceptualized. In general the big staff meetings are just touch-base, how's-progress sorts of things. The real creative ideas get hacked out in one-on-ones, usually involving the director as one of the ones.

In what I think of as the perfect model, the first "person" to speak is the script. Well, that should go without saying. The next up is the Director. From the Director comes the overall concept of the show. It is expected that the Design staff will come up with creative ideas far beyond anything the Director had conceived. But the Director still has the final say.

The Directors I have loved working for were those that had a concept they could express in a single sentence. The wonderful Aldo Bozzonini was one of these. At the first meeting for Lillian Hellman's "The Other Side of the Forest," he said "I see this family as a pit of vipers." We took that theme and ran it into the ground, delivering a steamy, creepy, and bitterly poisonous production.

Personally, I feel that for most shows lighting designs are deeply dependent on, and should follow after, the scenery and blocking. As a Lighting Designer I see myself as largely helping the set to be what it wants to be to best support the Director's concept and the needs of the play. Other designers see it differently; they want to lead off with a concept and force the other departments to support them.

This does amplify part of what makes the train take so long to come up to speed and so impossible to slow when it gets there. As a lighting designer, I am largely dependent on the shapes and colors of the scenery, and the blocking of the actors.

Back to that schedule. From as long as eight weeks or as short as five weeks before tech, the cast meets. Call-backs were probably just two weeks prior to the First Reading, and it is not unusual for a key role to be cast literally on the first day of rehearsal.

It takes a cast a minimum of a week to get off book. The first week, and often the second week, is over before the show is blocked. Which means, depending on the particular theater's schedule, by the time you've actually got the entire cast in the hall doing the play in some semblance of how it is to be done on stage, you have from one to three weeks before Tech.

It takes from two to five days to hang and focus a show. Depending on the show, it can take 24 hours or more to set the looks and write the cues. This means, in a typical small theater setting, the lighting designer is looking at under a week from the first full rehearsal they've been able to watch to the moment they need to hand completed plot and paperwork to the electricians.

If you are lucky you've conceptualized already. You've had a look at swatches from costumes and chips from scene painting, and you've met privately with the director to hash out general ideas. But this means you have, basically, a mere handful of days to do the largest part of what a Lighting Design is. To break down concepts into angles and colors, to work out how to stretch the inventory to give you coverage and options, and struggle through all the compromises of available hanging positions and desired angles (whilst simultaneously juggling in your head a minutia of details of circuit runs, pipe space, dimmer capacity -- it's like trying to write a program from the bottom up, dealing with all the details simultaneously instead of being able to divide the problem into manageable chunks).

That train is in motion. In five days, the Master Electrician and the lighting crew get added to the team. In another five, board operator, follow spot operators, and your cues are in the hands of a Stage Manager who intends to be calling that show.

And here is where you do the most creative part of your work: in the horrible pressure of that too-soon deadline. It won't be physically possible to make up for anything but the smallest mistakes. There simply isn't enough time (or the over-hire budget) to scrap the plot and hang another. When you hit focus, it either works (with some adjustment), or it doesn't.

When you start writing cues -- and when you get into tech and you are on headset to the booth, staring at actors on stage who look like walking dead (when they aren't in the Valley of Shadows), and you've got a panicking Director yammering in your ears -- all of your art comes down to what you can do with tweaking a few knobs. All the important decisions are behind you.

(Not to say it doesn't happen. I've been there. I've been the Master Electrician when we had to re-hang half a plot, or swap out every single color. I've been there when lights couldn't make Tech, and we had to catch up during Tech Week and only a day or two out from the first audience. Those are the days when you aren't just riding the train; you are running pell-mell along the top as it hurtles towards a low tunnel.)

The choices narrow, and the consequences of veering from the path become greater and greater. Not just for your own efforts and those of your crew, but for the cascade effect to every other department and person. The well-prepared but edgy actor who has to stop in the middle of an intense scene so Lights can fix a cue. The costumer who is trying out a new fabric and needs to see if it will work under the lights. The photographer who is taking the pictures that will be in the local paper and on the lobby display.

When it goes well, it is a wonderful kind of artistic collaboration under intense pressure. You are inspired by what the other people are doing, and by their ideas, and they in turn are inspired to new creative choices by what they see in yours. It is the closest thing I know to a jazz combo jamming before a live audience. Because all that interplay is happening, all those ideas are coming together, being built by everyone present.

And if anyone slips, it all comes crashing down in a moment.

Thursday, December 23, 2010

Better Living Through Chemistry

There are several tools I have found extremely handy as a theater technician: chicken sticks, Maglites (now, alas, superseded by LED flashlights). But many of those tools are really more like, well, substances.

10) Varathane Diamond Finish. Mist it over the paint job, give it a couple of coats, and even tap shoes won't scratch it when you are done. Available in matte through high gloss.

9) Diet Coke. Believe it or not, this is the secret ingredient you add to the mop water to keep dancers from skidding off your Marley.

8) WD-40. Often abused and sometimes maligned, it is the quick and dirty lubricant and cleaner for most things metal that you don't mind getting greasy and smelling of chemicals.

7) Silicone spray. The good brands of silicone spray lubricant leave an invisible film of liquid Teflon to help drawers glide, pulleys work cleanly, and even stop floors from squeaking.

6) Foam mounting tape. Hard to clean up, but fast and secure for holding those little bits of audio electronics to a table or shelf where they won't get knocked down and trod on. Works really nice inside project boxes.

5) Board tape. Also known as artist's tape, this low-residue smooth-surface white tape is the ONLY tape that should be applied to the surfaces of mixers and other audio gear in order to mark channel assignments.

4) Nexcare flexible tape. The only hypoallergenic micropore tape for sticking wireless microphone elements to actors' fragile faces. You can buy it in rolls and save a bunch over the (handier) tape dispensers.

3) Goo Gone. Nothing removes the slimy residue of old tape from cables like this stuff. I buy mine at Walgreens.

2) Tuner Spray. Specifically, Caig DeoxIT. Don't get the Radio Shack crap; pay the money for Caig. This stuff penetrates any switch or potentiometer to get out the crackles, and should be sprayed on every connector as part of your maintenance. Don't use it on faders, though.

1) Gaff tape. Gaffer's Tape is NOT duct tape. It is stronger, it tears cleaner, it is available in matte black, it bears up under heat and moisture better, and it doesn't leave anywhere near as much nasty sticky residue (nor turn into what duct tape turns into after a few weeks out in the open). Do not bring duct tape into a theater. Ever.

So What is Theatrical Sound?

So, imagine you are in a band, or in sales at the local Circuit City, or maybe just know how to solder. And you have some friends at the local community theater, or in the drama department of your school. And they say; "Hey...we're about to do Sound of Music for our Fall show. Do you think you could come down and help with the sound?"

Especially at a smaller or less experienced theater, they may not know what it is they need, and what your job will eventually entail. Let me lay it out for you up front; your responsibility becomes the total sound environment of that building.

Everything. If you want to make that show the best it can be, your job does not stop at putting some microphones on the actors. It begins with looking at the acoustical space (say, finding out if you can drape a particularly reflective wall, or if the noisy HVAC can be turned off during the performance), extends down into the pit (you might even end up helping to program the synthesizers!) and can also encompass backstage monitoring and paging systems so the actors in the dressing rooms can figure out where they are in time to make their entrances.

But let's stop and quantify this. Basically, the tasks you may be faced with divide into the following categories:

1) Vocal Reinforcement (amplifying the actors so that the audience can hear and understand them properly).

2-a) Orchestra Reinforcement (amplifying, shaping, clarifying the orchestra mix for the benefit of the audience)...

2-b) Playback (the systems to send pre-recorded music to audience and to actors). (Or not 2-b: that is the question!)

3) Foldback (sending orchestral material to the actors, so they can find their place and pitch; sending vocal material into the pit so the conductor can hear the singers in return; and sending specific instruments around the pit so different members of the orchestra can hear what they need to hear to play together).

4) Effects Play-back and Processing (sound effects, and special processing for environmental effect or to shape specific performances).

4-b) Practical Effects (real doorbells, phones and the like, wired to be operated as effects but part of the total sound picture).

5) Monitor and Hearing Enhancement (not always tied together, the systems that allow cast in the Green Room to hear where they are in the performance, and late-comers to watch and listen from the lobby before there is a convenient place to seat them; AND the system made available to the hearing-impaired patron, otherwise called the Assistive Listening Devices).

6) Communications (in large venues a proper intercom and paging system already exists. But you still might end up having to repair it, expand it, patch into it, or otherwise work with it).

7) General Noise Abatement (being the person to speak up about noisy fans and lighting effects, putting rugs backstage, acoustic treatment of the building and the pit, etc.).

Let me break down this seemingly indigestible list with a look at what I've got in a show that opened this weekend:

There are a bit over twenty wireless microphones -- body mics -- on the cast. These are mixed by the house technician from a booth in the back of the auditorium, and sent into the house speakers.

In addition, there are a half-dozen mics scattered around the orchestra pit, basically mic'ing by section, which are also sent to the house speakers during certain moments when the orchestra (which is in a covered pit) would otherwise not be clear (or powerful) enough.

The last major thing the audience hears (besides -- no small matter! -- the direct acoustic energy from the on-stage performers and the pit orchestra) is sound effects, played back from a laptop operated from the booth, and sent to different sets of speakers dependent on the desired placement in the acoustical space (aka, where the sound should seem to be coming from).

In addition to this, selected instruments (mostly piano and bass) are picked up by microphone and DI and sent to a combination of floor monitors and hanging monitors surrounding the acting area, reinforcing the parts of the music the cast most needs to hear.

Going in reverse, the wireless microphones are submixed down to a small vocal monitor at the feet of the conductor, in the pit, so she can hear the singers.

So that's essentially three semi-independent systems, with their own set of mics and speakers, running in different directions. In a different house I work at, we have two different sets of speakers aimed at the audience; the vocal reinforcement is sent to one and the orchestration is sent to another; that separation in space makes it easier for the audience-listeners to hear them as separate elements.
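Keeping track of which cue feeds which set of speakers is mostly bookkeeping, and it's worth writing down before tech. When I'm planning, something as simple as this hypothetical table is enough (the cue numbers, file names, and output names are all invented for illustration; show control software like Qlab keeps the real version):

```python
# Hypothetical cue sheet: each effect names the speaker sets that
# place it in the acoustical space.
CUE_SHEET = {
    12: {"file": "thunder.wav", "outputs": ["house_left", "house_right"]},
    13: {"file": "doorbell.wav", "outputs": ["upstage_practical"]},
    14: {"file": "train.wav", "outputs": ["house_left", "house_right", "sub"]},
}

def outputs_for(cue_number, sheet=CUE_SHEET):
    """Which speaker sets does this cue feed? Empty list if unknown."""
    cue = sheet.get(cue_number)
    return cue["outputs"] if cue else []
```

The point isn't the code; it's that "where should this sound seem to come from?" is a decision you make per cue, and it deserves to be written down somewhere the operator can see it.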

Both houses have existing monitor and communication systems, at least, so I didn't have to deal with those. At another house where I worked as Master Electrician, I can describe those linked but essentially independent systems in detail:

A Clear-Com base station, serving two channels of headset-type intercoms with stations and jacks spread around the theater (backstage, booth, grid, etc.) The Stage Manager's headset was tapped to allow him or her to use the headset in paging mode, speaking over the monitor feed normally sent to a series of 70-volt speakers spaced around the dressing rooms and Green Room.

This monitor system was, in turn, driven by a single microphone hung over the stage. The Sennheiser hearing enhancement system piggy-backed on this same microphone, driving three infrared emitters that could be picked up on headset units given out by the box office to hard-of-hearing patrons.

(There was also in this house a third communications system, consisting of a 48v Bogen intercom system. It was only used for communications between House Manager and Stage Manager.)

The show and systems you end up on may be as simple as a CD player, a pair of speakers, and two floor mics; but it still helps to think in terms of Reinforcement/Processing, Foldback, Effects, Communications, and Monitor.

For each system, and at each stage of the game, you should be thinking of the total sonic environment. What does the audience ultimately need to hear? What is important to tell the story, communicate the emotion, be honest to the harmony and structure of the music?

You start and end with listening. Before you throw a piece of technology at a problem, see if it is a problem, and if that is the best place to fix it. Maybe, rather than mic the bass, piano, drums, oboe, and strings, just to get them up level with that one loud trumpet player, there is a way you can take the trumpet down instead. Talk to the conductor and investigate options before you start throwing microphones around.

Same for the cast. Same for everything. Is there a door slam in the show? If the shop can rig up a proper wooden door in a frame, the actor can slam the door; it will nine times out of ten sound more realistic, and ten times out of ten have better timing.

Which is not to say all technology should be avoided. But what you should look at is the basic problem you are trying to solve, and solve it in the most elegant manner possible. Maybe a couple of floor mics will handle the chorus vocals fine, and save you from having to cover an entire cast in wireless mics. Conversely, maybe a pre-recorded telephone ring is simpler than running wire down to an on-stage phone and making or purchasing a bell ringer.

Always keep in the back of your mind that you will be stuck with this for five weeks of constant performances, operated by tired, stressed people who may not be that technical; and if an effect flubs or a mic goes out, the whole intricate ballet of acting and dance and lighting that is a performance may fall apart. Whatever you do, make it as bulletproof as possible, and easy to diagnose and repair.

Be prepared to defend the necessity of your choices. A truism quoted by sound designer and composer James LeBrecht: "Everyone in theater knows two jobs; their own and sound." People will be constantly shocked by the complexity and price of what you want, will not believe how sensitive some of the details can be (like specific mic placement, or routing sound cables away from electrical wiring), and will doubt even the basic physical principles involved.

You'll find yourself in more than a few "Scotty" moments ("I canna change the laws of physics, Captain!"). Because, as James LeBrecht noted, everyone thinks they know how sound works. You practically have to beat them over the head with a hard-bound, 500-page "Fundamental Principles of Musical Acoustics" before they accept that their naive intuition about some specific sonic situation may be WRONG. Just because, for instance, they've never heard of intermods or the White Spaces edict doesn't mean the RF spectrum is going to play along and give you trouble-free performance on those finicky wireless microphones.

But I digress...this is turning into a rant instead.

And since this essay was too poorly organized to begin with, and I'm getting hungry now, I'm going to stop here. Perhaps when I return to the subject I can describe just what a vocal reinforcement system looks like, including some of the specific choices and techniques for placing mics, protecting transmitters, and the like.