Sunday, January 9, 2011

A place I work at has decided they don't need a designer for their next show, but they do want to pay someone a stipend to "run the microphones." They offered that stipend to me. As my business partner put it, the offer is just insulting. I've been trying to figure out exactly why it is insulting, so I can communicate that to them and we can work our way towards a better solution.
The essential problem, of course, is that they are trying to get professional services under the table. "We don't need a mechanic, we just need this engine repaired. We'll pay you for doing and undoing the bolts, though." "We don't need a programmer; we already figured out what the code needs to do. We'll pay you to type it into the computer, though." "We don't need a musician, we just need someone to play these notes. We'll pay you for the hour you spend at the wedding, though."
The latter case is perhaps the most subtle. There is a reason musicians get paid for "services," not for hours. Part of it is that you can't go from one two-hour performance to another fast enough to make a living at an hourly rate; without a minimum call expressed as four-hour blocks, you couldn't book enough work to make the rent. Another part is that you aren't just renting a person; you are renting their instrument. More than that, you are renting the upkeep of that instrument, the tuning, the practice, the repairs, the time in music school before all of it. You are paying, in short, professional wages because it is a job that takes professional skill. The cost of acquiring and maintaining that skill has to be pro-rated into the cost of that musician's time.
Professionals also come with certifications, with professional contacts, with insurance and bonding, and with professional responsibility; their job does not end, legally or morally, when the two hours they actually spent on site are up. They remain responsible in some way for what they designed or built.
So why would someone think they could get away with doing an end-run around a necessary technical and design skill and hire someone under the same conditions they'd hire a day laborer?
The reason likely lies with a misunderstanding of what a designer does. And hence the title of this entry.
A sound designer doesn't make sound effects. Well, sound effects may be one of the end results, but they are the product, not the process. You do not hand a designer a shopping list of sounds to be "purchased," as if you were filling a grocery order.
I've been there, actually. I had one particularly horrid show on which, due to my own inexperience as much as theirs, I let an assistant director lead me around by the nose. He had decided, and he convinced me, that he needed to approve all the raw effects. As it turned out, he was lying about that; the director wasn't depending on him to do this. Of course he lied to her and claimed I'd asked him to do it in the first place. We spent a merry month with me creating effects at home, sending them to him, and him rejecting 90% of them for reasons of his own, without even giving the director a chance to listen to them.
Anyhow. An actual sound designer decides both whether and how sounds indicated in the script should be realized, and whether and how sounds that are NOT explicitly indicated in the script should be realized. In the former case, the script may state, baldly, that "birds are heard chirping." It takes the intelligence of a designer (or a canny director) to realize that this would distract from the intense and intimate scene in the middle of the act, and that the birds are better established at the top of the act, then faded away slowly.
In the latter, it takes someone whose responsibility is sound to realize that even though the script makes no indication of river sounds in the scene, a little use of them subliminally will suggest the idea of movement and transition that is the emotional drive of that particular scene.
And in both, it takes someone with specific skills in sound to realize there is no such thing as "the sound" of a gunshot, or a train, or a cat, or anything else. There is an infinite variety of sounds, and nuance, and you need trained ears to find the one that sounds right for THAT specific moment of THAT specific play in THAT specific theater (with all of its own peculiarities of acoustics).
I have spent years gaining the skills to know why a "cat" that sounded great on your PC's speakers doesn't sound right when it is played through the theater's speakers -- and, better yet, how to fix it. That is a skill that took a great many hours both in the theater and outside of it (in "professional development," as we call it: classes, reading, networking, research, experimentation, training, and so on).
A sound designer does not mix mics. All the important decisions regarding the mix are made before a single fader is moved. Choice of position, of element, choices in EQ and processing, external design choices such as acoustical treatment of the pit or the physical location of an off-stage singer -- all of these are vastly important. Reducing the job to just moving the faders is like telling a graphic artist to work with nothing but an eraser and a can of white household latex. You don't take away the very tools you need to do the job! Nor is it exactly fair to claim you are only hiring someone to move the faders (and thus can rationalize paying them shit wages) when in reality the person will have to do much more.
The problem, besides the insult of trying to get skilled help at unskilled wages, is that when you pretend there isn't a skill involved, you take away some of the ability to use that skill.
As a DESIGNER, I can say to a director, "This doesn't work, for good technical or artistic reasons which I can defend on the basis of years of experience. Here is my suggested change."
As an OPERATOR, I can't do this. Whether it is right or wrong, and even if there is a good solution at my fingertips, I have to nod and say "Yes, sir!" regardless of what short-sighted or stupid request is being made.
It removes from the professional the ability to do their job right, along with the courtesy of being treated as a professional and the wages that are necessary to maintain those professional skills. At best, you are working behind the client's back, replacing the starter they didn't know needed replacing (and going out of pocket for the parts), or writing the code they didn't realize actually needed to be written (even if you have to work on it at home). At worst, they catch you at it. "How dare you play that note sharped! I don't care what key signature it is in! What the hell is a key signature anyhow? Now just play the note as written!"
Why am I insulted by this offer? Why should I be insulted by the chance to be treated as an idiot and a peon while secretly trying to save their show, getting shit wages to do top-level work while every bit of good work I do risks either being undone or actually getting me in trouble for doing it in the first place?
People, find yourselves a monkey. Maybe after your first few paying audiences walk out at intermission complaining about the sound, you'll get a clue.
Friday, January 7, 2011
The Basics of Mic'ing a Cast III
Now we can actually hang mics on actors. This requires at least passing attention to, and perhaps coordination with, costumes, make-up, and wigs. My personal philosophy is that we are never going to be able to hide the fact that we are reinforcing their voices; I don't try to hide the microphones, but I do try to keep them from being distracting.
First, the transmitter. The transmitter is about the size of a pack of cigarettes. Most take a pair of double-A batteries, and most batteries can safely get you through two performances. I cannot stress enough how important it is to track and monitor battery usage. Make an iron-clad routine of checking and replacing batteries; every mic needs to be checked before every show. It is NOT fun to have the batteries die when an actor is already on stage.
This means batteries are not a small cost. Multiply, say, 16 channels of wireless by two batteries each by five weeks of performances by five performances a week, and you've got several hundred bucks' worth of batteries.
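To make the scale concrete, here is a back-of-the-envelope sketch in Python. The per-battery price is an assumption (bulk pricing varies), and the count assumes the safe routine of a fresh pair in every pack for every performance:

```python
# Rough battery budget for a run. The battery price is an assumed bulk figure,
# and the count assumes fresh batteries in every pack for every performance.
channels = 16          # wireless mics in the rig
batts_per_show = 2     # pair of double-A's per pack, per performance
shows_per_week = 5
weeks = 5

total_batts = channels * batts_per_show * shows_per_week * weeks
price_each = 0.60      # assumed price per alkaline AA, in dollars

print(f"{total_batts} batteries, roughly ${total_batts * price_each:.0f}")
# -> 800 batteries, roughly $480
```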
The modern rechargeables -- the nickel-metal hydride stuff -- now work in most mics (the older rechargeables did not put out sufficient voltage). There is a bit of a start-up cost for them, and you have to be even more methodical about making sure they get back in the chargers after every performance.
Most theaters have constructed a belt for the mic pack; this is a wide belt of elastic with velcro fasteners and a pouch to hold the actual transmitter. Usually costumes will build and track them. My preference is to treat the belt as part of the costume; once the actor has found one that fits, it stays with their costume and is washed with their costume. At other theaters, it is part of the Mic Wrangler's job to hang up the belts to dry and to put the mics in them before each performance.
The placement of the belt depends on the costume and the physical needs. Many women hang it low, so it doesn't obscure the natural waist. Actors who have to sit down a lot or do other physical business place it higher, with the belt wrapping around the upper abdomen. Some need it placed as high as the small of the back, for which you can make a special "child-carrier" style mic belt. For roles involving a great deal of tumbling and rolling about, a placement under one arm, like a shoulder-holstered pistol, will work better.
And then you will have a show like Gypsy, or a role like "Princess Tiger-Lily," where there isn't enough clothing to hide a mic belt. For a recent Peter Pan we consulted with wardrobe and modified Tiger Lily's bra so she could carry the transmitter there. For a production of Cabaret I made a mic bag from black taffeta with a thinner belt, so that when the actress disrobed it just looked like part of her garter belt. Another option, strange as it may sound, is to place the transmitter in the actress's wig. Now you understand why I mentioned you may need to coordinate with wardrobe and wigs!
And while we are talking about naked people... Before the transmitter goes into the pouch, it goes into a condom. Actors sweat, and water is never a good mix with electronics. (A warning: some actors have latex allergies. Actors, it seems, have LOTS of allergies. It is not easy to feed them!) This means someone has the unenviable task of shopping for economy packs of extra-large, unlubricated condoms -- which, when they land in the same order as double-A batteries, surgical tape, and alcohol prep pads (very handy for wiping the oil from an actor's face before taping on a mic), can get you some very strange looks indeed.
Since a lot of people seem to be hitting this old entry, I'm updating it with some new comments. I just worked with mic-in-wig on, of all shows, "The Drowsy Chaperone." JANET has backless dresses and lots and lots of quick changes, and wore a wig for the role. So here is what we did: the actress French-braided her natural hair up on her head and pinned it. We double-condomed the mic pack and stuffed most of the extra mic cable inside the condoms. She put the pack on top of her head, we pinned the wig cap over hair and pack, pulling the element out just enough to bobby-pin it near her hairline, and the wig went on top of that. Look, ma, no wires.
Many wireless mics come packaged for lecture use, with the intention of clipping the element to a tie, lapel, or shirt collar. This does not work well for theater. Besides the fact that the sound level goes up and down as the actor's head moves (the mouth is moving in relation to the mic), placement on the upper chest puts the mic in the shadow of the chin. You end up with a hollow, displeasing sound.
The most natural sound is, for most people, from a mic placed high on their forehead, just peeping out from behind their hair. Many wigs make this an obvious choice -- and the element can be clipped into the wig cap itself. However, many theatrical shows are period dramas, and far too many involve hats (as well as other headgear and wigs that do NOT support having a microphone under them).
I just finished Shrek, Jr. with a grade-school cast. Because of innumerable mic changes and the general skill level of the children, the company chose to use lapel positions for everyone. I was able to push a few microphones onto caps, ears, and otherwise out of chin shadow, but there were still significant volume changes and plenty of handling noise. I ran bus compression at near-limiter levels with an extremely fast attack, and rolled off everything below about 200 Hz, and that tamed the handling noise a little -- at least to where it didn't deafen everyone when it happened.
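For the curious, here is a minimal sketch of that sort of processing chain in Python (using NumPy and SciPy). The threshold, ratio, and attack values are illustrative assumptions, not the actual settings from that show:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_200(x, fs):
    """Roll off everything below roughly 200 Hz before it hits the bus compressor."""
    sos = butter(2, 200, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def bus_compressor(x, fs, threshold_db=-18.0, ratio=20.0,
                   attack_ms=0.3, release_ms=80.0):
    """Crude peak-following compressor: near-limiter ratio, very fast attack."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        coeff = atk if level > env else rel         # fast attack, slower release
        env = coeff * env + (1.0 - coeff) * level   # envelope follower
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)    # 20:1 behaves almost like a limiter
        out[i] = sample * 10.0 ** (gain_db / 20.0)
    return out

# 'mix_bus' would be a mono array of float samples at sample rate fs:
# tamed = bus_compressor(highpass_200(mix_bus, fs), fs)
```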
All of which is why my fall-back position is the ear. At its most basic, the element is positioned on the cheekbone just forward of the actor's sideburn. The cord runs over the top of the ear, behind the ear, then is tucked up along the hairline to the back of the head...at which point it descends into the shirt. It is secured with surgical tape: one piece just behind the element itself, one piece behind the ear, and one at the nape of the neck (for actresses with longer hair, C7 -- the seventh cervical vertebra -- is the default). For an actress with a low-cut dress, the usual is to tape down her spine. Remember, when placing the tape at the back of the head, to have the actor turn their head away from you; you want to leave just enough slack to permit full and comfortable head movement without pulling the tape off!
(Nexcare flexible tape is the top choice for theaters...just like Goo Gone is the secret potion for cleaning tape residue OFF the mic cords.)
The ear position can be made more secure and more repeatable (ensuring the mic is placed the same way night after night) if you attach it to a wire clip. My current system is to hand-bend an ear clip from the light-weight coat-hangers you get from the dry-cleaner. My design goes over the top of the ear, but it doesn't wrap around the bottom; it ends in a ball that sits just behind the ear lobe. Once it is bent (the wire is flexible and can be re-bent to fit each actor), I file off the rough edges and cover the entire wire with heat-shrink tubing. Then I heat-shrink the mic element to it.
The clip will still need a little tape to reinforce it, but for most actors it is more comfortable and often more secure. It is, however, also more visible and less flexible.
Because, just like everywhere else in sound, it is all about location, location, location. An inch below the cheekbone and the sound will be heavy and thick (you pick up more resonance through the cheek). Go too high and it will sound nasal; get too close to the mouth and you'll hear lip noise and possibly even breath pops. And if the actor is going to use a phone, lie on the ground, or (as is usually the case!) sing a duet, you need to plan ahead and try to rig the mic on the opposite side.
As said above, many wireless sets are packaged for presenters, and thus you'll have some black soup can of an element that was designed to clip to a lapel. There really isn't much you can do to hide it. Also, the typical elements packaged with a wireless system range greatly in quality. The Shure SM84/86 are not bad sounding, although they are giant soup cans that are difficult to tape to an actor's face. The Shure SM93 is the cheapest element they make, and the smallest, so of course everyone buys it. It is fragile, however, and sounds like crap. Sennheiser's otherwise lovely EW series comes out of the box with a black golf ball...but at least the golf ball sounds decent.
Although there are other good elements out there, and you will hear lots of arguments about the pros and cons (especially when you get into the high end), the wireless equivalent of the SM57 is Countryman's B3. The element is match-head sized, the cord is robust, it is available in six colors that at least get you closer to matching skin tone and hiding the mic, and it sounds great -- and it costs only about twice what you'd spend on a '93. Given how much longer it lasts (and you can actually clean them in alcohol, too), the savings are obvious.
Getting towards the higher end, four hundred bucks will get you a Countryman E6, their take on the "Madonna mic." Unlike that bulky headset, the E6 is pin-head thin, with a whisker of a wire boom. From forty feet away it is completely invisible.
Now, you can actually paint the cords to hide the mics better. Paint darker rather than lighter: a too-light cord reads as a scar, while a darker one reads as a shadow and disappears. I use Deco-color paint markers myself. They aren't easy to apply, and tape residue gets gummier on top of them, but they don't seem to hurt the mic, they last a while without coming off on clothing, and you can clean them off with Goo Gone.
I suppose I should also have mentioned the "halo" rig, which uses a thin elastic to secure a microphone at the hairline. I've never built one myself, though; I've always clipped to a wig cap, used hair clips, or taped to the forehead. As I said, my philosophy is that we can't hide that we are using mics, so I don't try to make them completely invisible. I just try to make them not distracting...I really don't want the cast to look like they've been replaced by Secret Service agents.
Concluded with:
The Basics of Mic'ing a Cast IV
Thursday, January 6, 2011
The Basics of Mic'ing a Cast II
Your next problem is establishing a good signal path.
The RF (radio frequency) environment is getting increasingly cluttered. Unfortunately, many of the things found in a working theater, from electric motors, to fluorescent lights, to the dimmers of the stage lighting system, make radio-frequency noise. So, to a lesser extent, do laptops, cell phones, and electronic gear like keyboards and signal processors.
Add to this hash the very small set of frequencies we are legally allowed to use.
Back before the DTV change-over, analog television stations were not permitted on adjacent channels within the same service area. This meant, essentially, that every other TV channel (within a particular service zone) was unoccupied. That's where wireless microphones went. Wireless microphones are low-power, unlicensed, analog transmitters.
The Digital Dividend changed this. The existing broadcast channels were packed in tighter towards the bottom end of the broadcast spectrum, and the upper end (above 698 MHz) was sold off to new users. Also, in several designated service areas, additional frequencies were carved out for new emergency management and public service use.
This means the number of vacant television channels is much smaller. In addition, that remaining "White Space" may now be used for new services (streaming video and the like) offered by Comcast and others. Fortunately, after getting a visit from The Mouse (the Disney Corporation) as well as the NFL, and of course Broadway, the FCC relented and is asking these new users to ensure they don't interfere with pre-existing theatrical systems.
Which isn't really much of a protection, even for major established theaters.
What can you do? Well, for one, check your frequencies. If you have gear in the 700 MHz band, get rid of it. As of June 12, 2010, using it is a violation of federal law. Ever see one of those WWII movies where the Resistance cell is on its little radio, trying to get word of the new secret weapon back to the Allies, while outside in the streets a nondescript van with a big antenna on top creeps closer and closer? Well, the FCC has those. (Or, rather, the companies that spent a pretty chunk of change in the big spectrum sell-off have those.) They will find you.
Second, check your available bands. Shure and Sennheiser both maintain current charts of frequency usage in your area. Sennheiser will print you out a customized chart -- Shure has a piece of software, the Wireless Workbench, which will include that data when it calculates a new set of compatible frequencies.
In addition, if you can, scan the environment you will be operating in. Several modern wireless microphones will perform a scan of the local RF environment, and report back on clear frequencies/interference sources. One or two of these (like the latest Shures) will even connect to a program you can run on your laptop.
Another low-end solution is an all-band scanner. I use a Uniden Bearcat, which covers all of the wireless band, and has helped me to spot intermittent interference sources (like a local pirate radio station).
Done yet? Not quite. Besides interference on the primary frequency, radio gear is susceptible to combined frequencies. When two radio signals of roughly comparable strength mix in any non-linear stage (a receiver front end, a transmitter output), they heterodyne, creating new sum and difference frequencies. Every new frequency added to the environment adds new combinations, increasing the number of false signals geometrically.
In short, although you might have four wireless microphones that work without interference when tested alone, when you switch them all on at the same time one of them might suddenly start showing interference and drop-outs.
Fortunately, you don't have to do the math yourself. Shure's Wireless Workbench software will calculate possible cross-interference, as well as automatically generate a fresh set of compatible frequencies.
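If you are curious what that software is checking under the hood, here is a minimal sketch of the classic third-order intermod test, plus the 700 MHz sanity check from above. The frequencies and the guard spacing are illustrative assumptions, not Shure's actual algorithm:

```python
from itertools import permutations

# Candidate operating frequencies in MHz -- illustrative values only.
freqs = [566.200, 566.850, 568.300, 570.125]

GUARD_MHZ = 0.300           # assumed minimum spacing from any intermod product
FORBIDDEN = (698.0, 806.0)  # the sold-off 700 MHz band: off-limits since 2010

# Anything still tuned into the sold-off band has to go.
for f in freqs:
    if FORBIDDEN[0] <= f <= FORBIDDEN[1]:
        print(f"{f} MHz is in the 700 MHz band -- retire this unit")

# Third-order intermod products (2*f1 - f2) from every ordered pair of transmitters.
for f1, f2 in permutations(freqs, 2):
    imd = 2 * f1 - f2
    for victim in freqs:
        if victim not in (f1, f2) and abs(imd - victim) < GUARD_MHZ:
            print(f"2x{f1} - {f2} = {imd:.3f} MHz lands on {victim} MHz")
```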
You need to site the receivers so they have a clear view of the talent. Salt water can trim 90 dB off your radio signal, and an actor is largely a bag of salt water. Concrete and steel can also reduce the signal strength. The antennas of most wireless packs radiate most strongly in the horizontal plane, and siting the receiving antenna too high can also lower signal strength.
All modern systems are diversity systems. The reasoning is this: at any point, a single radio path can be blocked or, worse, can combine with its own out-of-phase reflection (like the ghost image on an old analog TV). When this happens, signal strength drops, perhaps below the detection threshold. Diversity receivers have two antennas and switch between them depending on which has the stronger signal. To make this work best, though, you want the two antennas oriented as close to 90° apart as possible. Setting both antennas at 45°, "rabbit ear" style, is a good compromise among all these positioning needs.
Next -- actually hanging the mics on the actors!
Continued in:
The Basics of Mic'ing a Cast III
Wednesday, January 5, 2011
The Basics of Mic'ing a Cast
As I've explained in other journal entries, there is more to sound design for a musical than hanging microphones on the actors. There is foldback to the conductor, foldback to the stage, monitors for dressing rooms, communications lines to backstage, reinforcement of the orchestra, playback of effects. And, often, the theater's system will be less than optimal (or the theater won't have one at all), meaning you'll have to re-arrange speakers, equalize the house, set delay times, and otherwise build and adjust a sound system that will work properly for the show.
Be that as it may, for this post I'm considering only the subset of amplifying the talent on stage.
It starts, this time, with the cast list. If you are lucky, this is a show with a small ensemble (either there are a limited number of roles, like You're a Good Man, Charlie Brown, or the show is specifically designed around a small number of skilled singers, like Ain't Misbehavin'). The cast list, for these shows, is your mic list. Nice and simple.
Most shows, however, will have both a core cast and an ensemble; the latter have minor speaking parts and also sing in the chorus. Your classic musical is nicely organized; Oklahoma!, for instance, has under a dozen major roles. The rest of the cast is one big chorus, who come out in different costumes for each scene and sing en masse. These shows take a little more figuring. First, assign a mic to each of those major roles. Then see if there is anything left in your inventory for the chorus.
More modern musicals like The Producers break up the ensemble into not just lots of small roles, but lots of small vocal ensembles. Instead of having a single chorus singing in SATB for the big numbers, you have trios and quartets, BOHEMIAN PEASANTS and STORMTROOPERS and USHERETTES, singing as defined groups. These ones are the real pain to figure out.
Assuming there are more names in the cast list than you have wireless microphones (which is the case more often than not), the next task is a French Scene breakdown. You want to chart, scene by scene, who is in each scene -- not the character, but the actor (a single actor may be playing multiple roles). You also want to note whether they sing in that scene. This includes the ensemble, and for that you need the help of not just the cast list, but the Director and/or Music Director.
Actually, one other person in the building knows who is what in which scene -- whether Cynthia L. is a VILLAGER or a NUN in scene II.2 -- and that is the Costume Designer. Still, only the Music Director really knows whether Cynthia is a SINGING NUN -- and, even more importantly, whether he wants to hear her singing there!
Because your next stop after making up the French Scenes is to find out from the Music Director or Chorus Master who his strong ensemble singers are, and also whether there are people so far off pitch that it is better to diplomatically turn off their microphones during the songs. This, alas, only applies to the ensemble -- as much as we might sometimes like to, the major singing roles need to be heard regardless.
The Music Director will have his own version of this neat calculus; "Henry K. really doesn't blend well, but he is the only tenor in 'How Sweet the Rose' and the harmonies just don't work without the tenor part."
If you have done all of this, you now have a massive chart that shows which actor you really want on a mic at every moment in the play. Of course, you don't have enough mics. And, of course, when you look at the chart, although the OLD MAN is only in the Prologue, and WILD SALLY doesn't show up until late in the second act, the six ORPHANS all need to sing their single verse of the Christmas Song in the same damned scene in which every member of the HOUSEHOLD STAFF has one line of dialog.
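That chart doesn't have to live only on paper. Here is a hedged sketch of the bookkeeping in Python -- the scenes, names, and inventory below are invented for illustration:

```python
# Which actors need to be heard in each scene -- invented show, invented names.
scenes = {
    "Prologue":       {"OLD MAN", "ORPHAN 1", "BUTLER"},
    "I.5 Christmas":  {"ORPHAN 1", "ORPHAN 2", "ORPHAN 3", "ORPHAN 4",
                       "ORPHAN 5", "ORPHAN 6", "BUTLER", "COOK", "MAID"},
    "II.4 Finale":    {"WILD SALLY", "ORPHAN 1", "BUTLER"},
}
mics_in_inventory = 8

for scene, actors in scenes.items():
    need = len(actors)
    note = "  <-- over inventory: someone shares or hands off" if need > mics_in_inventory else ""
    print(f"{scene}: {need} voices{note}")
```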
You attend rehearsals if you can. Singing rehearsals too. You talk it out with Music Director and Director. And eventually you work out the tough choices...the compromises of who actually gets a mic, and when they have it, and if they have to hurriedly hand it off during a quick change in order for someone else to get it.
And you fold into this a few other things you've gained from experience. You know that actors will get sick and need the help the mic gives them. You also know that microphones will break, and at least one night you'll have to come up with a creative switch in order to cover a key song.
So you make the mic chart, and it will have compromises in it -- compromises you may have to fight out with the Director. Because like it or not there are almost never enough mics to cover everyone (and the smaller the theater you are in, the smaller your inventory is).
A few notes on that. Wireless microphones are fragile and expensive. The mean time between failures for a transmitter is several years, but for an element, as little as the run of two shows. If you have a dozen wireless microphones on a cast, at the end of a four-week run you will be purchasing at least one brand-new element -- at around $200 a pop. It is hard to get theater managements to understand, but you have to treat the elements as consumables; purchased and replaced just like batteries.
A decent unit costs about $800 a channel (that's for one transmitter, one receiver, and one microphone element). Rental for six weeks (a five-week run plus rehearsals) is about $600 a channel; you are better off doing without on the current show and putting the money towards a purchase instead.
Even batteries are not a small expense; if you are daring, you can get two performances out of one set of batteries, but with twenty mics in play, and most of them taking a pair of double-A's, you will spend quite a few hundred dollars on batteries before the show closes.
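Putting rough numbers on all of that -- the per-channel prices are the ones quoted above; everything else is an assumption for illustration:

```python
# Per-channel economics for one production. Prices are from the text above;
# the element-failure rate is the rule of thumb of roughly one per dozen mics per run.
purchase_per_channel = 800      # transmitter + receiver + element, in dollars
rental_per_channel = 600        # six weeks: rehearsals plus a five-week run
element_cost = 200
channels = 12

runs_to_break_even = purchase_per_channel / rental_per_channel
elements_lost = max(1, channels // 12)

print(f"Purchase pays for itself in about {runs_to_break_even:.1f} rented runs per channel")
print(f"Budget ~${elements_lost * element_cost} for replacement elements this run, "
      f"plus a few hundred dollars in batteries")
```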
But there aren't really good options otherwise. Floor mics and hanging mics can get you through a few trouble spots but they can't cover a whole show. Really, the best option for when you have few mics in stock is to use them only when absolutely necessary -- on the weakest voices in the cast.
It is almost always better, outside of certain stylized vehicles like Rent, to focus on reinforcement; don't think of amplifying everything as if it were a pop song. Instead, just try to subtly boost vocal levels, control the orchestra levels as much as possible, and treat the show as if it were entirely acoustic.
And that is long enough for one morning's essay. I will come back to this in another.
Continued with:
The Basics of Mic'ing a Cast II
Tuesday, January 4, 2011
The Two Nations of Sound Reinforcement
I got called last month for an emergency come-in-and-FOH-our-show-the-last-sound-guy-left-suddenly. The design I was working from was unique in several ways that I'm going to be thinking about for a while. But it did propel me to expand on my earlier remarks about there being two schools of reinforcement design.
I've seen this happen more than once: a sound person -- with lots of experience, credits working good venues, etc. -- comes in to hang wireless mics on the cast. The first thing they do is ring out the room. The next thing they do is bring each and every member of the cast center stage, to stand there silently while the designer rings out their individual wireless mic. When every possible ounce of gain-before-feedback has been achieved, they crank the compression up to 4:1 and open the mics of every person on stage in each scene, holding them all just below the feedback threshold.
This means earsplitting levels and a crunchy, distorted sound, with that distinctive tinny ring that comes from mics driven right below the feedback threshold. This means dialog is too loud, singing may still be too soft, ensembles never blend, and the few cast members who either didn't get a mic or had a mic go dead on them are completely inaudible. As the show continues, hearing fatigue sets in and the sound seems to get softer and softer and softer just as the show should be getting bigger and bigger.
You can probably guess I don't belong to that school.
In my opinion, the sole advantage of that method is that it (temporarily) shuts up the people who want you to "turn it up" because they can't hear something. It also satisfies those occasional Upper Management visitors who give you a quiet but firm, "The sponsors are in the audience tonight, and they'd like to hear how good that $60,000 sound system they bought for us works. So you could show it off a little, hmmm?" (By which they mean: loud is good, so make it loud.)
Of course, since you opened the show at cranked levels, you've already shot your wad. There is no place to go if a louder song comes along, or a singer gets a dry throat. Or, as the evening wears on, what had been loud begins, in context, to feel quiet.
Hearing fatigue is a very real thing. High frequencies fatigue faster, meaning that as exposure to high-decibel sound goes on, everything begins to sound increasingly heavy and dull and needs its high frequencies boosted to sound normal again. This is why experienced mixers keep levels moderate and stop every few hours to rest their ears, lest they create mixes that are tinny and obnoxious.
There are two illusions going on here; two very familiar illusions that crop up all the time in my work, and that are often at the base of the anti-conspiracy-theory writing I do. The first is that human senses are calibrated. The second is that sensory information can be collapsed into a single quantity. They both can be brought under the heading of "the senses don't lie." Which, in fact, they do...constantly and creatively!
Let's unpack the first illusion: that the senses are calibrated (i.e., "loud" always seems equally loud, and "red" always looks the same red). Everyone should have had the experience at some point in their lives of wearing colored sunglasses. A modern version I've been enjoying is using a red LED to see by, and even, a time or two, to read in bed. After forty-five minutes or so, the automatic color-correction of the human eye has made that red light more of a dark amber (it can't QUITE compensate enough to perceive it as fully white). But when I switch it off and turn on the room lights, the white walls and fixtures are, for a few minutes, the most lovely blue-green.
The human eye automatically and constantly adjusts its white point, and so subtly you are rarely aware of it. You "see" a sheet of white paper as white out in the noontime sun, and at night under an incandescent desk lamp, even though the color temperatures of those two sources are some two thousand kelvin apart. Indeed, to make that sheet of paper actually appear "yellow" in your perception, you have to drop down to the color temperature of a candle -- another thousand kelvin below even the desk lamp.
Film photographers may remember "tungsten" film versus "daylight" film, which was the retail version of balancing the film stock to the lighting environment. Shoot indoors on daylight film and it would faithfully record a yellow-orange world. But since that wasn't the world our color-adapted eyes had seen, we cheated the film to record something closer to what we perceived. (In the cinema world the choices, and the filters, were much more complex.)
(This, by the way, is but ONE of the reasons why it is so insanely difficult to answer "What color is the Martian sky?" The scientifically illiterate have this illusion that color is color, and NASA can either print the right color on a photograph, or print the wrong color for some nefarious purpose of their own.)
But back to the subject. The accompanying illusion is that our perceptual tools are linear: that twice the power into an amp will deliver a sound that feels twice as loud, and that two sounds of equal power will be perceived as equally loud. This is also not true. Our perception of loudness is moderated in part by a hard-wired expectation in our ears of a certain balance of sounds. A similar rule is what makes a super-bright "white" LED look white. It is actually nothing of the kind; the LED provides spectral peaks at points that, in ordinary light, would be part of a continuous white-light spectrum, and our brain interprets the result as a black-body curve of some characteristic color temperature.
(A very peculiar related illusion has to do with clock chimes of the old rectangular-bar style. You see, any musical tone is made up of a fundamental frequency and the even and odd harmonics of that frequency; the characteristic timbre of a sound comes from the mix of these harmonics. In the case of the clock chimes, the vibrational modes of the rectangular bar are perceived by the ear not as fundamental tones but as harmonics of a fundamental. The brain fills in this "missing" fundamental, and what you perceive is a pitch lower than would seem possible from a six-inch chunk of brass.)
The ear's non-linear response across frequency is what gave rise to the old "loudness" knobs on consumer stereos; the knob would selectively boost the bass and treble frequencies characteristic of the profile of a louder sound. The brain, fooled, would perceive the sound as "louder" even though the power demands on the amp had only changed marginally.
(And, yes, I'm over-simplifying here to make a point).
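As a toy illustration of that trick -- not a calibrated equal-loudness contour, just the shape of the idea -- boost the extremes of the spectrum more as the listening level drops, and the ear reads the result as "louder":

```python
def loudness_boost_db(listening_level_db, reference_db=83.0,
                      bass_per_10db=4.0, treble_per_10db=2.0):
    """Toy 'loudness button': return (bass, treble) shelf boosts in dB.

    The further below the reference level you listen, the more the low and high
    shelves get pushed up, mimicking the ear's reduced sensitivity at the
    extremes of the spectrum. The slopes are made-up illustration values.
    """
    deficit = max(0.0, reference_db - listening_level_db)
    return (deficit / 10.0) * bass_per_10db, (deficit / 10.0) * treble_per_10db

print(loudness_boost_db(63.0))   # listening 20 dB quiet -> (8.0, 4.0) dB of shelf boost
```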
These illusions are the bane of my existence as a lighting and sound designer. I have, through experience and training and knowledge, the ability to compensate intellectually for these perceptual effects. My clients not only lack those tools, they are unaware of the need for them.
I have had directors come in at noon and complain that I changed the lights -- because, "Last night it looked great but today everything is all yellow." And I have had similar conversations about sound. Most people are simply incapable of understanding when they don't understand; as far as they are concerned, the light is yellow, or the sound is loud, and it is so obvious it can't possibly wrong (and any explanations are simply obfuscation).
To add one more complexity to the mix, the primary goal in sound reinforcement, especially for vocals (stage singing and acting), is intelligibility. Intelligibility is not volume! This doesn't work for the stereotypical American tourist shouting English words at the poor local, and it doesn't work when the audience is wrinkling their brows trying to figure out why a Plane in Maine rests lightly on the Brain.
Professionals have formula, and a specialist vocabulary for describing intelligibility and the factors that impact it. I'm going to try to go for a simpler, plain-language explanation here. To wit: intelligibility is dependent on;
1) Sufficient sound pressure in the frequencies that carry useful information (aka, vowel sounds, 500 to 1K, consonants, 2-4K, fricatives and sibulants contain energy in 6-8K. POTS -- the Plain Old Telephone System -- achieved voice intelligibility with a frequency range of 300 to 3400 Hz).
2) Lack of significant distortion in terms of time/phase smearing of the information in these frequencies.
3) Lack of competing material, both in the same frequency band, in the same time domain, or out of band and either generally too powerful and/or too busy, or of a nature (strong single pitches, particularly at neighboring frequencies) as to cause acoustic masking. (The latter, to simplify again; given a strong signal peak, the ear actually shuts down activity in the neighboring bands.)
One of the prime factors against intelligibility in a sound reinforcement situation is reverberation, and the similar effect of signal path delays. Simply put, both of these put the voice in competition with itself; with another complex signal sharing the same time and frequency domain.
(Actually, our brains are very good at summing information that arrives within a certain window of time. If a person is talking at you in an ordinary room, chances are a good half of the acoustic energy reaching your ears is not directly from their mouth, but has followed various reflection paths. Within 20 milliseconds or so, the brain can sum all of these reflections up to create for itself a stronger and better defined original; actually enhancing the reception of the sound. Outside of that window, the reflections begin to be treated as a second, competing sound instead.)
This is why amplification is not a panacea. As you amplify the voice, you also bring more of the reverberation around the room into perceptual range. Also, your sound system has various delays, acoustic and electronic, built into it. The end result is to smear the voice across time and the make it harder to pick out the important information at the same time you've boosted that information. As you increase volume further, you start to fall into a circle of death, where each increase only makes the voice harder to understand. (Among all the various effects are secondary vibrations brought out by the strong signal; from vibrating wall panels and fixtures, to tiny waves inside the inner ear itself, until it shuts down to protect itself. A sound that is loud enough becomes distorted inside the human ear itself.)
The "other" school of reinforcement design only exacerbates this problem by using severe EQ to remove those same frequencies you wanted in the first place! So instead of enhancing the 2KHz range that would lead to a more intelligible voice, you are boosting 200Hz rumble and 8Khz sizzle that act to mask the very frequencies you need to understand the words of the singing and dialog. The extreme levels, and the ringing of near-feedback, only ensure that aural fatigue will set in quickly, and the ears of the listeners become increasingly less able to pick out the voices you purportedly were reinforcing.
The saddest part is that client, directors, sponsors, audience members will all strain to make out words (and largely fail), but will never think of blaming the sound designer because the pain in their ears is all the evidence they need that the sound is nice and loud.
I think there is a reason I am getting asked back to a growing list of organizations. I've been walking into theaters, often when there is clear evidence of the "volume at any cost" school having been working there before, and I've made a sound that made people happy.
Not at first. Oh, the arguments at first, when people realize their ears aren't hurting, and decide that the sound man must be Doing Something Wrong. And when it seems so obvious that something needs to be turned up, and I put my back up and don't let them do it (finding instead some other way, generally through corrective EQ and the lowering of competing levels), to achieve what had been desired.
Yes, it's the simple answer to turn it up. If you can't hear the guitar, then "turn it up." Well, a smart mixer will first look to see why the guitar isn't speaking. Is it improperly EQ'd? Does it not sound like a guitar? Is the sound muddy and unfocused? Are those frequencies that give it its distinctive timbre being suppressed?
And if it is a good guitar sound but still not speaking, you look for ways not just to power through the rest of the band, but to sidestep it. Perhaps a little panning will bring it into a place on the soundstage where it will speak. Perhaps a little EQ will bring out frequencies for which there is a gap it can speak through.
Usually, you end up tweaking a couple of things. Maybe it's clashing with the piano. So pan the piano one way, the guitar the other way. Nudge the guitar at 400Hz (for that "woody" sound) and 12KHz (for the finger noise), and squish the piano so it is mostly speaking in the 1-4KHz mid-range.
Sorry for the seeming digression. The same kinds of things can also be done to let the clarity of the singer come through the acoustical environment.
To my mind, the first step is to make the singer sound good. And sound like themselves. Instead of a mic that is rung out to the ninth degree, start flat, roll off the unneeded bass (when I can, I brick-wall somewhere around 125 Hz. Many boards have a more sloping filter available, and the roll-off might start as high as 400 Hz to get the desired effect down in the 100's) and maybe a little off the top, where little but sibulants and feedback live. Run it though a basically flat system, at a gentle level, and tweak the mic just a little to bring out the individual character of the singer.
With that for a starting point, you don't have to run levels up to the sky in order to carry along the broken fragments of the intelligibility frequencies you wanted to boost. You don't have to ramp up the levels until you've saturated the house, reverberation time has stretched into the tens of seconds, and all the lighting instruments on the catwalk are rattling in sympathetic vibration.
The downside is you don't have quite as much sheer gain. You do get more bang for your buck, though; what gain you have gets you much more in intelligibility. And nothing prevents you from notching out a few of the worst offenders -- just so long as you don't go so far it totally changes the character of the voice.
And, well, this essay has gotten long enough. Perhaps later I'll go into more detail about the "other" other school; the one that believes in enhancing the natural sound of the person or instrument being mic'd, and controlling the entire sonic environment, rather than reaching for dramatic EQ as a way to win a volume war.
I've seen this happen more than once: a sound person -- with lots of experience, credits at good venues, and so on -- comes in to hang wireless mics on the cast. The first thing they do is ring out the room. The next thing they do is bring each and every member of the cast center stage, to stand there silently while the designer rings out their individual wireless mic. When every possible ounce of gain before feedback has been squeezed out, they crank the compression up to 4:1 and open the mic of every person on stage in each scene, holding them all just below the feedback threshold.
This means earsplitting levels and a crunchy, distorted sound, with that distinctive tinny ring that comes from mics driven right below the feedback threshold. This means dialog is too loud, singing may still be too soft, ensembles never blend, and the few cast members who either didn't get a mic or had a mic go dead on them are completely inaudible. As the show continues, hearing fatigue sets in and the sound seems to get softer and softer and softer just as the show should be getting bigger and bigger.
You can probably guess I don't belong to that school.
In my opinion, the sole advantage of that method is that it (temporarily) shuts up the people who want you to "turn it up" because they can't hear something. It also satisfies those occasional Upper Management visitors who give you a quiet but firm: "The sponsors are in the audience tonight, and they'd like to hear how good that $60,000 sound system they bought for us works. So you could show it off a little, hmmm?" (By which they mean -- loud is good, so make it loud.)
Of course, since you opened the show at cranked levels, you've already shot your wad. There is no place to go if a louder song comes along, or a singer gets a dry throat. Or if, as the evening wears on, what had been loud begins, in context, to feel quiet.
Hearing fatigue is a very real thing. High frequencies fatigue faster, meaning that as exposure to high-decibel sound goes on, everything starts to seem increasingly heavy and dull, and needs the highs boosted just to sound normal again. Which is why experienced mixers keep levels moderate and stop every few hours to rest their ears, lest they create mixes that are tinny and obnoxious.
There are two illusions going on here; two very familiar illusions that crop up all the time in my work, and are often at the base of the anti-conspiracy theory writings I do. The first is that human senses are calibrated. The second is that sensory information can be collapsed into a single quantity. But they both can be brought under the header of "the senses don't lie." Which, in fact, they do...constantly and creatively!
Let's unpack the former illusion first: the one that the senses are calibrated (i.e., "loud" always seems "loud," and "red" always seems "red"). Everyone should have had the experience, at some point in their lives, of wearing colored sunglasses. A modern version I've been enjoying is using a red LED to see by, and even a time or two to read in bed! After forty-five minutes or so, the automatic color-correction of the human eye had turned that red light into more of a dark amber (it can't QUITE compensate enough to perceive it as fully white). But when I switched it off and turned on the room lights, the white walls and fixtures were, for a few minutes, the most lovely blue-green.
The human eye automatically and constantly adjusts its white point, and so subtly that you are rarely aware of it. You "see" a sheet of white paper as white out in the noontime sun, and at night under an incandescent desk lamp, even though the color temperatures of those two sources are some two thousand kelvin apart. Indeed, to make that sheet of paper actually appear "yellow" in your perception you have to drop down to the color temperature of a candle (roughly another thousand kelvin below the incandescent lamp).
Film photographers may remember "tungsten" film versus "daylight" film, which was the retail version of color-balancing the film stock toward the lighting environment. Shoot indoors on daylight film, and the film would faithfully record a yellow-orange world. But since that wasn't the world our color-adapted eyes had seen, we cheated the film into recording something closer to what we perceived. (In the cinema world the choices, and the filters, were much more complex.)
(This, by the way, is but ONE of the reasons why it is so insanely difficult to answer "What color is the Martian sky?" The scientifically illiterate have this illusion that color is color, and NASA can either print the right color on a photograph, or print the wrong color for some nefarious purpose of their own.)
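To see how small the "cheat" really is, here is a minimal sketch of a "gray world" white balance in Python (my own illustration, assuming numpy and a linear RGB image array; film labs obviously did nothing of the sort digitally): scale each channel so the scene average comes out neutral, which is the digital cousin of swapping film stocks.

```python
import numpy as np

def gray_world_white_balance(image):
    """Rough 'gray world' white balance: scale each channel so the
    average of the scene comes out neutral, much as our eyes (and
    tungsten film) shift the white point toward the light source."""
    # image: float array of shape (height, width, 3), linear RGB in 0..1
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    gains = gray / channel_means          # boost the starved channels
    return np.clip(image * gains, 0.0, 1.0)

# Toy example: a scene lit by a warm, red-heavy source.
rng = np.random.default_rng(0)
scene = rng.random((4, 4, 3)) * np.array([1.0, 0.8, 0.5])  # red-rich light
balanced = gray_world_white_balance(scene)
print(balanced.reshape(-1, 3).mean(axis=0))  # channel means now roughly equal
```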
But back to the subject. The accompanying illusion is that our perceptual tools are linear. That is, that twice the power into an amp will deliver a sound that feels twice as loud, and that two sounds of equal power-into-amp will be perceived as equally loud. Neither is true. Our perception of loudness is moderated in part by a hard-wired expectation in our ears of a certain balance of sounds. A similar rule is what makes a super-bright "white" LED look white. It is actually nothing of the kind. But the LED provides peaks that, in any ordinary light, would be part of a white-light spectrum, and our brain interprets them as a black-body curve of some characteristic color temperature.
(A very peculiar related illusion has to do with clock chimes of the old rectangular-bar style. You see, any musical tone is made up of a fundamental frequency plus the even and odd harmonics of that frequency. The characteristic timbre of a sound comes from the mix of these harmonics. Well, in the case of the clock chimes, the vibrational modes of the rectangular bar are perceived by the ear not as fundamental tones, but as harmonics of a fundamental. The brain fills in this "missing" fundamental, and what you perceive is a lower pitch than would seem possible from a six-inch-long chunk of brass.)
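If you want to hear that trick for yourself, here is a toy demonstration of my own (assuming numpy and scipy are at hand): build a tone from harmonics two through five of 220 Hz, leave the 220 Hz partial out entirely, and most ears will still report a pitch of 220 Hz.

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100
DURATION = 2.0
FUNDAMENTAL = 220.0   # the pitch you will *hear*, though it isn't in the signal

t = np.linspace(0.0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)

# Sum harmonics 2..5 of the fundamental; the 220 Hz partial itself is absent.
tone = sum(np.sin(2 * np.pi * FUNDAMENTAL * n * t) / n for n in range(2, 6))
tone /= np.max(np.abs(tone))   # normalize to avoid clipping

wavfile.write("missing_fundamental.wav", SAMPLE_RATE, (tone * 32767).astype(np.int16))
```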
The ear's non-linear response across frequency is what gave rise to the old "loudness" knob on consumer stereos; it selectively boosted the bass and treble frequencies characteristic of the profile of a louder sound. The brain, fooled, would perceive the sound as "louder" even though the power demands on the amp had only changed marginally.
(And, yes, I'm over-simplifying here to make a point).
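Still, rough numbers help show how far from linear the ear is. The figures below are the usual rules of thumb (roughly Stevens' law), not measurements of any particular amp or listener: doubling the electrical power buys only about 3 dB, while a perceived doubling of loudness takes something closer to 10 dB.

```python
import math

def power_ratio_to_db(ratio):
    """Decibel change for a given ratio of electrical power into the amp."""
    return 10.0 * math.log10(ratio)

def perceived_loudness_ratio(db_change):
    """Rule of thumb: perceived loudness roughly doubles for every +10 dB,
    i.e. loudness ratio = 2 ** (dB / 10)."""
    return 2.0 ** (db_change / 10.0)

# Doubling amplifier power buys you only about +3 dB...
db = power_ratio_to_db(2.0)
print(f"+{db:.1f} dB")                                    # ~ +3.0 dB
# ...which most listeners judge as only ~23% louder, not twice as loud.
print(f"{perceived_loudness_ratio(db):.2f}x perceived")   # ~1.23x
```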
These illusions are the bane of my existence as a lighting and sound designer. I have, through experience and training and knowledge, the ability to compensate intellectually for these perceptual effects. My clients not only lack those tools; they are unaware of the need for them.
I have had directors come in at noon and complain that I changed the lights, because "last night it looked great but today everything is all yellow." And I have had similar conversations about sound. Most people are simply incapable of recognizing when they don't understand; as far as they are concerned, the light is yellow, or the sound is loud, and it is so obvious it can't possibly be wrong (and any explanation is simply obfuscation).
To add one more complexity to the mix, the primary goal in sound reinforcement, especially for vocals (stage singing and acting), is intelligibility. Intelligibility is not volume! Sheer volume doesn't work for the stereotypical American tourist shouting English words at the poor local, and it doesn't work when the audience is wrinkling their brows trying to figure out why a Plane in Maine rests lightly on the Brain.
Professionals have formulas, and a specialist vocabulary, for describing intelligibility and the factors that impact it. I'm going to try for a simpler, plain-language explanation here. To wit, intelligibility depends on:
1) Sufficient sound pressure in the frequencies that carry useful information; roughly, vowel sounds at 500 Hz to 1 kHz, consonants at 2-4 kHz, and fricatives and sibilants with energy up in 6-8 kHz. (POTS, the Plain Old Telephone System, achieved voice intelligibility with a frequency range of 300 to 3400 Hz. A rough band-energy sketch in code follows this list.)
2) Lack of significant distortion in terms of time/phase smearing of the information in these frequencies.
3) Lack of competing material, whether in the same frequency band and the same time window, or out of band but either too powerful, too busy, or of a nature (strong single pitches, particularly at neighboring frequencies) to cause acoustic masking. (The latter, to simplify again: given a strong signal peak, the ear actually shuts down activity in the neighboring bands.)
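Here is the band-energy sketch promised in item 1: a toy of my own (the band edges are just the rough figures from the list, not any broadcast or telecom standard) that reports how much of a signal's energy lands in the vowel, consonant, and sibilance ranges.

```python
import numpy as np

# Crude speech-band bookkeeping: how much of a signal's energy falls in
# the bands that carry vowels, consonants, and sibilance. Band edges are
# the rough figures from the list above, not a standard.
BANDS = {"vowels": (500, 1000), "consonants": (2000, 4000), "sibilance": (6000, 8000)}

def band_energy(signal, sample_rate):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Toy test signal: a 700 Hz "vowel" plus a touch of 3 kHz "consonant" energy.
sr = 44100
t = np.arange(sr) / sr
test = np.sin(2 * np.pi * 700 * t) + 0.3 * np.sin(2 * np.pi * 3000 * t)
print(band_energy(test, sr))
```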
One of the prime factors against intelligibility in a sound reinforcement situation is reverberation, and the similar effect of signal path delays. Simply put, both of these put the voice in competition with itself; with another complex signal sharing the same time and frequency domain.
(Actually, our brains are very good at summing information that arrives within a certain window of time. If a person is talking at you in an ordinary room, chances are a good half of the acoustic energy reaching your ears is not directly from their mouth, but has followed various reflection paths. Within 20 milliseconds or so, the brain can sum all of these reflections up to create for itself a stronger and better defined original; actually enhancing the reception of the sound. Outside of that window, the reflections begin to be treated as a second, competing sound instead.)
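That window is easy to play with in code. A minimal sketch of my own, standing in a short decaying sine for a spoken syllable: mix a single delayed copy back onto the signal at 10 ms, inside the integration window, and again at 80 ms, where it starts to behave like the competing echo described above.

```python
import numpy as np

SAMPLE_RATE = 44100

def add_reflection(direct, delay_ms, gain=0.5, sample_rate=SAMPLE_RATE):
    """Mix a single delayed, attenuated copy of `direct` back onto itself."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    reflected = np.zeros_like(direct)
    reflected[delay_samples:] = direct[:-delay_samples] * gain
    return direct + reflected

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
voice = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # stand-in for a spoken syllable

inside_window = add_reflection(voice, delay_ms=10)    # fuses with the original
outside_window = add_reflection(voice, delay_ms=80)   # reads as a separate, competing echo
```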
This is why amplification is not a panacea. As you amplify the voice, you also bring more of the reverberation around the room into perceptual range. Also, your sound system has various delays, acoustic and electronic, built into it. The end result is to smear the voice across time and make it harder to pick out the important information even as you boost that information. As you increase volume further, you start to fall into a circle of death, where each increase only makes the voice harder to understand. (Among the various effects are secondary vibrations brought out by the strong signal, from vibrating wall panels and fixtures to tiny waves inside the inner ear itself, which at high enough levels shuts down to protect itself. A sound that is loud enough becomes distorted inside the human ear itself.)
The "other" school of reinforcement design only exacerbates this problem by using severe EQ to remove those same frequencies you wanted in the first place! So instead of enhancing the 2KHz range that would lead to a more intelligible voice, you are boosting 200Hz rumble and 8Khz sizzle that act to mask the very frequencies you need to understand the words of the singing and dialog. The extreme levels, and the ringing of near-feedback, only ensure that aural fatigue will set in quickly, and the ears of the listeners become increasingly less able to pick out the voices you purportedly were reinforcing.
The saddest part is that clients, directors, sponsors, and audience members will all strain to make out words (and largely fail), but will never think of blaming the sound designer, because the pain in their ears is all the evidence they need that the sound is nice and loud.
I think there is a reason I am getting asked back by a growing list of organizations. I've been walking into theaters, often where there is clear evidence of the "volume at any cost" school having worked there before, and I've made a sound that made people happy.
Not at first. Oh, the arguments at first, when people realize their ears aren't hurting and decide that the sound man must be Doing Something Wrong. And when it seems so obvious that something needs to be turned up, I put my back up and don't let them do it, finding instead some other way (generally through corrective EQ and the lowering of competing levels) to achieve what they were after.
Yes, turning it up is the simple answer. If you can't hear the guitar, then "turn it up." Well, a smart mixer will first look to see why the guitar isn't speaking. Is it improperly EQ'd? Does it not sound like a guitar? Is the sound muddy and unfocused? Are the frequencies that give it its distinctive timbre being suppressed?
And if it is a good guitar sound but it's still not speaking, you look for ways not just to power through the rest of the band, but to sidestep it. Perhaps a little panning will bring it to a place on the soundstage where it will speak. Perhaps a little EQ will bring out frequencies where there is a gap it can speak through.
Usually, you end up tweaking a couple of things. Maybe it's clashing with the piano. So pan the piano one way, the guitar the other way. Nudge the guitar at 400 Hz (for that "woody" sound) and 12 kHz (for the finger noise), and squish the piano so it is mostly speaking in the 1-4 kHz mid-range.
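Those pan moves are simple to sketch. The example below is my own illustration using the common sin/cos "constant power" pan law (not any particular console's implementation): park a stand-in guitar half right and a stand-in piano half left, and each keeps roughly the same perceived level wherever it sits.

```python
import numpy as np

def constant_power_pan(mono, position):
    """Pan a mono signal across a stereo field.
    position: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Uses the common sin/cos ("constant power") law so the track keeps
    roughly the same perceived level wherever it sits."""
    angle = (position + 1.0) * np.pi / 4.0      # map -1..1 onto 0..pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=-1)

t = np.arange(44100) / 44100
guitar = np.sin(2 * np.pi * 196 * t)            # stand-in guitar track
piano = np.sin(2 * np.pi * 262 * t)             # stand-in piano track
mix = constant_power_pan(guitar, +0.5) + constant_power_pan(piano, -0.5)
```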
Sorry for the seeming digression. The same kinds of things can also be done to let the clarity of the singer come through the acoustical environment.
To my mind, the first step is to make the singer sound good. And sound like themselves. Instead of a mic that is rung out to the nth degree, start flat, roll off the unneeded bass (when I can, I brick-wall somewhere around 125 Hz; many boards have a more sloping filter available, and the roll-off might start as high as 400 Hz to get the desired effect down in the 100s), and maybe take a little off the top, where little but sibilants and feedback live. Run it through a basically flat system, at a gentle level, and tweak the mic just a little to bring out the individual character of the singer.
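For what it's worth, here is a sketch of that low-cut step using a scipy Butterworth high-pass (my own stand-in; a console's filters aren't built this way, though the shapes are comparable): a high-order filter approximates the steep "brick wall" at 125 Hz, while a low-order one gives the gentler slope starting up around 400 Hz.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 48000

def low_cut(signal, cutoff_hz, order, sample_rate=SAMPLE_RATE):
    """High-pass the vocal channel. A high order approximates the steep
    'brick wall'; a low order gives the gentler sloping roll-off."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    return sosfilt(sos, signal)

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
vocal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)  # voice plus stage rumble

steep = low_cut(vocal, cutoff_hz=125, order=8)   # near brick-wall at 125 Hz
gentle = low_cut(vocal, cutoff_hz=400, order=2)  # sloping roll-off from 400 Hz
```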
With that for a starting point, you don't have to run levels up to the sky in order to carry along the broken fragments of the intelligibility frequencies you wanted to boost. You don't have to ramp up the levels until you've saturated the house, reverberation time has stretched into the tens of seconds, and all the lighting instruments on the catwalk are rattling in sympathetic vibration.
The downside is you don't have quite as much sheer gain. You do get more bang for your buck, though; what gain you have gets you much more in intelligibility. And nothing prevents you from notching out a few of the worst offenders -- just so long as you don't go so far it totally changes the character of the voice.
And, well, this essay has gotten long enough. Perhaps later I'll go into more detail about the "other" other school; the one that believes in enhancing the natural sound of the person or instrument being mic'd, and controlling the entire sonic environment, rather than reaching for dramatic EQ as a way to win a volume war.
Monday, January 3, 2011
Composition for the Stage
How does one compose music for a play? This essay will likely be short and rambling; I know only a little of music theory, music history, and ethnomusicology, and although I have composed for many plays, the number of shows I've composed complete original scores for is very small.
Generalized work method first. I watch rehearsals, listen to the actors to get a sense of the pace and timbre of their voices, look at costume plots and set renderings to get an idea of color, complexity, and so forth. And talk to the director. A lot. If the show is a period piece, I believe in immersion; listening to music of the period day in and day out, until I start to think a little in those patterns and styles.
As a general rule, slavish imitation of a style or period is out. First off, it may not fit the needs of the play as well that way. Secondly, you probably don't have the skills to carry it off. So you are better off doing something that carries the feeling of a musical period or idiom but is otherwise written as music for the theater (and the needs of that specific production).
Compositional tools: melodic writing is attractive. A full melody is the most powerful way of linking an idea in the play to a specific musical shape; a full melody reaches deepest into the emotions of the listener. The downside to melodic writing is that it forces you to write in full measures. This can prove difficult if you only have eight seconds to cover a scene change, or if you have to trim or lengthen a composed piece by an odd fraction of a measure when the timing of the scene changes in rehearsal.
Motivic writing sacrifices some of the power of full melody in return for much greater flexibility. A motive can be stated in a very brief time. Motives can also be inverted, convoluted, sequenced, contrasted; they can develop in fresh musical ways while still retaining their recognizable character.
The great strength of motivic writing, besides the ability to write to any length, any tempo, and any thickness of texture, is that you can develop the motives: show the same motive in a heroic light, as a comedic stumble, in a sad minor key, and so on. The downside, besides the lessened emotional impact compared to a full melody, is that fewer of the audience will recognize a motive implicitly (as they will a melody). For many in the audience, the motive will simply read as "some music is playing."
Of course a motive can be extracted from a melody, or a melody constructed from a motive. This is the best of both worlds. When I write motives, I try to say something about a character: small, ever-retreating steps for a shy character, bombastic leaps and little flourishes for a Cyrano, a sighing motif for a love interest.
Loops. I'm not speaking here of store-bought loops, like Garage Band packages or Fruity Loops; I mean fully-composed chunks of music that are designed to repeat. Both motivic and melodic writing allow loops; the former are shorter and more controllable, however. Most music software can handle multiple loops of sequenced material. Some can even apply such things as changes of instrumentation or changes of key. Thinking in loops isn't really that different from thinking in motives or melody; it is mostly thinking about music in terms of interchangeable parts that can be switched, duplicated, and so on, as needed to fit the time given for the finished piece of music.
(A friend of mine with many more theatrical compositions under his belt uses loops like this, and I see the results in his hands. My tendency is to develop, instead, and to use bridges and repeats to fill those extra spaces that turn up during the rehearsal process).
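The "interchangeable parts" arithmetic itself is trivial, and a toy sketch of my own (not how any sequencer actually presents it) makes the point: given the length of a one-bar loop and the length of the scene change it has to cover, work out how many full repeats fit and how much tail is left to fade or bridge.

```python
def plan_loop(bar_seconds, target_seconds):
    """Plan how many full repeats of a looped bar fit a target cue length,
    and how much of a final bar (or fade) is left to cover the remainder.
    This is only bookkeeping; the musical content is up to you."""
    full_repeats = int(target_seconds // bar_seconds)
    remainder = target_seconds - full_repeats * bar_seconds
    return full_repeats, remainder

# An 8-second scene change covered by a 3-second, one-bar figure:
repeats, tail = plan_loop(bar_seconds=3.0, target_seconds=8.0)
print(repeats, "full bars plus a", round(tail, 2), "second tail to fade on")
```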
Rhythmic writing. In this method of attack, you establish a beat, and the musical material on top of it is secondary. I wrote most of Agamemnon this way, precisely to avoid the problems of cutting music too tightly to action. Since nothing went outside the framework of a one-bar figure, I could fade out at any arbitrary point. Arpeggios, pedal points, and ostinati are the classical tools here; drum and bass are the modern ones. Pads are a more-or-less arrhythmic way to provide a texture.
Textural. In this method, you eschew traditional melodic development, and even classical harmony, and concentrate on the pure sound. With classical instrumentation, it helps to write in a whole-tone scale; that way, no note is ever a "leading" tone, and no cadence ever resolves. Instead the music just goes on, a tapestry of shifting textures. Textural is of course equally effective using synthesizer pads, found sounds, and the like.
One interesting advantage to the more open forms, like rhythmic or textural writing, is that you can write to cross-fade or even layer on other material as the action of the play evolves. Against a softly murmuring flow of woodwinds, a spiky clarinet can signal the appearance of Puck; and a thickening of the ongoing woodwind texture with strings, the effects of the love potion.
Software is making possible the triggering of moments like this so they fall in tempo. As of the last show I used this idea in, however, I had to use my own ears and sense of rhythm to hit the beat!
Already, there are software options where a key change could be executed by operator control in the midst of playback, or the instrumentation changed, or the tempo altered. A program called "OrchExtra" is already in use replacing pit orchestras (a mixed blessing indeed); OrchExtra allows a conductor the same control over tempo, including fermata and vamping, as they have with a live orchestra. These sorts of tools are increasingly within reach of the sound designer as well.
Writing for plays is like writing for Television and the Movies, and unlike writing for the Concert Hall. Ideas should be, as a rule, simple, clearly expressed, broadly emotional. Say Scene Three ended with Susan alone and crying at an empty table, thinking she'll never see Rick again, and Scene Four opens at their wedding. In the eight seconds it takes for the stage crew to push around the scenery and the chorus in their wedding outfits to get on stage, you've got to carry the audience through this emotional change, AND let them know it is Susan getting wed (as she's still in quick-change for the first five minutes of the new scene). You can't do a recapitulation, then a nice segue, then a full statement of Susan's theme; you haven't got the time. Sometimes you have to let subtlety fall by the wayside and get right to business with the starkly obvious -- by playing Wagner's wedding march, for instance! (Although you might choose to play it in a minor key, or destroy the closure of the cadence...)
When you are writing under dialog, the music has to stay away from the voice. Percussive sounds like guitar and piano work well. Sustained, vocal sounds like flute and clarinet, not so much. When possible, stay out of the pitch range as well: 'cello below the voice, high violins above the voice, but nothing in the same pitch range as the voice.
In the movies, the music is finished and cut in after the dialog is recorded. They can get away with brass and drums in the middle of a speech; because they bring those in during the natural gaps in the speech. We can't do that. At least, not yet!
Which brings us to timing.
Assume you do want to "Mickey Mouse" a scene. My composer friend, for instance, got to score a sword fight as if it was an Errol Flynn movie. There are two ways to go here; the music follows the actor, or the actor follows the music.
Philosophically, I dislike the second. I prefer the actor to be able to adjust their performance to the moment. An audience's energy changes from night to night. An actor develops a scene through the run of the performance, finding new places to express something a little more deeply. Forcing them to hit the chalk marks at certain words or actions is, I think, a disservice to the production. Be that as it may: if you choose this route, build in musical moments for the actor to hear. Don't ask them to count bars. Ask them to make the cross when they hear the tuba.
For the former, lacking sophisticated playback methods, there are two approaches you can apply, singly or in combination. The first is keeping the timing loose. I made the tight changes in my Play it Again, Sam monologue happen near the start of the cue. The rest of the cue got progressively looser, ending on a vamp that could be faded at an arbitrary point when the actor finally reached the end of his speech. The second is building in "catch-up" points, which is what my friend did for his sword fight. He wrote in a long sustained note timed to land close to when the fighters were in a blade lock; the operator would then take the next cue, ending the fermata, when the lock broke.
Increasingly, instead of having to move from one CD cue to another, you can layer audio tracks from playback software. So in the example above, you could do the fermata as described...but you could also add an a-tempo glissando for the moment Barrymore swings on the chandelier. Depending on the skill of the sound operator and the sophistication of the design, it can sound as if it is just one complete and contiguous piece of music, not several different things being layered and spliced.
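At bottom the layering is bookkeeping, and a toy sketch makes the idea concrete (my own illustration, not how any real playback program is built): a running base cue with a vamp, plus overlay cues the operator fires on successive GOs.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cue:
    name: str
    length_seconds: float
    vampable: bool = False      # can hold/loop here until the next GO

@dataclass
class CueStack:
    """Toy model of layered playback: a running base cue plus overlays
    the operator can fire on top of it (the fermata, the glissando)."""
    base: Cue
    overlays: List[Cue] = field(default_factory=list)
    fired: List[str] = field(default_factory=list)

    def go(self):
        if self.overlays:
            nxt = self.overlays.pop(0)
            self.fired.append(nxt.name)
            return f"GO: layer '{nxt.name}' over '{self.base.name}'"
        return f"GO: release vamp on '{self.base.name}'" if self.base.vampable else "GO: nothing left"

fight = CueStack(
    base=Cue("swordfight underscore", 90.0, vampable=True),
    overlays=[Cue("blade-lock fermata", 6.0), Cue("chandelier glissando", 2.5)],
)
print(fight.go())   # fire the fermata when the fighters lock blades
print(fight.go())   # fire the glissando as Barrymore swings
```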
A tangential remark to the above. It used to be, Sound Operator was a technical job. You got someone with lots of skills and training in not just running the complex sound equipment, but in maintaining it. As technology has given us simpler, and usually more robust, playback mechanisms it has also taken away; sound operators in smaller theaters now tend to be under-trained volunteers. Or there is no operator at all; the Stage Manager will take sound cues with their free hand (while watching the stage, calling light cues, and following along in their book).
Oddly enough, the place you get the best opportunities for creative application of sound is in the musical; most musicals even in smaller venues amplify the actors (usually with wireless mics). This means a skilled operator mixing (and maintaining) those mics. With canned music ranging from CDs to OrchExtra (as happens in the more budget-conscious productions), you also have a trained musician available. This means that adding in sounds or musical material that require a comfort with technology and good musical sense is possible in those venues, where it may not be possible in a "straight" play.
Really, at this point in the game, if you want to do a Mickey-Moused, evolving, tightly-following musical design for a straight play, you either need to be designing at a major regional theater -- or you are going to be operating the sound board yourself.
Sunday, January 2, 2011
Ninja Blues
I'm tired of being ninja. Oh, I don't mean the killing-people part. Or even the cool weapons part. I mean the wearing black and sneaking around part. I mean always having to be invisible.
I've spent most of my working life in theater. (I was also spending a LOT of time in theater through high school and college). During an actual show call, that basically means wearing black, hiding in the dark, being careful not to make a noise or shine a light. It means walking softly, carefully, on the balls of your feet (there's a lot to trip on anyhow), putting down tools carefully so they don't make a betraying sound, using hand signals when possible so not even a whisper will carry to the audience.
And even if I'm not on shift crew, I'm often having to get up to the grid or sneak out into the house in order to fix something, check on something, or otherwise do the work I need to do without making enough of a noise or light to distract the audience from the play.
It is more than just walking slowly, on tip-toe, balancing on every step. It is more than moving deliberately and thoughtfully, thinking out every move. It is being invisible. Being cellophane. Fading into the background.
Even under worklights, with no audience there, the same habits appear. Theater working spaces are often cluttered, cramped, and fragile; you have to move with the calm deliberation of a rock climber to prevent injury or scenic disaster. And they are ill-lit, too. So even during the height of the work day, on a Sunday afternoon, you are still creeping with cat-feet by the light of a flashlight, feeling your way along with your toes and trying to avoid the festoons of electrical cables, aircraft cable rigging, trick lines and so forth.
And it seems a truism that being a stage electrician -- whether you are doing sound, lighting, or rigging motors to move scenery -- will inevitably require you to wriggle under a platform with barely enough room to draw breath. As well as to, of course, scale the building and hang head-down from the catwalks high above the stage.
Working theater means working the evening shift. I come home tired and dusty and starving, but I have to putter around cleaning up and eating and trying to unwind from a gig with that same ninja care. I can't just drop my tools on the floor, clomp into the bathroom, chop up some veggies for a late-night dinner, then turn on the telly. I'd have the entire apartment complex banging on my door to quiet down.
So from the moment my car turns into one of the quiet streets of the residential neighborhood, it is back to stealth mode; walking softly, talking in a murmur, taking care to put things down softly instead of dropping them.
When I listen to music or news it is on headphones. If I cook, it is done with the same deliberation -- no clattering of cutlery allowed. Little things like laundry or vacuuming are of course dead out. All of that must wait until daylight -- those few short hours of day I have before it is back into a darkened building again, not to emerge (or break for food!) until after midnight.
I like the night. I wouldn't go as far as Sky Masterson and say it is "My Time of Day," but I am comfortable in it. But I also like the day, and I see so very little of it.
The ninja aspect of invisibility appears one more time, in a more abstract way. And that is a principle of engineering. The problem becomes quite obvious with sound design. There are only two kinds of mention of sound in a review. There is mention of problems with the sound. And there is no mention at all. No news is good news for us. The best review is the one that talks about how lovely the singing voices were and how good the blend of the orchestra was and never, ever thinks that none of that happened by accident; that someone like me was up there with flying fingers, making artistic choices and quick improvisations and desperate gambles in order to provide that seamless illusion of "merely" listening to the show.
Like the ninja, good work is invisible work. The job is done and no-one saw you.
This is an aspect of good engineering, I believe. Often, a good engineering solution fits so well that in hindsight it looks obvious. (This goes well beyond the idea that a good bit of engineering does the job and doesn't fail!) Good engineering has elegance. It also, often, has a look of inevitability, and that brings with it the illusion that anyone could have thought of it.
I try for something similar in my lighting and sound design. For me, those design tasks fall in chronological order behind directorial and scenic designs. Most of the task of a lighting design is to make the set look like the set. Again, not attracting attention to itself; just seeming to be a natural part of the on-stage environment. And this is HARD. It is very hard. But if you do it right, it looks like you didn't do anything. The lights are just right.
The same for all those parts of a sound design that aren't sound effects. But even sound effects have this. People who haven't attempted sound design have this illusion I call "The sound of a train." To them, things have a sound. You find a recording of that sound, you play it.
Well, many things do NOT sound anything like what they think they sound like (take, as a Hollywood instance, gunshots or a fist to the face!) Secondly, the sound of a thing is modified by its environment; distance, "liveness," and so forth. A recording of "A cat" will in all probability be completely unbelievable within the context of that particular play, that particular cat, that particular set of speakers and so forth.
It takes a lot of library searching, a lot of tweaking, and a lot of looking deep into your own mind and understanding the difference between the perception of sound and the physics of sound, to find that "cat" that when played is simply accepted by the audience as being, actually, the cat they see.
If you do it wrong, the director on down will complain. "That sounds like a baby!" "That cat sounds like he is dying!" "The cat is too loud!"
If you do it right, however, then no-one comments.
And that's the real pain of this. From the point of view of the director, as with the reviewer, and as with the people who will eventually be cutting your check, the "cat" you worked so hard to find seems, in hindsight, so obvious that any one of them could have picked it out.
In fact, you will often find that several shows down the road, this feeling that it is as obvious as it looks in hindsight has infected the management, and instead of hiring you they turn to their nephew with an iTunes account. Because of course anyone can download "The sound of a cat."
I'm staring that in the face with a company I've been with for four years -- someone has volunteered to design their next show for free. Which is well and good, and I can't blame this cash-strapped company for jumping on the offer. But whatever this person's competence, they inherit systems and choices and a maintenance history I invested several years in. If they walk in the door and do, basically, nothing, what I have established will largely get them through the show intact. And, thus, they can walk away with the kudos, and I risk losing the gig completely.
Invisible work doesn't get you respect (outside of other professionals). And it doesn't get you paid. When all is said and done, I can take wearing black and speaking in a whisper for my entire working life. What I can't stand is being unemployed because I did a good job.