For a number of shows now I've been hearing a muffled, unfocused, inconsistent, and un-dynamic sound from our pit orchestras. And I do not know how to proceed to make things better.
The first reason we are having trouble is that we don't have the resources for the material. Broadway shows are written for a full pit, full stop. Smaller theaters usually try to fake it with multiple keyboards, each playing multiple split and layered patches, but this is problematic. More on why in a moment.
The clearest (and most consistent) sound comes from recasting the score: adapting it to a smaller pit. The jazz combo is a well-tested ensemble most musicians can fit into with minimal rehearsal time together, so that works. A tight fusion ensemble or something more unusual -- like a brass band -- takes more work to bring together, but when it comes together the sound is powerful, unique, and interesting.
But it takes time, and time is also in short supply. It is too much to expect all of the players in even a small pit to actually perform every show; through the run, subs will be swapping in and out. Few music directors can afford the time to rewrite an entire score. Nor can small theaters afford to reimburse them for that time.
Add to this limitation of resources the lack of rehearsal time. Without any assistants, the solo sound designer/mixer/engineer is already overtasked with mixing microphones, repairing microphones, creating and adjusting sound effects, and so forth. The same goes for the run; they can't spare the fingers or the ears to mix the band as well. So in the few tiny chances the designer actually gets to hear the band, there is too much else going on to really dial them in. This is even assuming the band shows up ready to play, and there aren't any major changes. And fat chance of either of those being true.
Not that it would do much good, though. The two-synths-plus-random-add-ons scheme is too wildly inconsistent, from number to number and from night to night. Maybe, maybe, if you were doing nothing else, you could figure out all the fader moves needed to pull a good blend out of the orchestra from moment to moment, and either memorize them or program them into scenes. Because a musical isn't like a single band's set. It is more like a battle of the bands, in which different numbers in the show have significantly different sounds...and needs.
Take just the drums. In one number, the drummer is called on to rock out on sticks. In another, to soft-shoe a stir-and-slap with brushes. No single mic placement and mixer setting will suit both.
But even saying this, I don't think it is possible. I'm listening to the keyboard patches, and the splits and layers aren't blended with each other. Which means there is essentially nothing I can do from the console. If the strings layer is overpowering the piano layer on a single keyboard, it is going to be that way no matter what tricks I pull on the send.
As a sound designer, I know that there is no such thing as a correct blend of frequencies (and activity, and phase). It depends on the playback. Each system, each situation, and each playback volume will emphasize certain bands and certain kinds of sound (impulsive versus continuous, for instance). Which means the only way to establish the proper blend between multiple layers and splits on multiple simultaneous keyboards is to adjust those patches from the house, at performance volume. And there isn't anyone who can do that.
The reasons you end up with these insane layers and splits and flying patch changes are several. First, you don't have the time to take apart a score that was designed around the voicing of a clarinet in one place, a tuba in another. Second, the score has too many parts that are melodically or harmonically necessary; if you leave them out, you don't get the tunes you paid for. Lastly, there is a need to fill sonic space. A single trumpet is just too lonely by itself (especially since no part in the original score actually plays through, so any single-voice approach is going to have huge gaps of tacet). So you need all this junk just to try to fill in the picture and make it look a little less like a Lichtenstein blow-up of just the chin and eye.
I've got a book on orchestration that notes (particularly when writing for less experienced performers) that you need three instruments on a part, never two; two will never quite be on the same pitch or moment, but with three, the two that are closer to each other on any one note mostly mask the third, resulting in a blend that sounds more in tune than any of the individual instruments. Same book also recommends playing all the harmonies on piano, so if an inexperienced player drops out for a bar you've still got their notes covered.
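A rough bit of arithmetic (mine, not the book's) illustrates the point, assuming two players nominally on the same concert A, one at 440 Hz and one drifting to 442 Hz:

\[
f_{\text{beat}} = |f_1 - f_2| = |442\ \text{Hz} - 440\ \text{Hz}| = 2\ \text{Hz}
\]

That slow 2 Hz wobble is exactly what the ear hears as the pair being out of tune; add a third player somewhere between them and, roughly speaking, the one strong regular beat breaks into several weaker, irregular ones, which reads as ensemble rather than as bad intonation.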
Which is a roundabout way of leading into a central conundrum: especially with the lack of rehearsal, the poor working conditions (crowded, dark pits with poor sight lines and compromised monitors), and so on, instrumental lines are often too clumsy to leave naked. But... when you mask the many tiny errors, and fill out the sonic picture, by adding the rest of the band, you also get a mushier sound.
Backline leakage compounds the problem. In a small theater, the direct acoustic sound of louder instruments like drums and trumpet, the personal amps of the guitarists, bassists, and keyboard players, and the pit and stage monitors all leak out into the audience space.
To fight through this sonic mush, and to put the instruments back into proportion, you need to amplify. The threshold is set by whichever misbehaving instrument (usually the drums, but often the lead keyboard) is so loud you can't hear the other instruments over it. You have to amplify EVERYTHING until you've overpowered that unbalanced blend with one that actually works, one that features all the instruments equally.
But as the backline level rises -- or, rather, as the unbalanced and unlistenable backline leakage rises -- the amplified sound has to take more and more precedence in the final mix.
Why is this a problem? Well, besides the overall volume wars, and the diminishing returns, and the way small noises like pit chatter also get amplified excessively, this amplified sound strips out all the natural resonance of acoustic instruments in a space. You are left with only what the microphones hear -- close mics (because the feedback threshold and the leakage of other pit instruments demand them) in compromised positions (because you don't have line of sight to them and are helpless when the musicians kick them over, because there is no time in the process for a proper sound check and you have to proceed on guesswork, and because the performance varies too much for any single position to be anything but a compromise).
So you've replaced the natural sound of the pit with a poorly done amplified version. Add the thin, dead sound of sampled instruments on top of that, and the result is trash.
This is happening to the pit players themselves, of course. They also end up in volume wars, demanding more and more material in their monitors until they can't hear themselves, then demanding more of themselves, in an ongoing spiral of destruction. They can't hear themselves, they can't hear each other, and they can't blend as a section. And the only fix they can think of is to turn everything up even more: play louder drums, turn up their cabs, and demand more monitor. Which is even more backline, of course, and leads to an even more amplified sound for the poor audience (but now produced by people who might as well be playing in different rooms).
Late one night, after finishing a two-show day, I went downtown to deposit a check. There was a jazz combo playing in a club across the street. Drums, guitar, upright bass, trumpet. Players who blended with each other, who obviously could hear each other, who sounded great even filtered and indirect from a building on the other side of the street -- and who required no (or at least minimal) electronics to get there.
So it can be done. The question is how to achieve it in a pit. How can we get a pit that can hear each other and create a blend internally? Because there really is no way -- not within the current process and the current schedule -- for the FOH microphone mixer to also be correcting for the band's errors from moment to moment.