Friday, September 13, 2013

A New Show, a Diegetic Onion

I'm in tech on a musical in a different building than usual.  Different challenges, different approach.  I thought it might be interesting to both elaborate and contrast.

The last show I did was "The Wiz."  My artistic approach was presentational, even artificial; all of the music (from the live band), the singing, and the sound effects were summed together into a mono mix, which was presented at nearly equal volume to every seat in the house.  There was no attempt at localization or worldizing, or even an effort towards separation.

In terms of effects design, I made no pretense at diegesis.  Even the sound of the twister was consciously an effect.  Actually, consciously a musical effect; it was a synth patch performed live every night on a keyboard!

The vocal reinforcement was also artificial.  Loud, obvious, with strong compression and plenty of reverb.  There were additional vocals during many numbers, and for this the back-up vocalists were in full view of the audience and singing at close range into stand mics.



The show in tech now is "The Drowsy Chaperone."  Which may be the only completely diegetic musical in the traditional style (I am discounting rock-format shows where all the songs are performed in-show by the on-stage band).

Certainly, there are a number of musicals where characters are actually singing within the world of the play; "The Sound of Music," for one.  "Singin' in the Rain" for another.  What makes "The Drowsy Chaperone" unusual is that every moment, every sound of the play within the play is explicitly meant to be coming from the on-stage phonograph.

The way I see it, there are two kinds of effects in the show.  The effects within the external show -- the New York apartment of THE MAN IN THE CHAIR -- should be approached with utmost realism.  The effects from within the musical he is listening to/imagining are just that; effects. They are whatever the people who committed the original 1928 Broadway production to shellac chose to include.  Which makes them, in my mind, more presentational than real, and localized in the same generalized sonic space as the singers and band.

Were I doing this show in a larger space, I think I would present that same mono mix of all elements, with only THE MAN IN THE CHAIR (and those elements of his world that interrupt the action) as localized (and hence diegetic).




The space I am in is compromised.  It is very small, with a ceiling too low to allow center cluster or front fills.  The actors are close, the band is backstage but loud.  Oddly enough, the vocal reinforcement becomes subtle; it can't really be anything but, because the physical layout of the space only permits a few dB gain before feedback.
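
(For the curious: the quick arithmetic behind "a few dB" is the standard potential acoustic gain formula.  The distances below are guesses standing in for the real room, so treat the number as illustrative, not a measurement of this house.)

```python
# Back-of-envelope potential acoustic gain (PAG) for a small room.
# All distances are illustrative guesses, not measurements of this space.
from math import log10

def pag_db(d0, d1, ds, d2, open_mics=1, stability_margin_db=6.0):
    """Potential acoustic gain before feedback, in dB.

    d0: talker to the farthest listener (m)
    d1: loudspeaker to the nearest open mic (m)
    ds: talker to their mic (m)
    d2: loudspeaker to that listener (m)
    """
    pag = 20 * log10((d0 * d1) / (ds * d2))
    return pag - 10 * log10(open_mics) - stability_margin_db

# Actor 6 m from the back row, speaker 3 m from the nearest open mic,
# body mic 0.3 m from the mouth, speaker 5 m from the listener, six mics open.
print(round(pag_db(6.0, 3.0, 0.3, 5.0, open_mics=6), 1), "dB")   # -> 7.8 dB, i.e. not much
```

The point of the exercise: in a room this small, the speakers are close to the mics and the audience is already close to the actors, so there simply isn't much gain available before the system rings.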

The equipment is also challenging.  I have been spoiled at my other theater; the multi-speaker Meyer system is run by a central Galileo processor with crossover, room equalization, and corrective delays.  And then we tweak further at the front end, via the digital console.

On "The Wiz" I had strong EQ notching in the main vocal bus, courtesy of the LS9's available 32-band graphics.  I also tend to do a few other tweaks to seat the vocals properly.  Plus, of course, individual tailoring of the sound of each microphone via the dynamics processing and parametric equalization available on every single channel.

I'm doing "Drowsy" on an old (non-digital) Soundcraft.  Plugged pretty much directly into a pair of JBL Eons for FOH.  For mics I have a mere handful of Sennheiser G2's, plus some chorus mics hung from the proscenium.  (As is usual for the latter, I only get useful material from them when someone is standing no more than six feet away).

And this is where achieving simplicity becomes complex.  Because the goal is simple; gentle reinforcement of the small number of body mics and some judicious area miking.  The supra-diegetic nature of the show within the show means placement or localization is unimportant.  Just push a little sound out, as seamlessly as possible.  If the amplification is obvious, this isn't a problem; we don't really know what technology the recording engineers brought into play in 1928, and we can explain quite a lot as vagaries of the recording process.

But.

To get simple clean sound means the right mic and the right speakers for the space.  And if you can't get those, then you need processing that -- with all of its compromises -- makes the mics you have and the speakers you have as clean and direct as you can achieve.

Because you are in a real world.  A hall with distinct acoustics of its own, that the speakers interact with.  This is why you have to tune the system to the house.  And you can't do that with a naked sound board and a handful of gaff tape.





Lying around the theater were many and sundry odd things that apparently were tax write-off donations over many years.  A Rane processor.  A video router.  A second board.  Multiple firewire interfaces.  The first on that list actually has the best long-term potential.  Unfortunately it can only be configured from a Windows machine, and even if I had the funds to put an emulator on my current laptop I don't have the time to get it all working.

So I turned to the last of these.  Last night was the big experiment.  And it worked well enough that I think I will run with it for the run of the show.

Reaper.

As it turns out, the MOTU firewire interface I found lying in the basement has some nice onboard DSP.  I'm using it for a low-end roll-off and a bit of compression, because the signal was hitting the inputs of my computer too hard.
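
(If it helps to picture what that onboard DSP is doing, here is a toy version in code -- a first-order low-end roll-off and a crude, zero-attack compressor.  The corner frequency, threshold, and ratio are placeholders, not the actual MOTU settings.)

```python
# A toy version of the interface's DSP: a first-order low-end roll-off
# followed by a crude, instantaneous compressor.  All settings are placeholders.
from math import exp, pi

def rolloff_low_end(samples, fs=48000, cutoff_hz=80.0):
    """First-order high-pass: subtract a one-pole low-passed copy from the input."""
    alpha = exp(-2 * pi * cutoff_hz / fs)
    lowpassed = 0.0
    out = []
    for x in samples:
        lowpassed = alpha * lowpassed + (1 - alpha) * x
        out.append(x - lowpassed)
    return out

def compress(samples, threshold=0.5, ratio=4.0):
    """Hard-knee, zero-attack compression: anything over the threshold is scaled down."""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0.0 else -level)
    return out

# Chain them: roll off the rumble, then tame the peaks before they hit the A/D.
processed = compress(rolloff_low_end([0.0, 0.9, -0.7, 0.2]))
```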

And then I'm taking that signal in, and running the DAW in real time.  Mostly for the graphic EQ, where I notched out almost excessively to wrench enough gain to make the proscenium mics worth using.  Over the next few days I'm going to experiment some more and see if I can't split out a second bus for the wireless mic send.
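
(For anyone who has never looked inside one of those notches: each band of cut is essentially a peaking filter.  Here is a minimal sketch using the standard Audio EQ Cookbook biquad; the frequency, depth, and Q are made up, not the values I actually dialed in.)

```python
# A peaking-EQ biquad (the standard Audio EQ Cookbook recipe) -- the kind of
# narrow cut a graphic or parametric EQ makes at a feedback frequency.
# Frequency, depth, and Q below are placeholders, not my show settings.
from math import sin, cos, pi

def peaking_biquad(fs, f0, gain_db, q):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2) for a peak/notch."""
    a = 10 ** (gain_db / 40.0)              # square root of the linear gain
    w0 = 2 * pi * f0 / fs
    alpha = sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * cos(w0), 1 - alpha / a
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def run_filter(samples, coeffs):
    """Direct form I, one sample at a time."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# e.g. a narrow 9 dB cut at 2.5 kHz (Q of 8) at a 48 kHz sample rate
notch = peaking_biquad(48000, 2500, -9.0, 8.0)
```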

The other tasks for the DAW are a small amount of delay, some limiting, better overall EQ tailoring, and perhaps a little reverb (at least on the wireless mics -- they sound more like they are in the space when they aren't completely dry).
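
(The delay, at least, is nothing but arithmetic: extra distance divided by the speed of sound.  The five meters below is an invented figure, not a measurement of the band-to-speaker offset in this house.)

```python
# Corrective delay: line up the FOH speakers with the louder acoustic source
# (here, the backstage band) by delaying the electronic signal to cover the
# extra distance the acoustic sound has to travel.  Distance is illustrative.
SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def delay_ms(extra_distance_m):
    return extra_distance_m / SPEED_OF_SOUND * 1000.0

# If the band is ~5 m behind the plane of the FOH speakers, delaying FOH by
# about 15 ms brings the two arrivals together; a few extra milliseconds on top
# lets the ear localize to the stage (the precedence effect).
print(round(delay_ms(5.0), 1), "ms")  # -> ~14.6 ms
```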




On TOP of this scary bit of rig is QLab, feeding out the same firewire interface.  Because the aesthetic of realistic sound within the world of the framing story -- the New York apartment -- requires multiple effects speakers.

There's a speaker hidden inside the cabinet with the phonograph.  Actually, I tried the phonograph itself as a sound source and it was wonderful sounding, warm and real.  But, alas, the old tube amp is aging and the capacitors are shot; after being run for twenty minutes it started to hiss and crackle in a way that would be unacceptable for the production.

On the other side of the stage, a cheap Radio Shack iPod speaker is hiding near the answering machine.  This is both localization and worldizing; the speaker is small and tinny like the answering machine it is simulating, and it sits in the same environment, where its sound bounces off the nearby hard surfaces in a sonically distinctive way.  Our ears are very, very good at hearing these nuances, and it adds immeasurably to the realism of a sound effect.

The phone is an actual dial phone.  I had thought I was done with this forever, but just before I could throw away my old Bell Labs generator, this show came along.  (As it turns out, one of the other designers owns a Tele-Q, which is a very nicely engineered built-for-theater ring box).  At the moment, I am ringing the phone with a button, and it is wired through the scavenged remains of some of the bad XLR cable I had to pull down during load-in.

My intent -- in the next few days I may have another report -- is to replace the button with a relay, stick electronics in between that will take a MIDI event as a trigger, and run a, yes, physical phone from QLab along with the rest of the sound cues.
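
(A sketch of what the listening end of that might look like, assuming QLab fires a MIDI note, a Python script with the mido library catches it, and an Arduino with a relay hangs off a serial port.  The port names, note number, and one-byte serial protocol are all placeholders; the real rig will depend on whatever hardware I end up scrounging.)

```python
# Sketch of a MIDI-to-relay bridge: QLab fires a MIDI note, this script closes
# the relay that puts ring voltage on the phone line, and opens it again on
# note-off.  Port names, note number, and serial protocol are all assumptions.
import mido           # pip install mido python-rtmidi
import serial         # pip install pyserial

RING_NOTE = 60                                         # the note QLab sends for "phone rings"
relay = serial.Serial("/dev/tty.usbmodem1411", 9600)   # Arduino driving the relay

with mido.open_input("QLab MIDI Out") as port:         # port name as the OS reports it
    for msg in port:
        if msg.type == "note_on" and msg.note == RING_NOTE and msg.velocity > 0:
            relay.write(b"1")       # close relay: ring generator onto the line
        elif (msg.type == "note_off" and msg.note == RING_NOTE) or \
             (msg.type == "note_on" and msg.note == RING_NOTE and msg.velocity == 0):
            relay.write(b"0")       # open relay: stop ringing
```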
