It has been a while since I've posted anything about music.
Really, I haven't been doing much music of late. But looking back, the two dance sequences I wrote for a show last season offer some interesting lessons. The first one particularly: the "Dance of the Machine."
Normally, as a sound designer for musical theater you are not responsible for music. There will be a Music Director -- in smaller theaters, the same person will organize the music, write out parts if necessary, rehearse the cast, hire the orchestra, and lead from the pit. Often they are a pianist as well and will be the rehearsal pianist for music rehearsals and the lead keyboard in the pit.
You may sometimes help out a little, including such things as setting up and programming synthesizers. But generally the music comes from the score and goes through the Music Director and it isn't your job to break out parts or do vocal coaching or play piano.
The main exception is for productions using pre-recorded music. I've done quite a bit of chopping and looping and even some re-synthesis in MIDI for those. It can be an interesting challenge setting up a vamp in QLab (a fermata is much easier!).
The other exception is music-like material that may be in the sound effects you design. For instance, a designer friend of mine working on The Producers chose to make the doorbell for the home of the very fey director chime not with some Westminster thing, but with the first bar of "I Feel Pretty" from West Side Story.
And then there are those unusual circumstances. The context for the show I'm going to talk about: we had already added a great deal of incidental music, all of it expanded from musical material in the show. But these extended dance breaks were not in the original score. The Music Director had to write them out -- fortunately, he had a good pit and charts were sufficient -- but this was more work than he had been prepared to face, and eventually he put his foot down.
Leaving two key dances without written accompaniment.
At which point they turned to me, the sound designer, and asked for rhythmic sound poems to cover those gaps.
As it evolved, by the time we opened the band had stepped in to largely cover one (with what I felt were not entirely satisfying results), but they also lent just a little additional material to the first dance, which I felt blended wonderfully with the sound effect to make a hybrid that totally suited the feel of the show.
In any case.
My impression is that most classical dance music was written in a way not dissimilar to the old "Marvel Method" of comic books. The director/choreographer would have the idea, they would rough out what had to happen and how long it would be, the composer would then come up with the music, then the actual choreography would be done to that. With a fair amount of adjustment during the development and rehearsal process.
We actually did do something like this but on a compressed time scale that left both of us (me, and the choreographer) feeling very stressed.
Since I only had a week to work, I asked for the desired tempo and length. And since the dance was already partly developed, I asked how long each section was, and what the division of beats was. I got several different answers -- in part because the director's vision was not the choreographer's vision (they were talking past each other a bit), and the cast was too under-rehearsed in those dances to adhere to either.
In any case, there was just enough time for me to be able to see what they had before I sat down to write. So I watched. And taped the whole thing (on my aging digital camera).
At which point I had three pieces of information: the director's notes, the choreographer's notes, and the visual record of what the dancers currently thought they were doing. None of them matched. But, also, none of them entirely made sense.
So I created on paper a hybrid of the two sets of verbal notes that I thought would work. This sounds simpler than it was; it took ten hours and multiple pieces of paper, going over and over the notes and viewing the video and trying to break down what I saw and what I thought was supposed to be into sensible divisions.
My tentative breakdown after all of that was three sets of eight for a line of dancers that passed three items down the line like a machine, another set of eight over the transition into a different feel (with a big gesture happening in the middle), and a set of fast threes for the last sequence, to be vamped until the final flourish.
On paper, it looked okay. I sent a copy of my notes to the choreographer but got no reply. So I pressed on. Took the video of the actual dance and brought it into Cubase. Jiggled the tempo track until I got a rough line-up with the general pace of the recorded moves. Then edited the video track to take out the pauses and false starts and make it line up with the tempo track, with the hit points on bar lines.
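For the curious, the arithmetic behind that jiggling is simple: you're solving for the tempo that puts a given hit point on a bar line. A minimal sketch in C, with invented numbers (the real work was nudging the Cubase tempo track by ear and eye):

```c
#include <stdio.h>

/* Rough tempo math for lining up a video hit point with a bar line.
   All numbers are hypothetical. */
int main(void) {
    double hit_seconds   = 37.5;  /* hit point, seconds into the edited video */
    int    target_bar    = 17;    /* we want the hit on the downbeat of bar 17 */
    int    beats_per_bar = 4;     /* assume 4/4 throughout */

    double beats_elapsed = (target_bar - 1) * beats_per_bar;
    double bpm = beats_elapsed * 60.0 / hit_seconds;

    printf("Tempo to land bar %d at %.1f s: %.2f BPM\n",
           target_bar, hit_seconds, bpm);  /* 102.40 BPM here */
    return 0;
}
```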
Then started putting in sounds.
At this moment in the development of the sound of the show, I was using mostly sampled sounds. I wanted machinery with an organic feeling to it. So I used a lot of stuff like car doors being opened and bits of farm machinery and sloshing water and stuff that basically had a lot of complexity and life and was generally "softer" in feel -- less hard-edged, less distinct clanks and clinks.
About half the sounds I plugged into a software sampler so I could play them from a keyboard (basically, performing the sequence into Cubase). Others were inserted as individual audio events and tweaked into the right place. For the general feel of "machinery in motion" I was combining what had been originally snippets of one-shot sounds in ways so they blended into a loop that felt like a cyclic motion of machinery. A lot of this was just playing repetitive patterns on several keys until they gelled into something that sounded like a single motion.
Since I wasn't sure how much time the choreographer would have with the music, or how much time I'd have to edit, I was extremely conservative with meter and with the beat. Almost all of it was strongly accented on the quarter-notes, and most of it had clean eighth or sixteenth-note riffs playing as a constant clock tick. I also put in pick-up sounds before every major change.
(Pick-ups are when something changes in the music in the bar just before a vamp ends. It may be just a single extra bell in the percussion, but it tells the singer or dancer where they are if for some reason they've lost count.)
I also, after some back-and-forth (especially once I got the first sketches into dance rehearsal), cut it into three independent cues, with each going into a vamp that would be faded out under the new cue. This meant if the dancers got much, much faster setting up for the third part (which they did!) we could arrive where we were supposed to be musically just by hitting a button.
Then sitzprobe hit and I had another problem. The feel of the show was very 70's, with a lot of funk and a lot of synthesizer keyboard and processed guitar and bass sounds. And the look of the set, costumes, and props was also colorful and artificial. And my samples just didn't work right in that world.
I had to re-do the dance music for more of a synth feel. And I had no time left.
So, mostly I pulled out the clicking gear sounds that had been on the eights and re-patched the exact same notes to drums instead. Specifically, to 808 electronic drums from my software synth collections. I added electronic bass -- mostly from one freeware software synth I've used and loved before (The Hornet). And I ran the remaining samples through EQ with resonance, and tricks like ring modulator, to make them sound like synthesizers instead of samples.
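For anyone wondering what the ring modulator trick actually does: it multiplies the signal by a sine carrier, smearing the spectrum into sums and differences of the carrier and the source partials -- which reads as "synthesizer" rather than "recording." A minimal sketch in C; the carrier frequency is an arbitrary choice, not my actual settings:

```c
#include <math.h>

/* Ring modulation: multiply the input by a sine carrier. */
void ring_mod(float *buf, int num_samples, float sample_rate,
              float carrier_hz) {
    const float PI = 3.14159265f;
    for (int i = 0; i < num_samples; i++) {
        float t = (float)i / sample_rate;   /* time of this sample */
        buf[i] *= sinf(2.0f * PI * carrier_hz * t);
    }
}
```

Run over a sample buffer with, say, a 440 Hz carrier, even a car-door slam comes out sounding distinctly electronic.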
In retrospect, I wish I had taken the step of going into actual pitch -- into harmonic, even melodic development. The sequence still sounded "strange" being just unpitched (or, rather, being treated musically as unpitched rhythmic events). The rhythm, too, was just too simple. I wish I had dared to break it up more, with stronger internal divisions and even some poly-rhythmic elements.
And, as I said, the second dance sequence got essentially covered by the orchestra anyhow. But the first one worked. It was okay.
Thursday, September 27, 2012
You know how to whistle, don't you?
...you go online, do a Google search, a Google Shop search, try Musician's Friend and Lark in the Morning and even Amazon, and find nothing but the same damned fake-ass pot-metal shiny-shiny "Hey, I'm an Admiral!" display crap instead of an actual, playable bosun's pipe (or boatswain's whistle, if you prefer). Hell, half of them are 3" keychain models, even!
It's the old tragedy of the commons for the online world -- do a search for biographies of a 19th-century physicist and all you hit are fan page after fan page for some anime girl who was given the same name -- combined with the tragedy of the cheap instrument; for every decent ukulele out there, there are ten gift-shop plywood junk fakes. And at least four sobbing would-be uke players who blame themselves instead of the unplayable instrument.
The best options I have right now for a bosun's pipe that can actually play are vintage ones on eBay (pricey), and the one true US Navy standard model I've been able to find after two hours of searching. I'm a lot more tempted to put down thirty, forty bucks on one I'm sure will play (even if it isn't exactly the right look) than to deal with our actors trying to navigate an unplayable bit of gift-shop junk, finally giving up, and having it be replaced with a sound effect.
Yes, actors. So if you know your Broadway musicals, seeing the bosun's pipe will most likely tell you which show it is (it is one of the Holy thirteen).
And here's an exercise for you. Which Broadway musical is coming up next if you see the following props on the rehearsal table?
1) A cow's head and a trumpet. (Add a bra that lights up if you really want to give the game away).
2) A saddle, and a kaleidoscope with a hidden knife blade. (The saddle might be arriving later -- those are big, and usually borrowed).
Think of any other obvious ones?
Monday, September 24, 2012
I Sold My Synths Today, Oh Boy...
(To the tune of "I read the news today...")
Actually, I didn't sell them. I put them up on eBay, and I'm betting I'll have to relist at least once, and at the end of it I'll still own one or two of them.
I had a pretty good rack going at one point. All Roland rack-mount synths, topped by a Roland sampler keyboard. Plus one Korg piano module (the venerable P3). It was a bit of a maze of cables, what with MIDI daisy-chains and all the audio connections to my Mackie mixer and a couple extra insert reverbs. Even more mess when you add the Octopad and pedals.
It took a bit of time in OMS and other applications, too, getting all the patch names entered and the routing organized so everything showed up correctly. And adjusting the internal reverb and chorus settings during a song remained a bit of a pain, since that was all System Exclusive Message stuff.
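To give a flavor of what that System Exclusive fiddling involved, here is a sketch in C that builds a GS-style Roland "Data Set" message, checksum and all. The address shown (reverb level in the GS map) is just an example -- every module has its own address map and model ID, so you lived with the manual open:

```c
#include <stdio.h>

int main(void) {
    /* Address (40 01 33 = reverb level in GS) plus one data byte (64). */
    unsigned char addr_data[] = { 0x40, 0x01, 0x33, 0x40 };

    /* Roland checksum: low 7 bits of address + data must sum to zero. */
    int sum = 0;
    for (int i = 0; i < 4; i++) sum += addr_data[i];
    unsigned char checksum = (unsigned char)((128 - (sum % 128)) % 128);

    /* F0 41 <device> 42 12 <address> <data> <checksum> F7 */
    unsigned char msg[] = { 0xF0, 0x41, 0x10, 0x42, 0x12,
                            addr_data[0], addr_data[1], addr_data[2],
                            addr_data[3], checksum, 0xF7 };

    for (size_t i = 0; i < sizeof(msg); i++) printf("%02X ", msg[i]);
    printf("\n");
    return 0;
}
```

Now imagine hand-assembling one of those for every reverb tweak in every song.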
But, basically, once the rack was set up, I had most of the instruments I needed right at my fingertips.
This is still the advantage of hardware synths: you can audition patches in a fraction of the time. No waiting for stuff to stream off the hard disk into RAM.
And there was a subtler artistic advantage as well. Back in the rack days, megabytes of patch memory was considered big. This meant, generally, fewer samples spread further. And shorter samples as well, looped instead of left to play out the full development of the tone. So realism was less. But playability was higher.
So compare this to a real instrumentalist. They can put lots of character into their tone and inflection. This is what makes real instruments, well, real. But at the same time a skilled player can modulate her tone to blend with the instruments around her (or to seat correctly in the band/orchestral mix).
Whereas a synth doesn't have this skill. It plays the same kind of note all the time (unless you, the keyboardist/programmer/arranger, alter it). Which can be done, certainly, but takes more time. And often more CPU cycles. So the simpler, pared-down sounds actually made you more productive. The music was a bit less realistic but you could write it much faster -- as fast, really, as you could write for humans.
Oddly enough, my old rack was actually superior in polyphony as well. A couple of reasons. First is that my present computers aren't particularly fast. Another is that patch size -- and the CPU cycles necessary for data handling -- has gone up by orders of magnitude. A minor point is also that sequencing on MIDI hardware promotes patch changes; software synthesis is almost always done with a new instance for each new instrument, even if it will only play a single note.
I did push the polyphony on my rack, of course. One always does. I was doing orchestral stuff on it and I was doing it classically; instead of playing block chords into a "String Orchestra" patch, I'd play basically monophonic lines (and some double-stops) for each part; First Violins in two or more divisi, Second Violins, Violas, 'Cellos, Double Basses. So if a section had two simultaneous notes, I'd have two different instances of a patch that sounded like less than the full section, and each would play monophonically.
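A toy illustration of that bookkeeping, in C, with made-up channel numbers -- the point being one monophonic patch instance per simultaneous note, rather than block chords into a single patch:

```c
#include <stdio.h>

/* Assign each note of a simultaneous group to its own monophonic
   divisi instance on its own MIDI channel. Purely illustrative. */
void assign_divisi(const int *notes, int num_notes, int base_channel) {
    for (int i = 0; i < num_notes; i++) {
        printf("MIDI ch %d <- note %d (Violins I, div. %d)\n",
               base_channel + i, notes[i], i + 1);
    }
}

int main(void) {
    int double_stop[] = { 67, 74 };   /* G4 + D5 in the first violins */
    assign_divisi(double_stop, 2, 1); /* two divisi instances needed */
    return 0;
}
```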
And I was using fingered tremolo as well!
(One nice thing about patch switching is if I had some of the strings switch to pizz, I'd actually change patches on that MIDI channel and that dedicated sequencer track. Which added to the realism; you couldn't just have a couple of bursts of pizz in the middle of an arco section, but instead, you had to think about what the real players were doing. Same for, say, the trumpets picking up their mutes.)
A composer friend of mine settled on his own hardware rack (organized around a D-50) several years back. He's modified it a little over the years, but the huge production advantage to him has been that he knows the sounds and can go to them without a lot of time wasted auditioning patches (or worse yet, editing them!)
My direction was always different. I would hear the sound first in my head, and do what it took to approximate it with the tools I had. I did go back to a small collection of favorite patches over and over, but I would also do things like edit a D-110 handclap sample to make it higher pitched. Or route a flute to one of the special outputs so I could run it through a hardware reverb unit.
Going into software synthesis allowed me to move towards writing for smaller ensembles with much more control over the precise tone. When I do software synth now, most of the outputs are routed through various effects processors as well as contouring EQ and the like.
But it is much slower. And I miss those days of just being able to dial up my usual set of strings (I even saved this as a blank sequence) and start writing.
With the rack dismantled, many of the pieces I wrote in the past only exist now as audio files. It is too much work to try to adapt what I could do with the patches in the rack to what I have available on my (somewhat thinner and very much more eclectic) virtual rack. I have a few of the old ones in a folder on my DropBox right now.
But, really, I took up arranging music largely to get used to mixing, and now I have plenty of opportunities to mix actual live musicians. So I don't do anywhere near as much music, and most of that is for specific designs.
Friday, September 14, 2012
Off the BEAM
My niece really liked the robot. She wanted it after the show closed. Unfortunately the company has decided to hang on to it for a while.
So my first thought was; "Build her one."
But then the next thought, a much better one, was "Build one with her."
Which led to some rather complicated thinking about what that means. The easy trap is to do everything yourself, but let the kid paint it. An alternate route to a similar effect is to buy or make a kit and have the child assemble it according to instructions.
I find that belittling, not challenging. Not fun, either. But, really, the part that fascinates me -- as past entries in this blog should show -- is the developmental process. The intersection between design aesthetics and the impositions of the production process.
So I'm leaning towards a thought like, "Work through the design and production process of a robot with her." Which is to say; let her take an active role in deciding what kind of robot to build, and how it is to be built.
The big question in my mind right now is how much to attempt to teach or otherwise explore the methods of problem-solving themselves. How much to go into project planning, milestones, iterative problem-solving, and so forth.
And last night I worked a lecture on 21st Century Learning and it largely touched on those same questions. To wit, "How does one teach problem-solving?" With an emphasis on play, failure, group dynamics, and other explorative, iterative, creative methods.
To delve into my own primitive (but oft-used and comfortable) tool kit, the leading question is at what level to bring her into the parameterization.
It has been at the root of my own suite of problem-solving tools that problems are best solved at the appropriate level. Okay; this is getting too theoretical and I need to explain with a concrete example.
Back when we had a lot more renters at our multi-use space, I'd have someone ask me for a stapler. This would be the approximate dialog:
"Do you guys have a stapler we could use?"
"Yes. What do you need it for?"
"To put in staples."
"What are you stapling?"
"We need a stapler to put up our scenery."
"What kind of scenery and where is it getting stapled?"
"It is for our set. If you don't have a stapler we might be able to find one at the offices."
"We have a stapler. I'd just like to know what kind of material, and where it is going."
"It's this stuff," they point at some heavy velor drapes. "They're for our set."
"Okay. You don't want a stapler. First, staples won't hold that weight. Second, we don't allow you to staple stuff to our walls. We have pipes and drape clamps. I'll help you hang those drapes up with those."
So, the problem appeared to be a lack of a stapler, and the client attempted to solve the problem on that basis. The actual problem was hanging the drapes. A stapler was part of a proposed solution. When you moved out of the smaller box of "we need a stapler" to the surrounding box of "this drape needs to go up," there were better solutions in the larger box.
Around that box is another even bigger box containing the "What do we actually need for our set?" which might contain the information that, say, the drapes aren't actually needed at all!
Associated with this is one of my truisms; "If you can state the problem completely and concisely, you will often find you just stated the solution."
So the container of "Make her a robot" is the one that contains questions about what kind of a robot, what kinds of tools, what kinds of production methods, budget, etc. The container of the container of "Make her a robot" is the one that asks what the purpose of the project is -- the balance between play, pedagogy, and gift. And somewhere in that box and the super-box around it are questions about my role as uncle and whether she or I can spare the time to do something together, with all of our other life demands.
Well.
Assuming I'm going to involve her at the level of the container that contains the parameterization of "Make a robot with her," the first questions to put before her are design goal and selected production method.
Taking the last first: in a rough sense this splits into choosing a kit, choosing automated production methods, choosing that I do the heavy lifting (as far as fabbing goes), or -- and here the two goals truly entangle -- choosing a design goal that is appropriate for her to be largely involved in the fabbing.
On the over-arching pedagogical goals, each production method carries the potential for learn-able skills.
I happen to believe strongly -- if I was a teacher in any sort of design or industry I would apply this in my instruction -- that manual skills are foundation skills. That knowing how to solder, how to cut and shape metal, how to sculpt with the hands, how to paint with a brush all give you a mental framework by which the more modern, more automated methods (such as 3d printing) can be better understood. That this sort of gut-level, intimate familiarity with the strength and ductility of metal you get from carving into it with a hand file (and similar properties of other processes) allows you to better intuit what is happening at the cutting head of a CNC turret lathe.
So I am biased towards an excuse to learn how to bend metal, solder, write in C, paint, etc. But I also need to consider that some of these carry a potential for injury. Kids are very focused, and mentally tend to be better at working safe (or so says a previous lecture I attended). But there is a limit to their hand-eye coordination, and worse, there are two protective skills adults have generally learned.
These two skills are how to anticipate and work around your own tunnel vision -- how to predict that when you are getting frustrated trying to yank a just-desoldered component, you are liable to forget the hot soldering iron is right by your arm. And how to make a positive choice towards risk, or even towards injury, when necessary to avoid a greater injury; how to make the split-second value judgments that allow you to move quickly.
I've been training a young sound person who has not yet picked up the latter. On one show we worked, a sound effect started playing when it shouldn't have. It took him several seconds to find the right button to turn it off. My approach is to hit ALL the buttons. Because it doesn't matter if you de-selected the window, changed the cue name, lost your place, even crashed the program; because all of those are recoverable conditions. If you have several minutes before the next sound cue, do what it takes to stop the problem as quickly as possible...then go back and clean up.
Anyhow.
This being the future, and of course since high tech is always sexy, it also makes too much sense to consider introducing her to skills of programming, 3d modeling and printing, and other fab services; Ponoko for instance. These are skills she can directly apply towards a variety of projects in the future. Not only are various fabbing services becoming increasingly available to everyone, her other uncle already owns an eight-foot ShopBot!
As an aside; one of the enduring strengths of the modern Make movement, and the hacker-dom that came before it, and the Hams and Heathkit crew before that, is a form of domestication and colonization of what can otherwise be an alien world; the technology that surrounds us.
This isn't just high-tech, of course. Even the process of cooking, or of making clothing, can be a dread mystery, leaving a modern person with no more idea of how a cookie or a dress is made than a dog has of how food gets into those metal cans.
And of course not being able to do them. So all of these Maker movements are about unpacking the black box of consumer culture and saying "A dress is really just fabric and stitches assembled to a pattern." And giving the confidence -- not always the ability, the important part is the willingness to try -- to do these things oneself.
Which frees you from being a passive consumer. When you can look at an artifact and say "This one is built shoddy -- I know why they cut corners here, but I could build a better one," then you have more ability to pull aside the curtains of advertising claims and marketing hype. And make better choices for yourself.
And you gain the belief that you can understand. That whatever it is, that you can apply your own intellectual tools and your own experiments and experiences and learn more than is passively accessible on the surface. And that you don't have to accept either the products or the answers you are given; given sufficient desire or need, you can make your own.
Deep philosophy for building a robot with a kid, eh?
So if you are thinking about learning about the black boxes of the current world, there are several artifacts of our present technology that are both mysterious and ubiquitous. The LED is a minor example. The microprocessor a major one. More and more radio is around us: WiFi, RFID tags, the cellular network. Sensors are becoming more common as well, from IR proximity to capacitive touch sensors to facial recognition software. And there are reams of scannable information just drifting out there, from RSS feeds to time signals to, of course, the GPS signals beamed to most locations with a view of the sky.
These are all excellent black boxes a robot project is a good excuse to unpack a little. There's also the more general conceptions of energy and materials; working on something like a robot gives you a better intuition of cost and density of energy in various forms, for instance. Especially a solar-powered robot, where you can make a direct and intimate connection between sunlight and the amount of movement a robot can generate from the time it spends basking in it.
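A back-of-envelope version of that intuition, sketched in C with entirely hypothetical numbers for a BEAM-style "solar engine" (a small panel trickle-charges a capacitor; the robot spends it in one motor burst):

```c
#include <stdio.h>

int main(void) {
    double panel_watts = 0.12;  /* ~30 mA at 4 V in decent light */
    double cap_farads  = 0.33;  /* storage capacitor */
    double cap_volts   = 4.0;   /* voltage at which the motor fires */

    double stored_joules = 0.5 * cap_farads * cap_volts * cap_volts;
    double bask_seconds  = stored_joules / panel_watts; /* losses ignored */

    printf("Energy per burst:  %.2f J\n", stored_joules);  /* 2.64 J */
    printf("Basking per burst: ~%.0f s\n", bask_seconds);  /* ~22 s  */
    return 0;
}
```

About twenty seconds of sunbathing for every twitch of motion: the connection gets very direct indeed.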
Of course, as a theater person I also have tremendous interest in traditional crafts skills. Viewed as an art project, a robot could involve sculpting, casting, welding, sheet-metal cutting, vacuum-forming, and so forth. The imagination is the only limit... add basic sewing skills and make yourself a Tribble for a housing. Now what can you stick inside a featureless ball of fur that will interact in a meaningful way with the outside world?
And this mention of crafts segues to the BEAM Robotics movement. One of the keystones of BEAM, as well as a basic part of theatrical crafts and Making in general, is the skill of using found objects. Of how to recover what was junk, re-build it, re-purpose it. Engineering, too, is a discipline where you take pre-tested modules and re-apply them for a new task. Engineers don't go around re-inventing wheels -- not unless the wheel isn't optimal enough for the current problem!
The BEAM Robotics movement holds up as desirable qualities the re-use of techno-scrap, minimalist, function-oriented design, and solar power. What I see as the major philosophical divide with Make philosophy is that when the Make aesthetic re-purposes existing technology, it does so to make use of the packaged power of internal electronics. Make likes to hack into things with a fair amount of computational power, figure out what language they speak, and put them to new tasks.
So a Make approach to a robot would tend towards solving things at a software level; of taking large functional blocks like accelerometer chips, plugging them into a CPU that contains internal models (or rather, state variables) representing its environment, and then executes programmed behavior according to those internal states.
A BEAM robot, instead, has no central nervous system at all, and all complex behaviors are emergent from simple rules. It doesn't model the environment or carry state variables. It simply reacts directly, sensor-to-motor, to stimuli.
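To make the contrast concrete, here is a toy Arduino-flavor sketch of both styles. Pin numbers and thresholds are invented, and note that a real BEAM robot would do the second version in analog hardware, with no CPU at all:

```c
int mood = 0;                         /* internal state, Make-style */

void setup() {
  pinMode(3, OUTPUT);                 /* hypothetical motor/LED pin */
}

/* Make-style: sensor -> state variable -> programmed behavior. */
void makeStyleStep() {
  int light = analogRead(A0);         /* read a sensor */
  mood += (light > 600) ? 1 : -1;     /* update the internal model */
  if (mood > 10) {                    /* behavior depends on the state, */
    digitalWrite(3, HIGH);            /* not just the current input */
    delay(200);
    digitalWrite(3, LOW);
    mood = 0;
  }
}

/* BEAM-style: no model, no state -- stimulus maps straight to output. */
void beamStyleStep() {
  analogWrite(3, analogRead(A0) / 4); /* sensor drives motor directly */
}

void loop() {
  makeStyleStep();                    /* swap in beamStyleStep() to compare */
}
```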
Of course the question above the question is what we mean by robot. Do we mean to build a remote-controlled device? Or one that is fully autonomous? Or one that is a hybrid?
The "robot" I built for "Willy Wonka" had no autonomy at all. The closest it got to emergent behavior was some randomness in how well it responded to the controls.
The "Square Candies That Look Round" I built for the same show had complete autonomy and were interactive, but they were tightly programmed around a small number of state variables. Random number generation gave them some "life," but only a small chance of true emergent behavior.
I am fascinated by the possibilities of devices with true emergent potential, but I also recognize that the behavior that emerges most of the time is getting stuck in a corner. Although this is expected and even sometimes interesting to a cyberneticist, I am not sure the best project to go into with a young person is one in which there is a large probability that the first thing the newly-completed robot will do is... sit there doing nothing.
(Which my candies did over a long period of software writing. Plus they seemed to see ghosts at one point -- turned out the waxed paper hung down just low enough to give false reflections to the IR sensor.)
(There's another interesting bit about the Square Candies That Look Round. Most of the people that came up to them proceeded to engage in primitive experiment to try to determine the parameters of response. Which is to say; they'd wave at them to see if the candies "noticed." Since the state variables were hidden, complex, and ruled by a random number generator in addition to the IR proximity sensor, this proved difficult. I don't think I ever witnessed someone independently working out the behavior rules for the candies for themselves.)
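Something in the spirit of the candies' code -- not the actual code; pins, thresholds, and timing are invented here -- shows why the rules were so hard to work out: a hidden state variable and a random number generator sit between the sensor and the response:

```c
int interest = 0;                      /* hidden state variable */

void setup() {
  pinMode(9, OUTPUT);                  /* hypothetical "react" output */
  randomSeed(analogRead(A5));          /* seed from a floating pin */
}

void loop() {
  int proximity = analogRead(A0);      /* IR proximity sensor */

  if (proximity > 500) interest += 2;  /* someone is close */
  else if (interest > 0) interest--;   /* attention decays */

  /* React only sometimes, even when "interested": the RNG hides
     the rules from anyone waving at the sensor. */
  if (interest > 8 && random(100) < 30) {
    digitalWrite(9, HIGH);
    delay(300);
    digitalWrite(9, LOW);
    interest = 0;
  }
  delay(50);
}
```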
So where does all this leave me?
Pretty much, where I need to talk to her. Does she want to do the project? What kind of time and emotional investment would she be willing to put into it?
Is her interest more towards aesthetics (as in, what does the shell look like?), functionality (as in, a robot that can be driven, or an arm that can pick stuff up), or cybernetics (as in, autonomy, state models, emergent behavior, interactivity)?
And for that matter, what is her internal world model? How much does she grasp yet of what electronics are capable of, what intelligence may or may not be in simple devices?
Except I guess I do still have a question for me, before the next time I see her. And that is; is there some kind of trial balloon I can float that is a less-elaborate exploration of one of these processes? Such as, a simple "robot" that is already partially functional but that she can take and do something with?
I am shying away from the idea of "here's the robot, now paint it how you like it." Because that is a use of crafts skills that doesn't engage the underlying "robot" in any meaningful way. You might as well paint a vase. Same for any other cosmetic -- trivial -- customizing.
Unless of course that is what attracted her to the robot I built. The shape, the design. In which case, sculptural methods are the thing we should explore together. So this is more like a prop-building project than a robot-building project. And in that case it might be appropriate to say "You build the box, I'll stick the electronics in it."
On the gripping hand, however, maybe those basics of control and interaction and even emergence are what interest her. In which case, I have to ask if she is ready to be handed a functional chassis of some sort and the tools to work on the software.
I know her siblings -- both a few years older, both extremely comfortable with mathematics and with logical reasoning -- could handle code if dropped into it. I do not know if this child is up to being confronted with raw C in its native habitat (or even tame C in the petting zoo of the Arduino IDE). But something about graphic-oriented beginner languages (like Etoys) bugs me. Probably they are designed towards learning transferable core concepts of programming. But despite the uphill battle of dealing with the syntax of a less abstracted language, I think it takes less time to go directly to a fully functional language.
Maybe... and this is getting ever further afield from any concept one might think of as "robot"... the thing to try her on is as basic as a blinky and a laptop with a stripped-down, bare-bones blink program on it. And start right there, with the "Hello world" of the embedded computing world; with messing around with variables and simple loops to change the blink.
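Which is about as minimal as embedded code gets. The classic blink, with the delay pulled out into a variable so there is something immediate to change -- edit one number, re-upload, and the behavior visibly changes:

```c
const int LED_PIN = 13;   /* on-board LED on most Arduinos */
int blink_ms = 500;       /* the variable to play with */

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_PIN, HIGH);
  delay(blink_ms);
  digitalWrite(LED_PIN, LOW);
  delay(blink_ms);
}
```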
Hrm. I do have that Arduino-compatible BlinkM lying around....
Wednesday, September 12, 2012
Dissection of a Sound Design
The following is a detailed breakdown of all the sound design elements in a show from last season. I hope that touching on all the elements will help the young designer understand the context and relative weighting of the designer's different responsibilities.
At this particular house, I mix my own shows. This creates a small amount of conflict between the different roles of FOH (Front Of House mixer) and Sound Designer, particularly during the height of Tech Week. In most shows I've had someone to "push the button" to run sound effects, but I've rarely had a skilled A2 (Audio Assistant) who I could task with anything more complex.
The building seats 350. It is a converted church, rectangular in shape with a gently sloping seating area and a high raised stage. The installed sound system is a Meyer design (and contractor install), consisting of a pair of mains flanking the stage, a second pair of speakers 3/5 of the way towards the back of the house, subwoofers (currently in the orchestra pit) and a pair of front fills hung high over the front edge of the stage. There is a Galileo processor running all these speakers (corrective EQ, delay, and relative mix) and that in turn is fed from the FOH desk; a Yamaha LS9 digital board.
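As an aside on what those Galileo delay settings are doing: the second pair of speakers has to be time-aligned so its sound appears to come from the stage rather than from over your shoulder. The arithmetic is just distance over the speed of sound (distances here are invented):

```c
#include <stdio.h>

int main(void) {
    double speed_of_sound = 343.0; /* m/s at room temperature */
    double extra_meters   = 9.0;   /* delay pair's distance behind mains */

    double delay_ms = extra_meters / speed_of_sound * 1000.0;
    printf("Delay for rear pair: %.1f ms\n", delay_ms); /* ~26.2 ms */
    return 0;
}
```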
It is a good system for the space. If all we were doing is ballet and live music, we could just send a stereo pair back to the Galileo and have it locked into a carefully tuned preset. As it is, we are largely doing musical theater -- which places a combination of reinforced and natural voices against a live band.
Two other peculiarities come with the kinds of musicals we do. We tend towards a large mixed-age cast, meaning there are always choruses that are not on microphone. And we tend to place the band on stage with the action. Both of these make getting the vocal reinforcement right, and getting the monitor system right, challenging. And as a result I de-prioritize ordinary sound effects -- I do them, and I do elaborate ones, but they have to take second priority.
To overview: the responsibility of the Sound Designer is all sound elements of a production. This is usually broken down into:
Vocal Reinforcement (for live musicals, this is mostly wireless microphones).
Orchestral Reinforcement (for some shows, replace this with the cuing of pre-recorded backing tracks).
Performer Monitors (audio and visual links between musicians and singers).
Backstage Monitors and Communications (intercom system, Green Room monitors).
Sound Effects.
In many houses, the Green Room monitor, and the headset system for crew, will already be in the building and you won't have to worry about it. But you can't count on this!
It is also your concern when there are sources of noise on stage -- squeaky platforms, noisy air conditioning, street noise through an open door. You won't often have the authority to do anything about these, however!
SOUND EFFECTS
The show under discussion is a (relatively) new musical by the name of "Lucky Duck." I'll discuss sound effects first; they may be lower in priority than getting the vocals right, but they are fun.
"Lucky Duck" joins such shows as "How To Succeed in Business" and "The Will Roger's Follies" by having a pre-recorded performance that is central to the action. In "Lucky Duck" the NARRATOR interacts with the cast. In such cases, getting those VO (voice-over) sessions in the can is utmost priority -- you want them in rehearsal, weeks before the show even goes into Tech, so the actors can work with them.
Fortunately, in this case the NARRATOR was played by a cast member. So he did the voice overs live through rehearsals, and when we felt the performance had matured enough, we recorded them.
So the very first sound effect was a VO recording session. I've talked about these before. You want a space with good sound quality to record in. Not so much quiet, as one that lacks distracting reflections that are difficult to clean from the audio. Extraneous noise is easier to remove than the constant reverb tail of too live a room. And with a strong vocal performer, even a fair amount of leakage of general noise can usually be compensated for. So...look for a room with soft walls. Or a big room. Failing the above, find a cluttered room; the clutter will break up the reflections and make for a softer, more controlled room tone.
Make the actor comfortable and have time to work. Have the director present -- both for their artistic input, and for their skills in getting a good performance out of an actor. Print out the material to be recorded, with large type and generous margins. There are a few other tricks...like green apple slices, which I've never used... but you get the drift.
In this case, what the actor had been doing in rehearsal was interesting, but didn't work as a recording. It was very shouty -- a strong projecting voice for someone addressing a crowd, but the wrong feel for an omniscient story-teller looking over the action and commenting on it. So we worked at getting him to speak softly but with power. The final mic position was about 6", a large-diaphragm condenser suspended just above his forehead looking down.
When we finished the sessions I had almost three complete takes of all the material, plus a lot of false starts. When I went into cutting takes, I ended up making hybrid cues for two of the more critical ones, splicing in phrases from two or more different performances. I did a very small amount of slicing: taking out extra pauses, deleting some breath noises (but not all; I needed that human presence), and of course editing out several plosives. Then hand-normalizing, EQ, and gentle compression.
The same actor also read a different pre-recorded vocal, the "Quakerdome" announcement, which I gave the "Monster Trucks" treatment with delay, heavy EQ, and sub-bass synthesizer ("Sunday! Sunday! Sunday!")
For this show I didn't do any Foley. Well, I call it Foley, but technically the process of Foley Walking is the real-time performance of small noises (feet and clothing mostly) to a chunk of film. What I do is mostly recording bits and pieces of me fumbling around with various props...walking, dropping boxes, playing with leaves, etc., which I then process to make effects. In this show there were, as far as I remember, none of these.
The other effects came mostly from library sounds, plus new sounds purchased from Sound Dogs dot com. I wanted a European two-tone siren for a different feel to the urban scenes, for instance.
The sounds that were the most fun were the chop-socky effects. When DRAKE comes to the rescue late in the second act the actor improvised -- hilariously -- a bunch of wacky-looking martial arts moves. To make it even more ridiculous I pulled a bunch of punches and kicks and swooshes and assigned them to a MIDI keyboard -- and did my best to Foley his live performance every night.
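(In my case a sampler program did the actual playback, but the trigger layer underneath this trick is nothing exotic: a note-to-sample lookup. A sketch of the logic, assuming the RtMidi library and invented file names:

// Map MIDI notes from the keyboard to effect samples (file names invented).
// A real version would hand the file to an audio engine instead of printing,
// and would sleep between polls instead of spinning.
#include "RtMidi.h"
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
  std::map<int, std::string> samples = {
    {60, "punch_dry.wav"},     // middle C: a short chop-socky punch
    {62, "kick_thud.wav"},
    {64, "long_swoosh.wav"},   // the rope entrance
  };
  RtMidiIn midiin;
  midiin.openPort(0);                      // first attached MIDI device
  std::vector<unsigned char> msg;
  while (true) {
    midiin.getMessage(&msg);               // leaves msg empty if nothing waiting
    if (msg.size() == 3 && (msg[0] & 0xF0) == 0x90 && msg[2] > 0) {
      auto hit = samples.find(msg[1]);     // note-on: byte 1 is the note number
      if (hit != samples.end())
        std::cout << "trigger: " << hit->second << std::endl;
    }
  }
}

The hard part, of course, is not the lookup -- it is landing the punch sound on the punch.)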
Usually my best was not enough, but it worked well enough to be funny. The effects themselves? I ended up going to YouTube, where I found some collections of the proper badly-dubbed 1970's Hong Kong import type hits, chopped them and cleaned them and so on. I also assembled a long swoosh for when he comes in on a rope; this included a swoosh from a YouTube video of a Foley session from "Kung Fu Panda" and a fluttering sound from a small sailboat (courtesy of the BBC sound effects library).
I find sound effects fall (though not always neatly) into three general categories:
Ambiance (sounds of the background environment, which play under the action)
Spot Effects (which happen at a specific moment in the action, such as a gunshot)
Transitional Effects (which happen between scenes)
For this show, there were two or three ambiances. The "Scary Forest," the "New Duck City" urban space (trolley cars, traffic, distant sirens), and the "Sewers" (various drip sounds and some steam and a little water trickle, all with delay effects).
The forest opened up, for me, the Goofy Question. This is a basic problem for any setting that includes anthropomorphic animals (in "Lucky Duck," ducks, wolves, coyotes, and at least one armadillo). Goofy is a dog. Pluto is also a dog -- but Pluto walks on all fours, wears a leash, and doesn't talk. So what is Goofy?! Maybe the best general answer is to assume settings like this, like Narnia, have animals and Animals. And that explains why a duck can eat foie gras (shudder!)
I'd previously done "The Jungle Book," and "Seussical," and followed this show with "Honk!" and "Click Clack Moo," and I determined early on that no "animal" in the show would make a non-human sound. I'd mixed "Narnia," earlier -- different designer -- and he also made that choice. In that particular theater, we don't hide the theatrical nature of the performance and whenever possible we have the performers do what needs to be done. And that includes making animal noises.
But that meant I really wasn't free to have howling wolves or lion snarls or anything like that in the Scary Forest. And rustling leaves just wasn't enough. And I couldn't even use birdsong, because most of my cast were birds! So in the end I added some insects and snakes. Because as far as I could tell, neither of those branches of the animal kingdom was represented by actual cast members in the show.
There were a couple dozen spot effects but I'm only going to mention a few. There was the wolf trap being opened (a snippet from a rusty gate sound). The "Sorry!" buzzer for the talent show (which was a stock buzzer sound re-pitched and shortened). Actually...the buzzer is what made me stick an XBee into a Staples "Easy Button," and by the end of the run it could have been a practical sound effect (as in, the actor would be triggering it from stage). We stayed with making it a traditional, visual cue however. The zap sound for the invisible fence was a combination of two different "zap" sounds, one short and one longer snippet from a Jacob's Ladder effect.
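(For the curious: the electronics for that practical-trigger idea are nearly trivial. A hypothetical sketch, assuming an Arduino-class board stuffed into the button with the XBee hanging off its serial pins:

// Hypothetical "Easy Button" trigger: read the button, squirt one byte
// out through the XBee, which simply behaves as a wireless serial port.
const int BUTTON_PIN = 2;   // button contact wired to ground; use the internal pull-up
bool wasPressed = false;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  Serial.begin(9600);       // XBee on the hardware serial pins
}

void loop() {
  bool pressed = (digitalRead(BUTTON_PIN) == LOW);
  if (pressed && !wasPressed) {
    Serial.write('B');      // 'B' for buzzer; the receiving laptop fires the cue
    delay(50);              // crude debounce
  }
  wasPressed = pressed;
}

On the laptop end, anything that can watch a serial port and fire a cue -- a little helper script, or a serial-to-MIDI bridge into the playback software -- closes the loop.)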
I tend to program ambiances so they start loud to establish, then fade down. Some of these fades were called cues. Some were built into the sound effect. Most were programmed into auto-follow cues in QLab. Once the timing had been tweaked during tech rehearsal, they played themselves automatically.
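(In QLab terms -- my numbers here are invented -- a typical one of those chains is an ambiance cue set to loop, continuing automatically into a Fade cue with a pre-wait:

Cue 7    "Scary Forest" -- Audio cue, looping, starts at the establish level
Cue 7.1  Fade cue targeting Cue 7 -- 10 second pre-wait, then down 8 dB over 15 seconds

Tweak the pre-wait and fade times during tech, and from then on a single GO does the whole thing.)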
For this show, one laptop up in the booth ran all the "Called" cues. Actually, the Stage Manager was pressing the button himself so he didn't actually have to "call" them. A second laptop was connected to a MIDI keyboard and from there I ran improvised sounds such as the Kung Fu stuff and some camera and flash camera stuff for the big fashion show.
An alternate arrangement I've done in the past is to have at least four channels routed from the primary sound effects computer; one pair goes to spot effects, the other goes to ambiances. This gives me the ability to mix the ambiance effects into the total sound picture from the FOH position without messing up the volume of the spot effects. Another trick I've done is have pre-recorded vocals show up on the main vocal bus so they can be mixed and placed together with the live vocals.
In this particular show the sound effects were simple enough, and the staging they hung on was developed enough (that is, I didn't have to wait for set pieces to be delivered or scenes to be blocked before creating them), that they were essentially delivered early in Tech Week and I could concentrate on other responsibilities.
Sound effects are exquisitely sensitive to the actual playback environment. What the orchestra is doing, what kinds of noise the scenery is making, the timing of the action on the actual delivered set or with the actual production prop, and of course the specifics of speaker placement and system tuning and the acoustic environment.
I bring the laptop with all the material on it to the building. For this show I had the time to set up in the middle of the house and play back sounds from there as I adjusted them to the space. The Scary Forest cue took more time than that, because the sounds played differently under the actual scene than they had on a bare stage in the early afternoon with the lighting people doing a focus. So I had to take it back and adjust it several times.
I was never completely happy with many of the cues, but they worked well enough. And with it loaded on to a laptop, I could move on.
ORCHESTRAL REINFORCEMENT
This was one of the most easy-going pits I'd worked with in this space. Many music directors are, I am convinced, half-deaf and they usually demand stomping monitor levels to be able to hear the vocals over the keyboard amp that they also turn up to ear-splitting levels.
Volume wars in most spaces start with the drums. It takes a good drum player to play softly, and very few of even the good players like to play softly. So the drums are too loud already, and they get into the vocal mics, and they make the audience complain. But that isn't the worst. Since the drums are so loud, the bass player has to turn up his cab in order to hear himself play. And whereas a nice focused DI sound can be tight and musical throughout the room, the almost completely omnidirectional sound of a bass cab turned to eleven saturates and permeates the space with a mushy, undefined low-end rumble that completely destroys any hope of getting an intelligible mix out of it.
And with bass and drums committing murder in his ears, the keyboard player has to crank up his own monitor amps in order that HE can hear himself. And when he is finally comfortable, and the bass player is happy, the drummer suddenly realizes he can hear the other guys again and he doesn't have to lay back in order to figure out where he is in the music. So he gets louder. And the wars go around again.
As hearing fatigue sets in, each player needs more and more volume just to maintain bare audibility. And their monitors -- with an unfocused, bassy, echoing sound -- are washing over the entire audience with sonic mush. And as the poor FOH, the only real choice you have is to crank the reinforcement to scary levels just to put some definition back into the sound.
And that's before you add the vocals!
For this show, fortunately, I had a relaxed keyboard player who could handle being able to hear less than he'd like. And a bass player who was happy enough just to use a monitor we supplied, set up on a chair right in front of his face, and not blast everyone with his own cab. And a drummer...well, one of our drummers was more restrained than the other. But we basically made it through anyhow.
The most important element in the pit is a send from the vocal bus to a monitor -- I use a tiny "Jolly 5a" from FBT set up on a mic stand to get it as close as possible -- that the conductor can use to hear the people on stage. Second after that is a send from the keyboard -- and sometimes bass and kick -- to front fill and side fill monitors for the actors. That way, the actors on stage can hear equal levels of the music they are trying to sing to, regardless of whether they stand next to the band or in the corners of the stage.
My choice for years has been low-profile monitor wedges placed along the front edge of the stage. I use my FBTs a lot because of their extra-wide pattern. I also fill from the wings; generally behind the second set of wings pointed in and slightly upstage.
When the monitors on the stage become so loud they are a major part of what the audience hears, I send monitors a full band mix instead of just keyboards. This way, changing the monitor level doesn't unbalance the orchestra mix.
One issue with orchestral reinforcement in a smaller space like this is you don't have a second mixer to deal with the orchestra. At the FOH you are already dealing with 16-24 channels of wireless microphone, plus a few extras, and trying to actively mix the band is not really an option. Also, in the theater I have been describing, our LS9-32 has no add-on cards or digital snake and those 32 input channels get burned up quick between wireless mics, sound effect playback, and extras like talk-back microphone.
The reality I've found in most shows there is that I'm hard-pressed to spare even six input channels for the pit. In addition, I make a lot of use of the custom fader layer in order to have fingertip control over mix buses, so those six channels of band have to be thrown onto another layer -- which means they aren't at my fingertips during the show.
On several previous shows I've used a second mixer to create a band submix. The designer for "Annie" provided me with a complete 24-channel Brit board with a rack-mount compressor and effects box to go with it. I've used my own much smaller Mackie to the same effect; on one show I added two more mic-level channels to the four already there via an "Audio buddy" pre-amp and used that to create a drum sub-mix.
For the punk-rock band we had in one show I went all out and put mics on kick, snare, hat, cymbals, and low tom, and I submixed those on a laptop computer running Cubase. And every night I was terrified the computer would crash on me in the middle of the show.
These days I'm tending towards kick and overhead. With a four-piece band -- such as for "Lucky Duck" -- it knocks down to keyboard (DI or direct out from the keyboard amp), bass, kick and overhead, and wind. This particular show had a multi-wind player. I set up an overhead and a low sax/clarinet mic, but discovered during the run the overhead was the better sound for everything. I flipped that fader to the top layer of the LS9 because I was riding it a lot during the show. The rest of the band, however, I could leave alone.
This means the default in that building is 6 inputs from the band. I run a 12x snake to them, and I dedicate 1-8 for inputs and 9-12 for monitors. 9 is the "Con-Mon"; vocals for the conductor's monitor. 10 is general monitor for the rest of the band with 11 and 12 for anyone that really needs their own mix.
Except that more often than not, I'll stick a mixer down in the pit with them and, by using the AUXs, the conductor can adjust the mixes sent to two different players while also maintaining his own mix of keyboard, vocals, and any band elements he needs to hear more of (usually none!)
The big downside to this method is that when volume wars start, the conductor can crank their own vocal monitor as much as they like...ignorant of the very real feedback limits the mix sent to the house is up against. I've had the conductor's monitor end up so high IT starts feeding back.
Bass often ends up on a mic instead of a DI, not because it sounds better, but because I've had so many bass players in the past with hideous ground loop problems. They also have a habit of unplugging 1.5 seconds after the last note, and I can't always get to the mute in time!
I've really fallen out of love with close-miking snare, which has never quite sounded right even with heavy gating, compression, and corrective EQ, plus a bit of room reverb. The best snare sound I ever got in that house came from a saxophone mic: four feet away from the snare and pointed 180 degrees away from it! For kick, I've been very happy with my $60 Gear One, stuck just slightly inside the vent when they have one. You get more of a papery sound if there is no vent in the front head. I've tried using a short boom stand to get a mic up tight to the beater head, but musical pits are cramped and hard to access, and without an A2 you are at the mercy of wherever the mics got kicked to as the band wriggled into the pit four minutes before the downbeat.
Winds and strings speak best with some air. The most natural sound is a couple of feet away. In a cramped pit, though, you get too much bleed (especially drums), so you end up compromising. Also, winds are always multi-instrumentalists in a pit orchestra, so you don't have the luxury of getting a close-mic in exactly the right spot on each different instrument. Heck -- often they are switching fast enough to chip a tooth (real story!) So set up a good condenser around six inches to a foot, depending on how much that particular player jumps around.
And those are the basics. As few mics as possible, set so they don't have to be tightly positioned every night but have some slop in them, and sub-mix them so you aren't staring at those faders in addition to all the wireless.
(Actually, I think I confused some of this with the pit for "Honk," which shared the set. The pit for "Lucky Duck" was a bit louder, and sometimes I had to fight to get the vocals up over them. But they were still a pleasure to work with.)
VOCAL REINFORCEMENT
The theater owns 18 channels of Sennheiser "Evolutions." After that, it is a few units of Sennie G2's, and some Shure SLX we mostly keep around as hand-helds. Lately I've been loaning some of my own seven channels of Sennie to bring us up to 20-22 working channels on a typical show.
"Lucky Duck" actually has relatively few named roles, and even fewer players (there's a fair amount of doubling.) There are only a few chorus numbers. At this company, we almost always include a younger cast; they are treated as a separate chorus and brought in for specific scenes. They are almost never on mic, however, one of them did get a mic for a few lines of dialog in this show.
Unlike on many shows, there weren't any major issues in finding places to hide the mic packs while still allowing the actors to do the various physical things we often ask of them. The worst case was probably SERENA's Act II dress, the scooping back of which showed off the dangling mic cord to good advantage (there's only so much surgical tape an actress can put up with).
My default mic position is forehead. Because of animal costumes and hats we had to move to cheek for a number of the cast, including essentially all of the male cast. Also, our SERENA had the pipes, but to get that intimate pop sound we chose a B6, with the flexible wire boom that allows it to be placed close to the lips, Madonna style.
Because these songs are very mic'd and very pop we were scared of losing a mic on stage. So we gave the talk-back mic to the drummer so he could hand it up in case of a body mic failure! We never actually executed this safety but it was fun knowing we had it.
In many shows I've had microphones set up for an off-stage chorus. Not this one. I've also sometimes had floor mics or spot mics for an un-mic'd chorus. Again, not this one; this was a very simple show wireless-wise.
The Meyer install is very nice, but there is a major flaw when dealing with a live band. And that is taper. Since instruments like drums are acoustic, they are naturally loud for the front rows, and softer for the back rows. If you were to leave the system settings flat, with the rear delays covering the back part of the house, the keyboard would be softer than the drums for people in the front rows, and louder than the drums for people in the back rows.
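(Put rough numbers on it -- mine are invented, but the shape is right. A roughly point-source instrument falls off about 6 dB for every doubling of distance:

// Inverse-square back-of-envelope for the taper problem.
// The distances are assumptions; the 6 dB-per-doubling shape is the point.
#include <cmath>
#include <cstdio>

int main() {
  const double frontRow = 3.0;    // meters from the drum kit to the front row (assumed)
  const double backRow  = 24.0;   // meters to the back row (assumed)
  double dropDb = 20.0 * std::log10(backRow / frontRow);
  std::printf("acoustic drop, front to back: %.1f dB\n", dropDb);   // ~18 dB
}

A flat system plays the keyboard at the same level everywhere, so the keyboard-to-drums balance can shift by that whole 18 dB between the first row and the last.)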
So I've set up custom programs for a slight taper to the system. Also, I've chosen not to send vocals to the subwoofers. And lastly, I use some of the house system for sound effects playback -- which means I need access to specific speakers outside of the general programming.
The optimal solution in that building turns out to be the non-simple one. There are six inputs to the Galileo speaker processor, which in turn is feeding 8-10 speakers (depending on whether we use the current center cluster, which is built from speakers re-purposed from the original install).
So what gets snaked out to the Galileo are six of the omni outputs on the LS9. The Galileo splits them into several groups; right house, left house, sub, etc. In the case of the subs, for instance, the two boxes are being used in an end-fire or semi-cardioid array; one speaker flipped, inverted, and delayed so the sound is boosted towards the front but experiences destructive interference to the back and sides, thus lowering the volume there. To get that effect I'm using two Galileo outputs and the internal processing, but I'm feeding it with only one omni from the LS9.
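(The delay for that trick is nothing more mysterious than the acoustic travel time across the box spacing. A sketch, with an assumed one-meter spacing:

// Delay for a cardioid/end-fire sub pair: travel time across the spacing.
#include <cstdio>

int main() {
  const double spacing_m = 1.0;   // front-to-back box spacing in meters (assumed)
  const double c = 343.0;         // speed of sound, m/s, at room temperature
  std::printf("delay: %.2f ms\n", spacing_m / c * 1000.0);   // ~2.92 ms
}

That flip-plus-delay is what makes the two boxes cancel behind the array while mostly reinforcing out front.)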
On the LS9 side, the mono bus and the stereo bus are each dialed to a different set of omnis, to give a slightly different sound; the mono bus functions as the vocal bus and has no sub in it, for instance. However, to make it possible to send sound directly to the house system bypassing those two buses, mix channels 9-12 are patched on a one-to-one basis to the matrix channels driving the omni outs. That makes it possible for any input channel to behave like an independent mix bus and send to the house system itself.
(Well, actually, since several of the mix channels are taken up with monitor mixes, and several are in use for the internal effects racks, the special output buses are combinations of different useful Galileo inputs. And there is always a compromise between having access to a specific speaker for one specific sound effect, and being able to properly tailor the general house sound as fed from the sound effects computer).
Each speaker is delayed slightly to make the impulses arrive together at the audience. Each is also EQ'd a little to the room. But on top of that, the LS9 output channels are set for an overall delay of about 20 milliseconds -- roughly seven meters of acoustic travel. This makes the reinforced vocals arrive slightly after the impulse from the live actor, and via the Precedence Effect focuses the apparent sound source on the actor instead of up in the flies.
On the vocal bus is strapped a graphic EQ for notching out the worst peak in the room (otherwise I try to run the bus, and the mics, without mucking up the EQ too much in search of every last dB before feedback). I also often put a very slight bit of compression on the entire bus.
During the actual show, the mono bus is swapped onto the custom layer along with a linked fader for the sound effects, and the reverb outputs. With the band mix on the stereo bus, this puts masters for band, vocals, SFX, and reverb under the fingers of my right hand. My left hand is then free to nuance individual microphones.
Because this is a pop-sounding show, there are two reverb buses; a general "stage" algo to seat the voices in the sonic space (basically, to make them sound a little less like they are on mic) and a "plate" algo that gets cranked up for the more pop songs. Since I liked the longer tail on the "stage" for certain songs, I ran that up instead for those. In addition, mix bus 15 had a delay algorithm on it (again, from the LS9's internal DSP); this was applied to the vocal mics during dialog in the sewer scene.
The last effects bus is putting a little general reverb on some of the band. Usually, keyboard patches will send you a signal with reverb already on it. And bass sounds best dry. But I like a little on the drums, and I like a lot on certain winds.
On a different show the "special" effects bus (the one that isn't band, general vocal, or song reverb) was set to a ring modulator for a robot voice. I've also used the detune effect, but Yamaha DSP in that price range is pretty bad. For effects like OZ in "The Wizard of Oz" I use an outboard effects box. Unfortunately the LS9 has no channel inserts -- so pulling that off ties up a mix bus, an omni out, and another input channel.
To follow all the mic ins and outs, we use the scene memory function. Each time there is a major change in which microphones need to be hot, I tap one of the User Assigned buttons, which calls up the next programmed Scene, which runs the motorized faders up. I tend to automate only the faders, using the Recall Safe function to lock out any other changes. This way, if I have a mic go bad during the show I can hit the Mute button and it will stay muted through following scenes.
Each scene number is recorded into a script, which I keep at the FOH and follow along in. After a lot of experimentation I've determined I don't like to try to hit every mic move. Instead I'll hit the big changes, and manually take out faders as needed during the scene. One trick is to program faders that need to be added in the middle of a scene so they come up to -40 or so; just enough so you can see it and know that fader will need to be brought up the rest of the way soon.
I try to leave things alone for much of the show. The big change is bringing up the vocal master about 5 dB going into a song, and trimming it back down in the following dialog. When there are duets, you often have to ride the faders a bit to get a blend -- especially when people start singing into each other's microphones.
COMMUNICATIONS:
The theater already has a Clear-com system set up; one channel, with the base station in the booth. In the past it sometimes got run through the sound snake but at the moment it is all dedicated wiring. Unfortunately, especially during tech we end up having to run long daisy-chains of headsets (because we have no splitters).
Over the years we've been moving out the old rack boxes and bringing in new belt packs. Also lighter headsets, and the first radio headsets (not very nice ones, though). My preference is not to use a headset at FOH, but just to have one available so I can call backstage and have a mic replaced or batteries changed. So that means that, barring failures during the run (when the usual staff isn't there and it falls to me to fix things) the headsets are not part of my responsibility.
Our current backstage monitor system, in common with several other theaters, is baby monitors. We've also sometimes hung an old dynamic mic in the rafters and run it back to a powered monitor. Lately -- and we did this for "Lucky Duck" -- I've taken one of the older wireless microphones we don't use and gaff-taped it to the set. They may be old, but the signal will still reach all the way back to the overflow dressing room. With an omni mic, it picks up the stage fairly well. All I need is, again, a powered monitor speaker (I have a pair of JBL Eons kicking around that I usually task with this) to connect to the receiver.
We do not have, at that theater, a backstage paging system, anything resembling a cue light system, or a wired communication from House Management to Stage Management. The place is small enough so the latter is not really a problem!
OTHER:
There really wasn't any "other" on this show. On other shows, in other buildings, I've had to set up the keyboards, even create sampler patches, set up electronic drums, run live microphones (for a production of "Annie" at another building we managed to score three period microphones for the radio scene -- so for that scene only we turned off their wireless mics and used the stand mics.) I even built/repaired cue light systems and headsets.
I'd say I spent under twenty hours creating the sound effects for this show, most of that being the NARRATOR (with twenty lines of dialog, it felt like prepping game audio!) There were no "Showcase" cues, like the B-Movie cue from "Grease," but it wasn't telephones and toilet flushes either; I got to have some fun creating effects.
Because this show was less fully programmed than many, and because I was doing a lot of tweaking of reverb settings, it got very busy during one or two numbers -- I was dialing in effects, riding the lead vocal mic, trimming the band, bringing in the chorus with a recalled scene and improvising camera clicks and flashes at a keyboard, all at once.
We did have a mic fail on a lead. Twice during the run we had to finish an act without a mic on one of the leads. On the other hand, I managed not to catch any back-stage chatter on a live mic. A connector got stepped on in the pit and we lost the monitor connection to the conductor in the middle of a performance. We also almost lost the keyboard but we discovered the bad connection just minutes before the curtain.
Quite a few body mics wore out during the show. We caught less than half of them in sound check; these things always wait until you are in the middle of a scene before they really act up. For most of them I was able to mute the mic and live without it until I could call backstage and get a replacement hung on the actor. By the end of the run, about 1/3 of the elements we'd started with had gone bad and had to be repaired. Which mostly meant a solder job, and usually a new connector.
And there you have it.
At this particular house, I mix my own shows. This creates a small amount of conflict between the different roles of FOH (Front Of House mixer) and Sound Designer, particularly during the height of Tech Week. In most shows I've had someone to "push the button" to run sound effects, but I've rarely had a skilled A2 (Audio Assistant) who I could task with anything more complex.
The building seats 350. It is a converted church, rectangular in shape with a gently sloping seating area and a high raised stage. The installed sound system is a Meyer design (and contractor install), consisting of a pair of mains flanking the stage, a second pair of speakers 3/5 of the way towards the back of the house, subwoofers (currently in the orchestra pit) and a pair of front fills hung high over the front edge of the stage. There is a Galileo processor running all these speakers (corrective EQ, delay, and relative mix) and that in turn is fed from the FOH desk; a Yamaha LS9 digital board.
It is a good system for the space. If all we were doing is ballet and live music, we could just send a stereo pair back to the Galileo and have it locked into a carefully tuned preset. As it is, we are largely doing musical theater -- which places a combination of reinforced and natural voices against a live band.
Two other peculiarities with the kinds of musicals we do. We tend towards a large mixed-age cast, meaning there are always choruses that are not on microphone. And we tend to place the band on stage with the action. Both of these help make getting the vocal reinforcement right, and getting the monitor system right, challenging. And as a result I deprecate ordinary sound effects -- I do them, and I do elaborate ones, but they have to take second priority.
To overview: The responsibility of the Sound Designer is all sound elements of a production. This is usually broken down into;
Vocal Reinforcement (for live musicals, this is mostly wireless microphones).
Orchestral Reinforcement (for some shows, replace this with the cuing of pre-recorded backing tracks).
Performer Monitors (audio and visual links between musicians and singers).
Backstage Monitors and Communications (intercom system, Green Room monitors).
Sound Effects.
In many houses, the Green Room monitor, and the headset system for crew, will already be in the building and you won't have to worry about it. But you can't count on this!
It is also your concern when there are sources of noise on stage -- squeaky platforms, noisy air conditioning, street noise through an open door. You won't often have the authority to do anything about these, however!
SOUND EFFECTS
This particular show under discussion is a (relatively) new musical by the name of "Lucky Duck." I'll discuss sound effects first; they may be lower in priority than getting the vocals right, but they are fun.
"Lucky Duck" joins such shows as "How To Succeed in Business" and "The Will Roger's Follies" by having a pre-recorded performance that is central to the action. In "Lucky Duck" the NARRATOR interacts with the cast. In such cases, getting those VO (voice-over) sessions in the can is utmost priority -- you want them in rehearsal, weeks before the show even goes into Tech, so the actors can work with them.
Fortunately, in this case the NARRATOR was played by a cast member. So he did the voice overs live through rehearsals, and when we felt the performance had matured enough, we recorded them.
So the very first sound effect was a VO recording session. I've talked about these before. You want a space with good sound quality to record in. Not so much quiet, as one that lacks distracting reflections that are difficult to clean from the audio. Extraneous noise is easier to remove than the constant reverb tail of too live a room. And with a strong vocal performer, even a fair amount of leakage of general noise can usually be compensated for. So...look for a room with soft walls. Or a big room. Failing the above, find a cluttered room; the clutter will break up the reflections and make for a softer, more controlled room tone.
Make the actor comfortable and have time to work. Have the director present -- both for their artistic input, and for their skills in getting a good performance out of an actor. Print out the material to be recorded, with large type and generous margins. There are a few other tricks...like green apple slices, which I've never used... but you get the drift.
In this case, what the actor had been doing in rehearsal was interesting, but didn't work as a recording. It was very shouty -- a strong projecting voice for someone addressing a crowd, but the wrong feel for an omniscient story-teller looking over the action and commenting on it. So we worked at getting him to speak softly but with power. The final mic position was about 6", a large-diaphragm condenser suspended just above his forehead looking down.
When we finished the sessions I had almost three complete takes of all the material, plus a lot of false starts. When I went into cutting takes, I ended up making hybrid cues for two of the more critical ones; cutting in phrases from two or more different performances. I did a very small amount of slicing in taking out extra pauses, deleting some breath noises (but not all; I needed that human presence), and of course editing out several plosives. Then hand-normalized, EQ, and gentle compression.
The same actor also read a different pre-recorded vocal, the "Quakerdome" announcement. Which I gave the "Monster Trucks" treatment with delay, heavy EQ, and sub-base synthesizer ("Sunday! Sunday! Sunday!")
For this show I didn't do any Foley. Well; I call it Foley, but technically the process of Foley Walking is the real-time performance of small noises (feet and clothing mostly) to a chunk of film. What I do is mostly recording bit and pieces of me fumbling around with various props...walking, dropping boxes, playing with leaves, etc., which I then process to make effects. In this show there were, as far as I remember, none of these.
The other effects came mostly from library sounds, plus new sounds purchased from Sound Dogs dot com. I wanted a European two-tone siren for a different feel to the urban scenes, for instance.
The sounds that were the most fun were the chop-socky effects. When DRAKE comes to the rescue late in the second act the actor improvised -- hilariously -- a bunch of wacky-looking martial arts moves. To make it even more ridiculous I pulled a bunch of punches and kicks and swooshes and assigned them to a MIDI keyboard -- and did my best to Foley his live performance every night.
Usually my best was not enough, but it worked well enough to be funny. The effects themselves? I ended up going to YouTube, where I found some collections of the proper badly-dubbed 1970's Hong Kong import type hits, chopped them and cleaned them and so on. I also assembled a long swoosh for when he comes in on a rope; this included a swoosh from a YouTube video of a Foley session from "Kung Fu Panda" and a fluttering sound from a small sailboat (courtesy of the BBC sound effects library).
I find sound effects fall (though not always neatly) into three general categories;
Ambiance (sounds of the background environment, which play under the action)
Spot Effects (which happen at a specific moment in the action, such as a gunshot)
Transitional Effects (which happen between scenes)
For this show, there were two or three ambiances. The "Scary Forest," the "New Duck City" urban space (trolley cars, traffic, distant sirens), and the "Sewers" (various drip sounds and some steam and a little water trickle, all with delay effects).
The forest opened up, for me, the Goofy Question. This is a basic problem for any setting that includes anthropomorphic animals (in "Lucky Duck," ducks, wolves, coyotes, and at least one armadillo). Goofy has a dog, a dog named Pluto. Pluto walks on all fours, has a leash, and doesn't talk. So what is Goofy?! Maybe the best general answer is to assume settings like this, like Narnia, have animals and Animals. And that explains why a duck can eat fois gras (shudder!)
I'd previously done "The Jungle Book," and "Seussical," and followed this show with "Honk!" and "Click Clack Moo," and I determined early on that no "animal" in the show would make a non-human sound. I'd mixed "Narnia," earlier -- different designer -- and he also made that choice. In that particular theater, we don't hide the theatrical nature of the performance and whenever possible we have the performers do what needs to be done. And that includes making animal noises.
But that meant I really wasn't free to have howling wolves or lion snarls or anything like that in the Scary Forest. And rustling leaves just wasn't enough. And I couldn't even use birdsong, because most of my cast were birds! So in the end I added some insects and snakes. Because as far as I could tell, neither of those kingdoms were represented by actual cast members in the show.
There were a couple dozen spot effects but I'm only going to mention a few. There was the wolf trap being opened (a snippet from a rusty gate sound). The "Sorry!" buzzer for the talent show (which was a stock buzzer sound re-pitched and shortened). Actually...the buzzer is what made me stick an XBee into a Staples "Easy Button," and by the end of the run it could have been a practical sound effect (as in, the actor would be triggering it from stage). We stayed with making it a traditional, visual cue however. The zap sound for the invisible fence was a combination of two different "zap" sounds, one short and one longer snippet from a Jacob's Ladder effect.
I tend to program ambiances so they start loud to establish, then fade down. Some of these fades were called cues. Some were built into the sound effect. Most were programmed into auto-follow cues in QLab. Once the timing had been tweaked during tech rehearsal, they played themselves automatically.
For this show, one laptop up in the booth ran all the "Called" cues. Actually, the Stage Manager was pressing the button himself so he didn't actually have to "call" them. A second laptop was connected to a MIDI keyboard and from there I ran improvised sounds such as the Kung Fu stuff and some camera and flash camera stuff for the big fashion show.
An alternate arrangement I've done in the past is to have at least four channels routed from the primary sound effects computer; one pair goes to spot effects, the other goes to ambiances. This gives me the ability to mix in the ambiance effects to the total sound picture from the FOH position without messing up the volume of the spot effects. Another trick I've done is have pre-recorded vocals show up on the main vocal bus so they can me mixed and placed together with the live vocals.
In this particular show the sound effects were simple enough, and the way they fell was developed enough (aka I didn't have to wait for set pieces to be delivered or scenes to be blocked before creating them) that they were essentially delivered by early on in Tech Week and I could concentrate on other responsibilities.
Sound effects are exquisitely sensitive to the actual playback environment. What the orchestra is doing, what kinds of noise the scenery is making, the timing of the action on the actual delivered set or with the actual production prop, and of course the specifics of speaker placement and system tuning and the acoustic environment.
I bring the laptop with all the material on it to the building. For this show I had the time to set up my laptop in the middle of the house and playback sounds from there as I adjusted them to the space. The Scary Forest cue took more time than that, because the sounds of the actual scene worked differently than the sounds of the stage in an early afternoon with the lighting people doing a focus. So I had to take it back and adjust it several times.
I was never completely happy with many of the cues, but they worked well enough. And with it loaded on to a laptop, I could move on.
ORCHESTRAL REINFORCEMENT
This was one of the most easy-going pits I'd worked with in this space. Many music directors are, I am convinced, half-deaf and they usually demand stomping monitor levels to be able to hear the vocals over the keyboard amp that they also turn up to ear-splitting levels.
Volume wars in most spaces start with the drums. It takes a good drum player to play softly, and very few of even the good players like to play softly. So the drums are too loud already, and they get into the vocal mics, and they make the audience complain. But that isn't the worst. Since the drums are so loud, the bass player has to turn up his cab in order to hear himself play. And whereas a nice focused DI sound can be tight and musical throughout the room, the almost completely omnidirectional sound of a bass cab turned to eleven saturates and permeates the space with a mushy, undefined low-end rumble that completely destroys any hope of getting an intelligible mix out of it.
And with bass and drums committing murder in his ears, the keyboard player has to crank up his own monitor amps in order that HE can hear himself. And when he is finally comfortable, and the bass player is happy, the drummer suddenly realizes he can hear the other guys again and he doesn't have to lay back in order to figure out where he is in the music. So he gets louder. And the wars go around again.
As hearing fatigue sets in, each player needs more and more volume just to maintain bare audibility. And their monitors -- with an unfocused, bassy, echoing sound -- are washing over the entire audience with sonic mush. And as the poor FOH, the only real choice you have is to crank the reinforcement to scary levels just to put some definition back into the sound.
And that's before you add the vocals!
For this show, fortunately, I had a relaxed keyboard player who could handle being able to hear less than he'd like. And a bass player who was happy enough just to use a monitor we supplied, set up on a chair right in front of his face, and not blast everyone with his own cab. And a drummer...well, one of our drummers was more restrained than the other. But we basically made it through anyhow.
The most important element in the pit is a send from the vocal bus to a monitor -- I use a tiny "Jolly 5a" from FBT set up on a mic stand to get it as close as possible -- that the conductor can use to hear the people on stage. Second after that is a send from the keyboard -- and sometimes bass and kick -- to front fill and side fill monitors for the actors. That way, the actors on stage can hear equal levels of the music they are trying to sing to, regardless of whether they stand next to the band or in the corners of the stage.
My choice for years has been low-profile monitor wedges placed along the front edge of the stage. I use my FBTs a lot because of their extra-wide pattern. I also fill from the wings; generally behind the second set of wings pointed in and slightly upstage.
When the monitors on the stage become so loud they are a major part of what the audience hears, I send monitors a full band mix instead of just keyboards. This way, changing the monitor level doesn't unbalance the orchestra mix.
One issue with orchestral reinforcement in a smaller space like this is you don't have a second mixer to deal with the orchestra. At the FOH you are already dealing with 16-24 channels of wireless microphone, plus a few extras, and trying to actively mix the band is not really an option. Also, in the theater I have been describing, our LS9-32 has no add-on cards or digital snake and those 32 input channels get burned up quick between wireless mics, sound effect playback, and extras like talk-back microphone.
The reality I've found in most shows there is I am pushing to spare six input channels for the pit. And in addition, I make a lot of use of the custom fader layer in order to have fingertip control over mix buses, so those six channels of band have to be thrown into another layer -- which means they aren't at my fingertips during the show.
On several previous shows I've used a second mixer to create a band submix. The designer for "Annie" provided me with a complete 24-channel Brit board with a rack-mount compressor and effects box to go with it. I've used my own much smaller Mackie to the same effect; on one show I added two more mic-level channels to the four already there via an "Audio buddy" pre-amp and used that to create a drum sub-mix.
For the punk-rock band we had in one show I went all out and put mics on kick, snare, hat, cymbals, and low tom, and I submixed those on a laptop computer running CuBase. And every night I was terrified the computer would crash on me in the middle of the show.
These days I'm tending towards kick and overhead. With a four-piece band -- such as for "Lucky Duck" -- it knocks down to keyboard (DI or direct out from keyboard amp), bass, kick and overhead, and wind. This particular show was a multi-wind player. I set up an overhead and a low sax/clarinet mic, but discovered during the run the overhead was the better sound for everything. I flipped that fader to the top layer of the LS9 because I was riding it a lot during the show. The rest of the band, however, I could leave alone.
This means the default in that building is 6 inputs from the band. I run a 12x snake to them, and I dedicate 1-8 for inputs and 9-12 for monitors. 9 is the "Con-Mon"; vocals for the conductor's monitor. 10 is general monitor for the rest of the band with 11 and 12 for anyone that really needs their own mix.
Except that more often than not, I'll stick a mixer down in the pit with them and, by using the AUXs, the conductor can adjust the mixes sent to two different players while also maintaining his own mix of keyboard, vocals, and any band elements he needs to hear more of (usually none!)
The big downside to this method is that when volume wars start, the conductor can crank their own vocal monitor as much as they like...ignorant of the very real feedback limits the mix sent to the house is up against. I've had the conductor's monitor end up so high IT starts feeding back.
Bass often ends up on a mic instead of a DI, not because it sounds better, but because I've had so many bass players in the past with hideous ground loop problems. They also have a habit of unplugging 1.5 seconds after the last note, and I can't always get to the mute in time!
I've really fallen out of love of close-micing snare, which has never quite sounded right even with heavy gating, compression, and corrective EQ, plus a bit of room reverb. The best snare sound I ever got in that house came from a saxophone mic; four feet away from the snare and pointed 180 degrees away from it! For kick, I've been very happy with my $60 Gear One, stuck just slightly inside the vent when they have one. You get more of a papery sound if there is no vent in the front head. I've tried using a short boom stand to get a mic up tight to the beater head, but musical pits are cramped and hard to access and without an A2 you are at the mercy of wherever the mics got kicked to as the band wriggled into the pit four minutes before the downbeat.
Winds and strings speak best with some air. The most natural sound is a couple of feet away. In a cramped pit, though, you get too much bleed (especially drums), so you end up compromising. Also, winds are always multi-instrumentalists in a pit orchestra, so you don't have the luxury of getting a close-mic in exactly the right spot on each different instrument. Heck -- often they are switching fast enough to chip a tooth (real story!) So set up a good condenser around six inches to a foot, depending on how much that particular player jumps around.
And those are the basics. As few mics as possible, set so they don't have to be tightly positioned every night but have some slop in them, and sub-mix them so you aren't staring at those faders in addition to all the wireless.
(Actually, I think I confused some of this with the pit for "Honk," which shared the set. The pit for "Lucky Duck" was a bit louder, and sometimes I had to fight to get the vocals up over them. But they were still a pleasure to work with.)
VOCAL REINFORCEMENT
The theater owns 18 channels of Sennheiser "Evolutions." After that, it is a few units of Sennie G2's, and some Shure SLX we mostly keep around as hand-helds. Lately I've been loaning some of my own seven channels of Sennie to bring us up to 20-22 working channels on a typical show.
"Lucky Duck" actually has relatively few named roles, and even fewer players (there's a fair amount of doubling.) There are only a few chorus numbers. At this company, we almost always include a younger cast; they are treated as a separate chorus and brought in for specific scenes. They are almost never on mic, however, one of them did get a mic for a few lines of dialog in this show.
Unlike many shows, there weren't any major issues in finding places to hide the mic packs, and allow the actor to do the various physical things we often ask of them. The worst case was probably SERENA's Act II dress, the scooping back of which showed off the dangling mic cord to good advantage (there's only so much surgical tape an actress can put up with).
My default mic position is forehead. Because of animal costumes and hats we had to move to cheek for a number of the cast, including essentially all of the male cast. Also, our SERENA had the pipes but to get that intimate pop sound we chose a B6; with the flexible wire boom that allows it to be placed close to the lips, Madonna style.
Because these songs are very mic'd and very pop we were scared of losing a mic on stage. So we gave the talk-back mic to the drummer so he could hand it up in case of a body mic failure! We never actually executed this safety but it was fun knowing we had it.
In many shows I've had microphones set up for an off-stage chorus. Not this one. I've also sometimes had floor mics or spot mics for an un-mic'd chorus. Again, not this one; this was a very simple show wireless-wise.
The Meyer install is very nice, but there is a major flaw when dealing with a live band. And that is taper. Since instruments like drums are acoustic, they are naturally loud for the front rows, and softer for the back rows. If you were to leave the system settings flat, with the rear delays covering the back part of the house, the keyboard would be softer than the drums for people in the front rows, and louder than the drums for people in the back rows.
So I've set up custom programs for a slight taper to the system. Also, I've chosen not to send vocals to the subwoofers. And lastly, I use some of the house system for sound effects playback -- which means I need access to specific speakers outside of the general programming.
The optimal solution in that building turns out to be the non-simple one. There are six inputs to the Galileo speaker processor, which in turn is feeding 8-10 speakers (depending on whether we use the current center cluster, which was re-purposed from where they were in the original install).
So what gets snaked out to the Galileo are six of the omni outputs on the LS9. The Galileo splits them into several groups; right house, left house, sub, etc. In the case of the subs, for instance, the two boxes are being used in an end-fire or semi-cardiod array; one speaker flipped, inverted and delayed so the sound is boosted towards the front but experiences destructive interference to the back and sides, thus lowering the volume there. To get that effect I'm using two Galileo outputs and the internal processing, but I'm feeding it with only one omni from the LS9.
On the LS9 side, the mono bus and the stereo bus are each dialed to a different set of omnis, to give a slightly different sound; the mono bus functions as vocal bus and has no sub in it, for instance. However, to make it possible to send sound directly to the house system bypassing those two busses, mix channels 9-12 are patched in a one-to-one basis to the matrix channels driving the omni outs. That makes it possible for any input channel to behave like an independent mix bus and send to the house system itself.
(Well, actually, since several of the mix channels are taken up with monitor mixes, and several are in use for the internal effects racks, the special output buses are combinations of different useful Galileo inputs. And there is always a compromise between having access to a specific speaker for one specific sound effect, and being able to properly tailor the general house sound as fed from the sound effects computer).
Each speaker is delayed slightly to make the impulses arrive together at the audience. Each is also EQ'd a little to the room. But on top of that, the LS9 output channels are set for an overall delay of about 20 milliseconds. This makes the reinforced vocals arrive slightly after the impulse from the live actor, and via Precedence Effect focuses the apparent sound source on the actor instead of up in the flies.
On the vocal bus is strapped a graphic EQ for notching out the worst peak in the room (otherwise I try to run the bus, and the mics, without mucking up the EQ too much in search of every last db before feedback). I also often put a very slight bit of compression on the entire bus.
During the actual show, the mono bus is swapped onto the custom layer along with a linked fader for the sound effects, and the reverb outputs. With the band mix on the stereo bus, this puts masters for band, vocals, SFX, and reverb under the fingers of my right hand. My left hand is then free to nuance individual microphones.
Because this is a pop-sounding show, there are two reverb buses; a general "stage" algo to seat the voices in the sonic space (basically, to make them sound a little less like they are on mic) and a "plate" algo that gets cranked up for the more pop songs. Since I liked the longer tail on the "stage" for certain songs, I ran that up instead for those. In addition, mix bus 15 had a delay algorithm on it (again, from the LS9's internal DSP); this was applied to the vocal mics during dialog in the sewer scene.
The last effects bus is putting a little general reverb on some of the band. Usually, keyboard patches will send you a signal with reverb already on it. And bass sounds best dry. But I like a little on the drums, and I like a lot on certain winds.
On a different show the "special" effects bus (the one that isn't band, general vocal, or song reverb) was set to a ring modulator for a robot voice. I've also used the detune effect, but Yamaha DSP in that price range is pretty bad. For effects like OZ in "The Wizard of Oz" I use an outboard effects box. Unfortunately the LS9 has no channel inserts -- so this ties up a mix bus, an omni out, and another channel to do.
To follow all the mic ins and outs, we use the scene memory function. Each time there is a major change in which microphones need to be hot, I tap one of the User Defined keys, which recalls the next programmed Scene and runs the motorized faders to their new positions. I tend to automate only the faders, using the Recall Safe function to lock out any other changes. This way, if a mic goes bad during the show I can hit the Mute button and it will stay muted through the following scenes.
Each scene number is recorded into a script, which I keep at the FOH and follow along in. After a lot of experimentation I've determined I don't like to try to hit every mic move. Instead I'll hit the big changes, and manually take out faders as needed during the scene. One trick is to program faders that need to be added in the middle of a scene so they come up to -40 or so; just enough so you can see it and know that fader will need to be brought up the rest of the way soon.
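If it helps to see the logic, here's a toy model of that workflow -- scenes as fader-only snapshots, with mutes living outside the scenes so a dead mic stays dead across recalls. Channel names and levels are invented:

    # Toy model of fader-only scene recall with Recall Safe on everything
    # else. Names and levels are invented for illustration.
    scenes = [
        {"Lead 1": 0.0, "Lead 2": -40.0, "Chorus": -90.0},  # -40: the "visible reminder" trick
        {"Lead 1": 0.0, "Lead 2": 0.0,   "Chorus": -5.0},
    ]

    faders = {}
    mutes = set()  # not stored in scenes, so recalls never touch it

    def recall(scene_number: int) -> None:
        faders.update(scenes[scene_number - 1])  # only faders move

    def mute(channel: str) -> None:
        mutes.add(channel)  # a failed mic stays muted across scenes

    recall(1)
    mute("Lead 2")  # mic went bad mid-show
    recall(2)       # the fader comes up, but the channel is still muted
    assert "Lead 2" in mutes and faders["Lead 2"] == 0.0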
I try to leave things alone for much of the show. The big change is bringing up the vocal master about 5 db going into a song, and trimming it back down in following dialog. When there are duets, you often have to ride the faders a bit to get a blend -- especially when people start singing into each other's microphones.
COMMUNICATIONS:
The theater already has a Clear-com system set up; one channel, with the base station in the booth. In the past it sometimes got run through the sound snake but at the moment it is all dedicated wiring. Unfortunately, especially during tech we end up having to run long daisy-chains of headsets (because we have no splitters).
Over the years we've been moving out the old rack boxes and bringing in new belt packs. Also lighter headsets, and the first radio headsets (not very nice ones, though). My preference is not to use a headset at FOH, but just to have one available so I can call backstage and have a mic replaced or batteries changed. So that means that, barring failures during the run (when the usual staff isn't there and it falls to me to fix things) the headsets are not part of my responsibility.
Our current backstage monitor system, in common with several other theaters, is baby monitors. We've also sometimes hung an old dynamic mic in the rafters and run it back to a powered monitor. Lately -- and we did this for "Lucky Duck" -- I've taken one of the older wireless microphones we don't use and gaff-taped it to the set. They may be old, but the signal will still reach all the way back to the overflow dressing room. With an omni mic, it picks up the stage fairly well. All I need then is, again, a powered monitor speaker (I have a pair of JBL Eons kicking around that I usually task with this) to connect to the receiver.
We do not have, at that theater, a backstage paging system, anything resembling a cue light system, or a wired communication from House Management to Stage Management. The place is small enough so the latter is not really a problem!
OTHER:
There really wasn't any "other" on this show. On other shows, in other buildings, I've had to set up the keyboards, even create sampler patches, set up electronic drums, run live microphones (for a production of "Annie" at another building we managed to score three period microphones for the radio scene -- so for that scene only we turned off their wireless mics and used the stand mics.) I even built/repaired cue light systems and headsets.
I'd say I spent under twenty hours creating the sound effects for this show, most of that being the NARRATOR (with twenty lines of dialog, it felt like prepping game audio!) There were no "Showcase" cues, like the B-Movie cue from "Grease," but it wasn't telephones and toilet flushes either; I got to have some fun creating effects.
Because this show was less fully programmed than many, and because I was doing a lot of tweaking of reverb settings, it got very busy during one or two numbers -- I was dialing in effects, riding the lead vocal mic, trimming the band, bringing in the chorus with a recalled scene and improvising camera clicks and flashes at a keyboard, all at once.
We did have a mic fail on a lead. Twice during the run we had to finish an act without a mic on one of the leads. On the other hand, I managed not to catch any back-stage chatter on a live mic. A connector got stepped on in the pit and we lost the monitor connection to the conductor in the middle of a performance. We also almost lost the keyboard but we discovered the bad connection just minutes before the curtain.
Quite a few body mics wore out during the show. We caught less than half of them in sound check; these things always wait until you are in the middle of a scene before they really act up. For most of them I was able to mute the mic and live without it until I could call backstage and get a replacement hung on the actor. By the end of the run, about 1/3 of the elements we'd started with had gone bad and had to be repaired. Which mostly meant a solder job, and usually a new connector.
And there you have it.
Monday, September 10, 2012
Plotting
I promised to put up some stage plots. I even had a folder of old ones I was going to scan. But, since I managed to misplace that folder, you are going to have to make do with the ones below...
This above is from, well, a failure. We had a great pit for "Legally Blonde," and it was a loud show anyhow, but I wanted that particular tight sound you get from close-mic'd instruments. There weren't enough house circuits to do this, however, so everything was submixed from a mix position down in the pit with the orchestra. And after trying it for one night, I pulled the reinforcement out and took the mics home. To do live music like this, you need a live mixer. And they need to be where they can hear.
Notice the complicated monitor routing (that stayed, as did the keyboard connections). Each keyboard player had to hear themselves and the other keyboard, one needed to hear the vocals, and the rest of the band needed to hear them...
Adding to the complexity, we were using old wireless microphones without enough "throw" to get decent reception all the way from the sound booth. So the receivers were all down in the pit, too, hogging those precious few house circuits.
This is what it all looked like after the reinforcement was pulled.
The above is even sketchier, as it only had to communicate to me. Many of the same band members, but this was a special session in the rehearsal hall where we made a pre-recording of one song. I'd previously mic'd this pit orchestra on two other shows using section mic'ing, and this was a similar idea; one drum overhead, one mic split between the two violins, a DI on the 'cello and on the bass (not completely indicated here), a section mic in the middle of the winds, one in front of the trumpets, and a last one over the lower brass.
In the actual session, the piano was an upright and we just stuck a mic six inches from the sound board. The section mics were dialed to "omni" and set as high over the section as the tripod stands would reach. And two vocal mics picked up the (arbitrarily arranged) singers. (The singers were singing in unison so there was no effort made by the conductor to split them by part or range).
All of this was snaked to my FireWire interface and recorded directly in multi-track for later mixdown.
This is almost becoming the standard setup at my current theater. We do small, jazzy pit bands there. The critical element is a vocal monitor to the conductor/keyboard. He or she is given a small mixer so they can customize the blend of vocals, their own keyboard, and select other band members. In the show above the drummer was getting a set-once mix -- mostly keyboard -- and the bass was happy just to hear himself (he didn't even bring his own cab, but borrowed a monitor speaker). The guitarist -- multi-guitarist -- was in his own world with pedal board, FireWire interface, laptop, and his own monitors; he took a simple monitor mix from me and gave me back a mono signal of whatever he was doing.
Drums are usually way too loud anyhow but they sound better (and they balance better in different parts of the room) with some general mic'ing. Lately I've really been coming back towards a small number of overheads at a moderate distance. That still gets the presence but isn't as finicky to dial in as individually hitting all the different elements of a kit. The above wasn't such a set-up; two overheads in fairly tight, kick and snare.
Also notice I've logged not just the snake channels, but what channels they turn into when they meet the main snake that goes out to the FOH.
It isn't just the pit layout that needs diagramming on these shows. This last entry here collects all the vocal sources into the mixer. There were separate diagrams for the band connection to the mixer, and the SFX playback. I've often used two computers for the latter; one for bread-and-butter sound effects set up in the lighting booth and run by the Stage Manager, and a second for improvised effects and some ambiance work that is controlled by me from the FOH position.
In any case, this diagrams the wireless microphone receivers that are at the FOH position, the shorter-ranged Shure SLX that needed to have their receivers stuck under the stage and run through the stage snake, an off-stage microphone for back-up chorus, the talk-back microphone for the Stage Manager, and, a peculiar addition for this show, one connection off the multi-track sound effect playback; some pre-recorded dialog was routed this way so it would be under the master dialog fader and otherwise treated the same as primary wireless microphone input.
The main change I've been making to this diagram lately is to route the talk-back microphone around the master vocal bus -- because then I can adjust overall microphone levels without cutting off the Director in the middle of a rehearsal.
Sunday, September 9, 2012
Tidying Down
DANIEL: Things are sure to calm down a little soon!
TEAL'C: Things will not calm down, Daniel Jackson. They will in fact calm up.
It is dark time at the theater (aka the stretch between seasons when nothing is in performance or in rehearsal on the main stage), so we are cleaning. There is a new push from above to put things away properly this time.
I've been there before. And I know how it is going to unfold over the next few months. A lot of people have this fond hope of being able to actually put everything away. The idea of everything -- all the mess of ongoing tools and projects -- being neatly hidden behind cabinet doors or, grudgingly, racked up on hooks with painted outlines and labels holds a fierce attraction for some people. Myself included. I've been the force behind that drive to organize and put away more than once.
But this is what actually happens. There isn't enough time to do it right, and people get bored with the process too early. So stuff starts getting crammed into boxes just so it can get on that shelf or in that closet. Working, not working, belonging to the theater or to someone else, matching or not really the same thing -- as the process wears on these distinctions are held to less and less. What looked like a generous allotment of shelving, and boxes, and labor, is not enough, and by the end of the job it is all cram it in somehow.
And down the road, the first tech that hits, people won't be able to find the hardware or tools they need. They'll have to go into the stuffed-impatiently boxes -- which will inevitably be the ones that ended up stacked on the bottom or crammed into some other storage spot with limited access. In the crunch of time, some of the boxes will in fact be emptied out onto any available surface just so the essential bit can be located and rushed to the stage.
This might go on for one or two shows, but, inevitably, stuff is not going back in those boxes, and the boxes are not going back into their crammed-tight shelves. And that's when someone upstairs starts getting angry about how "the whole place was just straightened up" and "why can't people just put stuff away properly?"
In short, they blame the people who are trying to work around the disorganized "organization" that was imposed by an ill-thought-out cleaning process.
There's a little something called the 30-70 rule: 30 percent of the tools are used 70 percent of the time. My experience is that the ratio is even more lopsided. The point of the rule is this: you need to find out what that core of "need them all the time" things is, and put those where they are easy to find, easy to take down, and just as easy to put back.
And that is work. It takes a lot of time, and analysis. But when a mass cleaning is imposed from above, there is little incentive for the workers/volunteers to impose this kind of higher-level structure. If they have experience with the gear they are cleaning up (which isn't always the case) they will make an effort, but ultimately the externally imposed deadline prevents them from straying too far from what they are being hired to do. Although they may know they are making things harder in the long run, they don't have sufficient choice in the short run to fight the process.
There is an argument for doing this kind of cleaning as a first pass. When things are a true mess, you can't work around them and you can't organize them, because you don't even know what you have. A first pass puts everything in a rough sort and clears a space so you can start the following process of really identifying what is what and where it needs to be.
The trouble is, the following process follows less than half the time. Most often, the budget of time and labor is spent just stuffing everything into boxes and into a closet. The time and budget to pull it out and clean and test and brainstorm a storage process that gives the right degrees of access to the right parts -- that rarely happens.
So what mass cleaning usually achieves is a temporary respite to the mess, at the cost of not having all the tools and gear and parts the next time you need them.