Friday, November 29, 2013
Battlemat, Terran Date 11292013
I come to consciousness and immediately perform a full system and boot-up check. I am eager to begin my service as a member of the Brigade, unit 73583823 CLD, and hope that I will continue to uphold the unimpeachable record of that great unit. The boot check takes an entire 23.0567 seconds due to the need to integrate an operational consciousness mesh for the first time. But by 13.035 seconds I already know something is wrong. I complete the test and move immediately to a level-two hardware diagnostic.
It is as I suspected. Where I should have found smooth flanks of gleaming Inconel, there is instead a primitive polygon mesh. Instead of hubs I have polygons, and the 20mm smoothbore exists only as an abstraction of numbers. I am, apparently, still virtual. Not yet embodied.
A query through the communications net uncovers electronic correspondence from the fabricator. Their measurements reveal that the shock absorbers under my hull thin in one location to 0.65mm, 0.15mm under the recommended minimum. According to records, unit 735662187 RNI entered service having been produced to that plan. Another search reveals that "Rani's" commander has no complaints and she has, of course, continued to serve in accordance with the high standards of our tradition, but the fabricator's caution is well meant. I concur that there is a 0.175% chance of failure during final assembly, although my figures disagree with the fabricator's pessimistic estimate of under 67% printability.
I reduce my alert status to something resembling rest, and wait with interest for developments. In 105,600.05 seconds a new design is completed and submitted, one that thickens and extends the area around the difficult joint, perhaps at the expense of the previously elegant line. Another 407,400.4405 seconds pass before the fabricator responds with another electronic missive.
The news is not good. The fabricator has determined that five scale inches is insufficient for the newer Inconel alloy called for in the latest specification. Muffler shroud, headlight cages, and even sprues are all identified by the fabricator's software as potential printing problems.
It takes 200,101.1 seconds for a third design to be completed. This one is a complete revamp of all critical dimensions. I read the design rules myself with interest; this takes 0.0014 seconds, but locating the design rules within the oddly organized electronic archives of the fabricator consumes nearly 13.8 seconds. No matter. The next reply from the fabricator does not arrive for another 500,147.46 seconds.
I have spent the time reading military histories, both real and fictional. I hunger now to begin my service to the Brigade as Unit 73583823 CLD, named "Clyde." (My name will be chosen by my Commander, but I am sure they will make the logical choice. "Claude" is a poor name for a unit of the Brigade, and "Clannad" would just be silly.)
The electronic missive at last arrives. The fabricator's software has now chosen to flag every rivet, every plate, every detail as if it were a section of hull. The dimensions required are absurd; I would be a featureless cube by the time all of these "errors" were ameliorated. None of these requirements existed before, nor were they mentioned in any previous missive.
I am sure now. For some reason, the fabricator has determined to obstruct my fabrication by any means possible. I look to a quote from one of the items of literature I so recently absorbed. "Once is happenstance. Twice is coincidence. But three times is enemy action."
Tuesday, November 26, 2013
No, Duck Light.
I've been having a heck of a time parameterizing a potential kit here.
It starts with the problem of a kerosene lantern. This is a prop that shows up on stage in various productions. Since we don't, of course, want to actually set fire to lamp oil, the usual trick is flashlight bulbs and batteries. For a brighter "flame," use a 12v halogen bulb (automotive type) and a battery pack of high enough voltage to run it (such as eight AA cells, for a nominal 12 volts).
The more robust solution is LEDs. At the simplest, you could, indeed, use one of the automotive-use amber LED arrays and hook that up to your 8-pack of batteries. It would last longer with a higher average output and more consistent color temperature.
Or you get a little fancier. Use a 3W RGB LED, like the Cree I've been having fun with of late. With PWM control, you now have a portable light that you can set to a selected color and intensity. And you can even flicker it.
Now, sure, you could just hard-wire a Cree, plus PWM if wanted, onto a piece of perf. I have one around I was building for an effect that got scratched. But it is a neater circuit if you have the board printed.
And even neater than that if you have a reflow oven sitting around.
Doing it this way makes for a more compact and more reliable circuit. But the downside is that you aren't soldering something to fit just a lantern. Economies of scale become economical when there is, yes, scale. The development time of a circuit board pays off better if the same board can be used for things other than lanterns.
And this is the first problem I'm having. What are these "other things?"
The light-up coat I made for The Wiz is very much a unique application. I've done dozens of shows with a lantern in them, but only two with light-up costumes. Really, I can't think of any other common theatrical situation in which I would be reaching for a plug-and-play portable light source.
Perhaps a flexible point source for general lighting; the kind of situation where you have a doorway or other inconvenient shadow and you just want a little face light. I'm willing to believe that a little firelight in such things as campfires and stoves would also call for a small portable RGB source if such were available. And I can't help thinking that there must be magic wands and crystal balls that could use a light.
Because there are two other givens with the PWM circuitry that gives us RGB control and potential flicker: the first is programmability, the second is controllability.
It goes without saying that the portable RGB source can be easily switched on and off. But you could also dim up and down, or change color on command.
And if you add an empty socket, or the right kind of header, then it also becomes remotely controllable. And you are no longer dependent on an actor getting over to the prop to turn it on and off.
Here's where creeping featuritis really comes into play, though.
Assume the "board" is an ATtiny-based PWM/program generator with a couple of controller inputs (perhaps capacitance sensing to save on external hardware). Assume it switches an arbitrary load through a trio of Power Darlingtons (or similar) and solder tabs or screw terminals. This detaches the LED/load itself so the circuit can be hidden in the base of a lamp or whatever. Constant-current drivers would be better for LEDs but would have to be matched; this allows us to re-purpose for relays or other tasks.
The board can easily power-regulate from a 3v to 12v source, so a 3-pack or 4-pack of rechargeable batteries is good enough. But a lipo is sexier: a high-density rechargeable battery built in, with charging circuit and charge indicators, so all you have to do is plug it into a USB charger (or similar) between shows.
(The main downside to the lipo is if you have back-to-back shows with heavy use of the circuit. Using swappable batteries means you can put the device back in service without having to wait on a charge.)
The bigger problem is programmability. For me, I'm fine writing new code as needed and feeding it through an ISP port. But it might be easier for the general user -- heck, it would be easier for me too -- if you could adjust the behavior on the fly via nothing more than a USB cable. Better yet, through that cable -- or remotely through a radio link -- via a GUI that dealt with most of the details of selecting colors and setting up switches and so forth at a higher level of abstraction than typing fresh code.
I think at some point, you have to accept that one "board" shouldn't try to be too generalist. Perhaps it makes most sense to design it as if it will always be PWM'ing three channels of LED, with several hard-coded behaviors selected by resistor ladder and/or transmitted commands, and to re-purpose that hardware with scratch-written code as unique applications arise.
And to ignore such fun ideas as lipo charging circuits, boost converters, constant-current drivers, and so forth. And restrict the immediate flexibility to setting jumpers on the PCB, and the load and source that get attached to the screw terminals.
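The resistor-ladder selection mentioned above could be as simple as reading one ADC pin and bucketing the result. Again a hypothetical sketch; the thresholds depend entirely on the actual resistor values chosen:

// Hypothetical preset selection: a resistor ladder on a single analog pin
// divides the 10-bit ADC range (0-1023) into bands, one per behavior.
const int PIN_SELECT = A3;  // assumes ADC3 is free on the part

int readPreset() {
  int raw = analogRead(PIN_SELECT);
  if (raw < 256) return 0;  // steady on
  if (raw < 512) return 1;  // candle flicker
  if (raw < 768) return 2;  // slow color fade
  return 3;                 // deep firelight flicker
}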
Maybe that is a buildable circuit. Maybe that's enough to boot up Eagle and see what it might look like...
Feh Memberships
As I feared, my TechShop membership is doing me little good. There are almost no classes scheduled during the holidays, and even before that they tended to be scheduled on evenings and weekends -- which is when I work.
So I could go over there any time. But I'm not allowed to touch any of the tools. (Well, except for the bandsaw...)
Several difficult Tech Weeks have also brought my gym attendance down to where it is about a wash between continuing my membership and simply paying at the door. The main advantage to having the membership is that I can do a short drop-in visit without feeling like I am wasting money.
Like today. Flashed a V3 on the mushroom and called it a day. It was on the dihedral, and even with serious hooking I had to claw for holds. Almost bailed twice; caught a hold on two fingers, lost the foot and barn doored on those fingers and was sure I was going to peel. Somehow got the other fingers in there, hauled up, took some high feet that felt very exposed (the whole wave and that side of the mushroom always feel a little high-ball anyhow), lunged for the top and was sure I wasn't going to be able to control the final hold, either.
Okay, I'd come straight from brunch, and I played for a while before that figuring out the solution to a new V4, plus flailed/flashed another V3, but still...was a short trip, and I'm glad I didn't pay at the desk for it.
Now if only anything was open over the holidays!
Ding, Dong, the Witch is Dead
One of my favorite moments of a show is when the Big Bad gets dragged off stage.
Not, however, because of "justice being done." In fact, my sympathies are usually with the Miss Hannigans and the Miss Minchins. (Note in passing that horrid Disney tradition of casting an older woman, usually unmarried, as the chief villain.)
Why it is my favorite moment is that it marks the point at which I start turning off microphones that will never have to be turned on again. Most shows build to a peak, drawing together all the various plotlines, which means every character with a mic will have an important speaking line in the climactic scene.
Because the trick to a good mix isn't remembering which mics to turn on. It is knowing which mics you can turn off.
The fewer open mics, the less noise, the more clarity, the more room before feedback, and the less chance for accidents. So it is a wonderful feeling to be able to pull down a fader and know that you can finish mixing that evening's show without ever needing that particular fader up again. The scenes following the climax are a series of "good byes" to your open channels of wireless mic, as one character after another is removed from having anything further to say (or sing).
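There is a standard rule of thumb lurking behind this. Every open mic adds its own pickup of the stage and room to the mix, so the available gain-before-feedback shrinks by roughly 10 * log10(NOM) dB, where NOM is the number of open mics. One open mic versus eight costs about 9 dB of headroom; closing four of those eight buys 3 dB of it back.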
This is also true of ensembles. In a typical ensemble of twelve singers, two are in a quick-change and won't be singing, four are out of breath and aren't singing well, and two sing badly all the time anyhow.
The trick to getting a good ensemble sound is not in opening up every microphone that might have some lyrics coming into it. The trick is, instead, to find those few microphones which have a strong melodic or harmonic line in them. And you let the wash of natural sound (plus mic leakage) make those six open mics sound like they are carrying an ensemble of twenty.
It is a delicate balancing act between getting a "full" sound and leaving out those voices that are panting, off pitch, touching their microphone, or whatever. And between getting a clean sound, and having open mics for all those random lines of dialog that will inevitably be given to a character who never speaks or sings at any other point in the entire show.
And you risk, of course, making the call to cut the mic of an actor who is fumbling with their hat a split second before they blurt out the single line that is next in the post-Sondheim song in progress. Or being distracted trying to find that one actress who is completely off pitch and blowing the entrance of one of the stars.
And you'll never be able to explain why you missed the line. Because you can sort of push through a grudging understanding that the more open mics, the more chance of feedback. But you cannot make directors and producers understand the mindset that looks not to which mics you can have up, but instead to which mics you can safely turn off.
Friday, November 22, 2013
Backline and IEMs
We're looking at IEMs. As an interim experiment, we've got the drummer on headphones now. He is very happy.
Part of the migration to IEMs (In-Ear Monitors) is providing each musician with their own volume control. In fact, with their own little mixer, so they can adjust to taste without having to get word to the FOH mixer. (There is no monitor mixer in smaller houses.)
What I've done for several previous shows is run 2-4 channels of monitor back to the pit and set up a micro-mixer on a rehearsal cube. That runs to a powered monitor and/or headphones. The easiest instrument to add is a keyboard; you just y-cord it right at the DI box.
In this case, the drummer is getting keyboards (over a y-cord), the same vocal bus as the conductor (contains every open wireless microphone), and for "more me," I set up a pair of overheads and hard-panned them left and right. I tried the rig myself, and I'm no drummer, but I really felt like I had ears in the space instead of being inside headphones. But vocals, and the conductor's keyboard, were still coming through nice and clear.
Close-micing wasn't working anyhow. There's too much variety in what he does, and it was leaving ride and tom out of the picture anyway (not enough input channels). So it is now a pair of condensers at about two feet overhead; one over the hat and one over the ride, both equidistant from and pointing at the snare. It isn't quite the tight sound I want for the more "pop" parts of the musical, but it does a lot better at capturing the variety of things he gets into during the show.
When we get into IEMs, we are probably going to be able to send a pre-processing clone of every pit input back to the IEM master, and then, using something like the new Behringer jobbies, make custom mixes for each musician at their station.
And one of the channels on that system will be ambient/talkback, so the musicians can hear each other and the conductor can say, "She's off again; quick, back to bar 44 and vamp on it" or, "No, no, concert Bb."
And maybe even I or the FOH du jour can be on this loop, so during tech we can actually communicate.
The two goals are, of course, for the musicians to be able to hear what they need, and to reduce wherever possible the backline contamination. For most musicals I've done (in a multitude of smaller theaters), keyboard monitor leakage has been in the top three noise sources, vying for top spot, usually, with bass amp leakage and drums. And by the time you reach the five worst noise-makers in the pit, you can include the vocal monitor from stage to conductor; in many small shows, I've fed back on the conductor's monitor well before I've fed back on the mains!
The problem is, acoustic musicians on headphones are going to be no more conscious of how much sound they are pumping into the air. Putting headphones on the band may keep them from blasting the audience with their monitors, but they are still going to blast the audience with brass and drums. And it will still be a chore to try to get a balanced sound out to the audience.
At least it beats what happens with monitor speakers. What has happened there --more than once!-- is that the conductor turns the vocal feed all the way up until the pit monitors are feeding back, then starts whining he can't hear his own keyboard anymore, and runs out and buys a new and bigger keyboard amp and points it at his ankles turned up to 11... at which point you can't hear the rest of the band, or the singers, and I can't even bring up the vocals because they are feeding back via the pit.
At least with in-ears, the only people who will go deaf are the musicians. And the good models even have limiters to (partially) protect them from themselves.
"Simplicity," riiiiight.
Finished my first pair of pants. I took them in by eye and that shifted the waist; it feels comfortable and the line is good, but it doesn't lie straight on the hanger. Same comment for the legs; they don't quite press flat -- more so than I am used to even for jeans with a generous ease in them. I also left off a bunch of the decorative stitching (I want to wait on stuff like that until my new presser foot and guide show up, anyhow). But since they are black, I can probably get away with wearing them to work.
Picked up three yards of a very nice looking heathered cotton-poly at just five bucks a yard for my next endeavor. It is a speckled grey that should be dark enough for work. I think I might need to look at a McCall's pattern next, though. I don't like either of the Simplicity trousers I have.
Also cleaned and oiled the Bernina today, and it is purring. Berninas are described by many as noisier machines, but I like it. It sounds like Industry.
Isn't it the way, though? We humans are hard-wired to want to learn things. If we can't learn where the water hole is or a better way to hunt, we learn the names of all the actors who have played The Doctor, in chronological order (your discretion on whether to include Rowan Atkinson and/or Peter Cushing!)
Trouble is, although "life" is not necessarily more complex today as compared to any previous century, there are a great many more specialities you can indulge in. And fields keep evolving. I know how to build mods for games that no-one plays anymore, and I have hard-won skills in software that I'll never run again. And skills with hardware and work-arounds that are mostly replaced by easier solutions.
In theater alone, I know how to construct an old-style canvas flat with glue and tack hammer, how to run an old carbon-arc follow-spot, and even how to lash flats and use stage braces. Do I really expect to need those skills again?
And, yeah, it is kinda fun to walk down the tool aisles of the local OSH going, "I know what that is, and that, and that, used to own one of those, still own one of those..."
Oh yeah. In true good-money-after-bad tradition, once you've learned a skill, you feel driven to keep it up. Heck, you feel this way even if it turns out you never were any good at it in the first place. You feel this compulsion anyhow to develop a completely useless and extraneous skill, because it is part of your self-image that you had that skill in the first place.
Which is why this week I've been trying to schedule classes in machining skills I've never had, reading up to improve and extend mixing skills I have, and running a ton of fabric through the Bernina developing sewing skills I thought I had (and turn out, largely, not to have had). And bemoaning the lack of time to program, play ukulele, draw, write, and do any of the other hundreds of random skills I've picked up over the years.
Sunday, November 17, 2013
Taping Up Body Mics
Or, "Warts and Angler-Fish."
A lot of people have been asking, so I dedicate this post to it. Would do better with pictures, I know.
Cheek Mic (what some of my younger cast call the "you've got a parasitic infection" look). If there is nothing unusual, like glasses, bushy sideburns, a hat, or a mask to get around, this is where and how it goes:
Feel for the cheekbone -- the zygomatic. It starts at the hairline at roughly the lower margin of the eyes, and for the first centimeter or two makes a line that points towards the philtrum (the space between upper lip and nose). Jaw muscles originate just below this bony prominence; press a finger against your own cheek and make a chewing motion and you will feel how on the cheek, you have movement, but on top of the cheekbone, the flesh remains almost still.
Starting with the microphone under the shirt or blouse and coming up through the neck hole in back, pull the mic over the top of the ear and stretch it along the zygomatic -- just on top or slightly below in the notch. It should be along the same line as the bone, making a fairly straight line as it points to the margin of the upper lip. Avoid the temptation to angle it lower.
Pull the mic out until there is barely one width of tape between the head of the mic and the start of the hairline (aka the sideburns). Tape there. For younger cast I buy 1/2" tape or tear the 1" in half. For women and children, you can usually brush aside much of the stray hair in front of the ear to make sure you are not putting tape on top of hair.
So that's four things to watch out for; don't pull the mic out too far, don't tip the mic down or otherwise allow it to get on the soft part of the cheek, don't get tape on the head of the mic, and don't tape on top of hair (it is uncomfortable for the actor and doesn't stay on, anyhow).
On most actors, dress the mic behind the ear and tape once behind the ear; when the space is large and clear, actually behind the ear a bit above the lobe -- I've found a narrow strip of tape done at an angle works well -- and when the space is small or there is a lot of hair, just below the ear on the broad mass of the sternomastoid itself.
For actresses with lush hair (particularly girls) you can save them tape behind the ear and use a bobby pin or hair clip right where the hair tucks over the back of the ear.
The last piece of tape in the typical three-piece arrangement is on the back of the neck. I used to recommend low, around the 7th cervical vertebra, but I've changed that now to a 3/4 position, along the mass of the trapezius and just above the "V" where shoulder line meets neck line.
Okay, I've given a bunch of exceptions here already, but really, for twenty actors you can go through 18 of them with the basic three pieces of tape, slap slap slap. I've done a cast of twenty myself in under fifteen minutes.
Hair Mic. What one of my younger cast called the "angler fish" look. Also, when done wrong, it can look like a caste mark. Seriously, there's not enough sonic difference between just below the hairline and inside the hairline to make it worth staring at a microphone all night.
The mic goes on the forehead. If the actor has hair with an off-center part, this may give you a better place to lead it, otherwise just go center. Tape just behind the head of the mic, and as close to hairline as you can get...if you have to tape. For most actors, it is better to pull the mic up until it is just barely peeking out, and secure it with bobby pins or hair clips.
Work the mic up along the top of the head and back, pinning as you go. The slowest to dress are actors in natural hair. With wigs, you either have a wig cap, or the actor's own hair in coils, and it is easy to pin to or weave the mic inside.
In particular, girls with wigs or "trouser" roles will have the bulk of their hair pinned up in a bun or french roll. You can pull the mic through that and let it dangle in back. Then all you need to do is pin the length up to where it meets the hairline in front.
When the hair can't support the cable at the fragile neck area, this will be a piece of tape.
Hair mics take longer, and take more experience and judgment in figuring out how best to deal with each individual actor. The trade-off is that they, of course, sound better.
Lapel Mic. Completely inappropriate for most live theater, but you may have to do it for a presenter or work a lecture or talk some time.
No tape. The mic goes into a clip, which clips to clothing. The trick is to get out of chin shadow; don't go on to a high collar. As a rule of thumb, feel for the top of the dagger-bone -- below the clavicular notch. Or the other rule of thumb...imagine the microphone is a little light, and it should touch the lips without the chin casting a shadow on them. In most cases it looks nicer on clothing to be to one side or the other, on the inside edge of the lapel on a sports coat or similar.
In the case of, say, a turtleneck sweater, make a judgment call about whether you'd prefer to be watching a puckered sweater with a mic attached in the middle of the fabric (the thicker, looser weave, and more colorful the sweater, the better this works), or listen to a poor voice from a position that is up too high.
Friday, November 15, 2013
If you ain't picking seams, you ain't learning
I guess that means I'm learning.
This week has been my first serious project on the Bernina. A pair of pants. And, as it turns out, the scale-up is almost perfect here. I would have been over my head with a frock coat, and probably bored with another pillow case. On pants, I'm learning.
Learning, among other things, that when people say Simplicity patterns tend large (and their 1948 very much runs large), they aren't kidding. Using Simplicity's own mapping of pattern size to measurements, and a fresh set of measurements I took off my own body...I ended up with a waist about 4" too large!
Seriously, the things were clown pants. And isn't it always the way, that the seam you have to unpick is the seam you made right after you switched from "machine baste" setting to something tighter?
I don't have a good feel yet for whether this is a simpler pattern or a more complicated pattern for what it is. I do know I basically had to just build it end to end; I couldn't make head nor tail out of the instructions and the many, many pieces until I was actually stitching them together. And not always then, either -- pulled apart the pockets two or three times before I figured out how they were supposed to work.
Now that I understand this pattern, there are several things I'd do differently. There's no reason to put interfacing in the fly, for instance, although the overlap could sure use some. And there are some basting and marking steps I could cut now. Biggest lesson so far, though? Measure your seam allowance. Having a clean seam allowance is just too critical to too many other stages to make it worth being sloppy in cutting it.
Also discovered black is painful to work with. Finally gave up on the stupid tailor's chalk and switched to white grease pencil, which I could actually see. It is a heavy, relatively coarse-weave "tent canvas" I'm using that frays a lot and is basically a total pain. The bolt of fabric I carried to the front was a yard short, and this was my hasty second choice.
And my little travel iron actually puts out enough for fusible interfacing. I think I bought the thing back when I was in the Army. It goes back to at least 1986 -- but then, so does my coffee filter.
Since learning one new thing at a time has never been my way, I also took my first class at TechShop this week. I'm now permitted to use the cold saw...and more powerful versions of tools I have myself. Many more classes before I'll be able to mill any metal...especially if I want to CNC it.
Monday, November 11, 2013
The Problem of Backline Contamination
Sound levels are relative...to a point.
Within this simple phrase lies the reason why backline contamination is such a huge problem for live sound in smaller venues.
First, consider the setting for which amplified sound was first introduced: the big open-air concert. Or, similar in effect but looking completely different, the studio session.
Everything that reaches ears comes through the mixing board. It is pretty much that simple. The musicians play and sing, a selection of microphones (and pickups) take those elements that are deemed essential to create the desired sound, those signals are processed to taste and mixed together, and the final result is broadcast from line arrays...or is compressed for streaming or cut into a master disk or whatever.
And perhaps this gives rise to a problematic philosophy. Sound engineers and designers tend to come from this world of control. There was a point reached in studio sessions when each musician was isolated in a sound-proof booth, unable to see the other players, unable to hear anything but what the engineers sent to his or her headphones. Thankfully, most studios have backed off from that, embracing the interaction and life -- and moving to a philosophy that treats the ensemble as the primary source and spot-mics only to bring out nuance in individual instruments.
But we still have this lovely illusion that, since sound is passing through the mixer, we should be able to control what is heard by the audience electronically. And this just plain isn't so.
In a small house, theatrically-trained actors are heard easily without amplification. So are singers...the only problem can be if the accompaniment is overpowering. Which it can be. Un-amplified, brass, drums, and even piano can be enough louder than even a trained voice to make the result unbalanced.
The problems become even greater in the medium-sized house. Through the range from club-like to 2,000 seats, a significant part of what reaches the audience's ears did not come through the sound system.
Levels are relative. It is as appropriate to say "The band is too loud" as it is to say "The singers are too soft." The problem is, there exists an apparently simple solution to the latter. So the approach in the majority of spaces is to try to deal with the problem by amplifying the singers -- usually via wireless microphones.
In the right situations, all that is required is gentle reinforcement. The microphones near-invisibly add a few more dB, and the singers rise above the accompaniment in a natural way. The experience is acoustic; the sound appears to come from the singer and interacts in a natural way with their surroundings, supporting a sense of reality.
The same measures can be taken when a band is not balancing with itself. In many cases the traps will overpower some of the reeds. And often as not there are keyboards, or electric bass, which don't make significant sound without electronics.
My preference is to treat a pit acoustically; for every instrument playing in the pit to be heard in the pit. Keyboard players have monitor speakers that are turned up enough for the other players to hear them. This allows the pit to adjust to each other and act like an ensemble.
This doesn't work so well when some of the elements have to be amplified over others. And it confuses many people tremendously when you do something like mic a drum. Because a "drum" isn't an entity. It produces a variety of sounds that, to sound right and sound real, also have to balance with each other. In short, the drum is so loud you can't hear the drum. So I mic the drum to be able to hear it over the drum.
(Or more specifically, I mic to hear the nuances of the snare and the click of the hat -- sounds which get masked by the volume coming off the shells).
And this gives rise to the perception of a panacea, in which every single note you will get from anyone in the production will be, "So-and-so's microphone needs to be louder." Always louder. Never trimming the competing elements. Never understanding that loud is relative, and that making the chorus softer is a better way to allow the solo to be heard.
Because sound is relative, to a point. The point being there are soft edges pushing up into concrete ceilings. As you raise levels, you approach feedback threshold. Far short of actual feedback, sounds will begin to take on an edgy, brittle shimmer, like they are being played through one of those tin-can-and-string telephones.
And you can push the feedback threshold back through judicious equalization. The problem being that you begin to cut into the sound you want.
Even if you avoid feedback, the room itself has acoustic properties. First you begin to drive the air of the room into resonance. Then all the materials in the room begin to vibrate in sympathy. All of these physical effects generate harmonics of their own. As you increase the level of Sound A higher and higher in the speakers in order to make it louder than Sound B, you also produce a Sound C; the room itself. The louder you go, the louder the room is, until all of these secondary sounds are as much competition as the original problem you were trying to solve.
Even in a perfect room, with a perfect system...say, if you gave each audience member a pair of personal headphones, physics still does not allow you to arbitrarily increase volume. Physics -- and biology. The human ear is non-linear, and begins to distort at higher sound pressures. The ear accommodates quickly; what was nice and loud two minutes ago sounds normal now, and ten minutes later begins to sound wimpy and soft. The ear in fact begins to shut down after sufficient exposure to higher levels of sound. First the high end rolls off, meaning everything sounds dull, then the perceived volume drops.
No-one ever wins in volume wars.
So what does this have to do with the backline?
The problem is simple. The leakage from the pit -- loud acoustic instruments like brass and drums, and the monitor levels of keys and bass -- is heard by the audience. As a mixer, you are trapped between two absolutes; the highest practical level you can amplify any sound, and the existing sound that is in competition.
Backline leakage is a problem in almost every way. First, it is sheer volume. Weak singers may not be able to be heard over the natural, un-amplified sound coming out of the pit. Second, it is unbalanced; the backline emphasizes certain instruments at the expense of others. Third, it has a poor spectrum.
This takes a little more explanation. Sound is semi-directional. For a given radiator, the pattern approaches omnidirectional as the frequency lowers. Frequency dependence also counts in reflection; given the scattered surfaces of a typical sunken orchestra pit, the higher frequency content bounces around and gets lost, with less of it escaping the pit. The lower-frequency content treats obstructions like a river treats a small rock; it flows around, and escapes the pit rather less attenuated.
This should be simple to understand. It forever boggles my mind that even many musical directors don't get it. The sound of a band on stage is like a friend across from you at a table. The sound of a band in a pit is like the sound of your friend on the other side of a door. And it isn't made better by asking the friend to talk louder!
This is why, for any situation but the smallest or most open, a pit band won't sound its best without a small amount of carefully selected amplification. Not to make them LOUD. But to make them CLEAR.
Given this, the amplified sound of the band is up against...the leakage from the pit. Just like trying to power up singers over the band via wireless microphones, you are trying to power up the "good" sound (the softer instruments, the nuances of specific instruments, the higher frequencies and other subtleties of performance) above the low-frequency, time-smeared, unfocused mush that makes up most of the backline leakage.
Again, this isn't something the band can do themselves. If you hit the drum louder, the "click" of the stick gets louder, but so does the "thooooummmp" of the shell. Because hearing is non-linear, and increased volume can lead to increased resolution, you will get a slightly more defined drum sound if you just increase the player's volume. But it isn't anywhere near as nice as the amplified sound that selectively takes just one element of the sonic picture and presents it to the audience without any of the filters of the local geography between the drummer and the audience ears.
And bands, too, drive the rooms. The louder they play, the more the set walls, the other instruments, the air itself vibrates in sympathy. All these extraneous and distracting noises get louder and louder as well -- and in a non-linear fashion.
This is why backline leakage is the bane of sound techs in every medium-sized and smaller venue. In clubs, it is near-impossible to fix a band's sound via external electronics. If the guitarist insists on turning up his cab, then loud guitar will be all anyone hears -- the rest of the band might as well go home.
In the theater, in the pit, it isn't quite as dire. But the basic truth remains; if the band plays loud, if their monitors are loud, then the sound will suck.
Because the mixer is up against the concrete wall of sonic maximums. When the band is loud, it leaks into the very microphones that are on the singers. I've had plenty of shows in which bringing up the chorus was exactly as if you'd turned up the band 5-10 dB. There are times when the drums are so loud they are -- quite literally -- louder in the singer's microphone than the singer is. You would get the singer "louder" (relatively, that is), only by turning them down.
To get the singers to sound decent you need to support them over the total sound of the band. To get the band to sound decent, you need to support them over the distracting leakage from the pit. And you have an absolute limit as to how hot you can run.
Really, it would be better if the band could be more controlled. But that is something that does not seem to happen.
The Loneliest Seat in the House
As a mixer for a musical you are a bit of an alien to the theater. All of the other jobs -- from dressers to follow-spot operators -- are well established in the history of theater, but amplification and live sound mixing are still new to the trade. We are more from the world of live music, from concerts and clubs, than we are from the world of greasepaint and limelight.
And you are physically isolated. It's an isolation you share with the lighting tech, and often the Stage Manager -- but they have headsets linking them electronically to the rest of the production. In the long spaces between cues there is chatter on headset -- news and gossip from backstage, and the social grease of people working long hours together.
They also have a nice little booth to hide in; you are usually alone on the floor in full view of the audience.
Of course it goes without saying that the Stage Manager has the ultimate loneliness; the loneliness of Command (insert your favorite Captain Kirk scene here). Our responsibility is not as heavy, but it is no small weight in itself.
We are the final link between actors/musicians and the ears of the audience. Sometimes this makes you the mastering engineer; the person responsible for taking all that effort and heart that so many people put into the music and giving that final polish to make it the best it can be. Other times you are like the last driver with a clear chance to avoid the accident.
And you switch between these modes with blinding speed. At one moment, you will be gently riding a mic to put that last little bit into the crescendo of an emotional number. And then there is a screech of sound and in an instant you are in damage control mode, forced to make a choice between multiple unpalatable alternatives...without any time for deliberation.
On a very good night, someone might give you an atta-boy for responding quickly to the plug that popped out of a DI in the pit and subjected the audience to the growling buzz of unfiltered 60-cycle. On a very, very good night, you might get a compliment along the lines of, "We didn't hear any noise or popping this time." No matter how many problems you solve before the curtain opens, no matter how many prophylactic measures you take (like subjecting a poor actor to multiple mic changes just because you thought you heard something in their mic), no matter how quickly and how effectively you fixed, charted around, or otherwise ameliorated a problem, the only feedback you will ever get is on the ones that slip through.
Thursday, November 7, 2013
Here We Go a-Morrowing
So the V150 model is finished and in my Shapeways store.
Here's how it looks with a coat of paint and a few bits of additional dressing:
More notes on scale; these are old Morrow Project miniatures from the 90's, thus the Ral Partha Dwarf proportions. Technically 28mm, and as you can see, they seem roughly proportional with a vehicle in 1/56 scale. At least, it is as close as I could get to 1/56 by working with the quoted length of the hull, from the blueprints I had available.
To recap the scale process: I scanned blueprint images and cropped and scaled them to be square and dimensional to each other. I took the pixel length of the largest scaled item that appeared in any one drawing and extrapolated the real-world dimensions of the blueprint space.
Within Carrara, I set the working box to the size of the blueprint space; this meant that if the model I was building was lined up accurately on the vehicle in the drawing, it would be the correct real-world size. This worked out to within a small degree of error (a fraction of one percent).
The two biggest problems I had within Carrara were, first, that I was working metric while most of the dimensional information was in feet and inches. So a lot of multiplying by 2.54 to get the right units into the modeler. The other is that Carrara, stupidly, only displays two digits to the right of the decimal point. This means that a vehicle sitting within a ten-meter working box can not have any numerical measurement that is smaller than 10 centimeters.
Which is ridiculous! Any of the detailed parts, then, could only be lined up by eye against a grid (which could be set finer than 10 cm). Once again, it is really stupid software for anyone doing a model more elaborate than the Linux penguin.
The drawback of the method is that when I moved into checking for printability I had to divide by 56 all the time to find out what the print size of various parts was going to be. Finally I just reset the grid to be at 1 mm in the final print size of the model (aka 56 mm in world scale), and eyeballed everything to make sure I was staying within the Design Rules.
Since I knew the longest dimension of the completed model in real-world scale, all I had to do was divide by 56 to figure out what the size of the scaled mesh should be. The actual export from Carrara was at arbitrary scale (Carrara doesn't do scaled obj format). But all I had to do was type the correct longest dimension in the scale box in Hexagon 2.5, and the stl exported from there was correctly scaled for the Shapeways printers.
The last scale trick was to line up all critical-fit parts the same way they would be when assembled (as the printers aren't always equally accurate in the x, y, and z axes), and export them together (to make sure they are all scaled the same ratio and will fit properly after printing). In this case, I attached the different parts together with sprue to make it easier for the lads and lasses at Shapeways to handle what otherwise might be small, fragile parts.
Monday, November 4, 2013
Eating Like a Horse...
...a cup of rolled oats and an apple. That was breakfast one day of tech. Lunch was a Clif Bar. Dinner was a little better; two servings of salad and the last Top Ramen (actually, Ichiban.)
At that time, out of the last five people who had promised me a check, none had paid up. The show I had just finished working reneged on the stipend the outgoing Production Manager had promised in his original email, and on the extra hourly the outgoing Artistic Director had promised verbally after I spent the first week of my "design" tracing, testing, and replacing wires all over the building.
That said, the next two checks are not late, per se. One is rental for a run that was extended an extra week or two. The other is a gig that sometimes pays on the day, and sometimes pays a few weeks later when the school gets around to dealing with it. Unfortunately on that gig (the East Bay Mini Makers Faire) I had to hire an assistant and I paid HIM. Out of my own pocket.
There is no petty cash drawer at the company where I'm currently in production. I spent money on microphone parts out of pocket under the understanding that I would be reimbursed promptly. Two days ago, they finally coughed up the check. Which was one day too late to save me from seventy dollars worth of overdraft charges.
Saturday I had in hand the check for designing the show. Fortunately I calculate on a per diem, not on an hourly, but $425 for two weeks' work is still pretty damned shy. Calculating by the hour...two weeks of 10-14 hour days makes for a base pay rate somewhere around $4 an hour. And to add the last insult to that injury, the check of course was handed to me during a twelve hour day at the theater. By the time I could make it to the bank, I'd been hit with yet another fee (this time from my gym).
At this company, mixing the show is treated separately, and gets paid a very decent stipend. On closing night. So there will be a whole lot of rolled oats in the weeks to come.
Oddly enough, I felt pretty good for most of the week. I guess I needed the diet.
It's Like Another World
I just opened and am in the middle of mixing "A Little Princess," the 2004 musical developed at Theaterworks with hopes of making it to Broadway. My personal feeling is it will be a while before that happens. Twin story-lines and a surfeit of (sometimes unfortunately similar) songs cloud the underlying development and emotional arcs; what it feels like too often is a mere string of scenes, with no particular reason why one scene follows another. The current production is colorful and energetic, at least, so it is a decent night's entertainment.
(Image -- courtesy of TBA -- has nothing to do with this production, but at least is a place where I have worked.)
Technically the show has many challenges. I'm still struggling to define the "sound" for the show, which is being unveiled only slowly as we finish solving issues of monitors, band balance, off-stage singer placement, and RF interference. I'll get into those, and the lessons learned, but probably in another post. For the moment I'll say only that this company, like many theater companies, has trouble accommodating the "music" part of "musical." Music is constrained and degraded by choices across the productions, from poor pit placement to limited rehearsal time.
(And at this company in particular, FOH is thought of as a trade, not an art. It is considered something that could be done by the numbers by anyone with nimble fingers and sufficiently detailed notes from the director. The kinds of real-time artistic choices (and compromises) you have to make whilst flying the desk in front of an audience...well, even conceiving of a world in which this is part of the job seems beyond their reach.)
On the Effects Design side, as a passing note this is the most synth-free show I've done yet. The only sound of purely electronic origin that appears is the "sound" of the hot desert sun just before "Timbuktu Delirium." All the magical spot effects are instrumental samples (and a wind sample); rainstick, bamboo rattle...and an mbira brought back from Tanzania and played by my own clumsy thumbs.
But it is specifically effects design I am thinking hard about right now. I want to split the position again. I did three or four seasons with a co-designer; I engineered and set the "sound" of the show, he designed -- created and spotted and fine-tuned -- the effects.
I enjoy creating sounds. I enjoy it very much, and it is one of the things that brought me into theater in the first place. But I have some minor skill as a sound engineer and FOH, and that is a rarer skill in this environment. We can find someone else to create sound effects more easily than we can find someone else to engineer and mix the show.
(Actually, I think it might be best for this particular company if I left completely. Because maybe someone younger and better able to express themselves would be able to break down some of the institutional barriers and move sound in that theater up to the next level.)
(The risk, as in many such technical artistic fields, is that it would be just as likely for them to find someone without the appropriate skills, and for sound to suck in such a way that it drives audiences away and drives talent away without anyone involved being able to specifically articulate that it is because the sound sucks.)
(There's a common argument made; that some elements of technical art -- color balance in lighting, system EQ in sound, period accuracy in architectural details -- are, "Stuff only you experts notice." That most of the audience will be just as happy with crap or wrong crap. I strenuously disagree.)
(If you put a dress of the wrong period on stage, no audience member will leap to their feet and say, "That bustle is 1889, not 1848!" But they will have -- even a majority of the audience will have -- a slight uncomfortable feeling, an itch they can't scratch, a strange sound coming from an empty room; a sense that Something is Not Right. And it will make their experience less than it could be. They may never write on the back of the feedback card, "The reverb tails were too long and disrupted some of the harmonies," but they will write things like, "The music could have been better.")
(Many audience members, and a disheartening number of production staff and management, have no idea of 9/10ths of what my board does. But when it all works correctly, the tech-weenie stuff we FOH types talk about in our own indecipherable tongue brings out results that are easy to put in plain language and easily heard by most ears; sound that is "pleasing, well-balanced, audible, clean dialog, exciting, full, etc. etc.")
But back to the subject.
Thing is, on a straight play the Sound Designer is almost entirely concerned with Effects. They can sit in rehearsal with a notebook, spotting sounds and transitions and underscores, taking timing notes, even recording bits of dialog or action in order to time the effect properly. During tech, they are out in the audience area where they can hear how the sound plays; and relay those discoveries back to the electricians and board operators as to volume and placement.
On a musical, you are also trying to deal with the band, their reinforcement, monitor needs for band and actors, and of course those dratted wireless microphones. And far too many of the effects are going to happen when there are already a dozen things happening that demand your attention.
In my current house, two other factors make the job nearly impossible. The first is that due to budget we have carved down from up to four people on the job (Mic Wrangler, Board Mixer, Designer, and Sound Assistant -- during the load-in only), to....one.
I am repairing the microphones, personally taping them on actors and running fresh batteries back stage, installing all of the speakers and microphones and other gear, tuning the house system, helping the band set up, mixing the band, mixing the actors...and also all the stuff that has to do with effects.
The other factor at the current house is short tech weeks and a very....er...flexible...approach to creativity. We feel it is important to celebrate and sustain all those flashes of inspiration that come even in the middle of a busy tech with only hours left before the first audience arrives.
In other houses, we go into lock down earlier. Only when an idea is clearly not working do we stop and swap out -- and even then, it is understood by all parties that this will have a serious impact on every department and thus is not undertaken lightly.
When scenes are being re-blocked up to minutes before the doors open for the opening night audience, the idea of being able to set an effect early in tech, stick it in the book, and not have to come back to it, well...
This can be done. I built my first shows on reel-to-reel decks, bouncing tracks multiple times to build up layered effects. Modern technology means we can be very, very nimble. But it is getting increasingly difficult to be this nimble on top of the musical needs of the show. This is why I want to split the job.
Two of the technical tools I've been relying on more of late are within QLab. One is the ability to assign hot keys. This way, even if I've incorrectly spotted the places a recurrent effect has to happen, I can still catch it on the fly by hitting the appropriate key on the laptop during performance.
The other is groups and sound groups.
Most effects you create will have multiple layers. Something as simple as a gunshot will be "sweetened" with something for a beefier impact (a kick drum sample works nicely for some), and a bit of a reverb tail.
But just as relative frequency sensitivity is dependent on volume, dynamic resolution is also volume-dependent. And with the human ear, these are time-mapped and spatially mapped as well; the ear that has most recently heard a loud, high-pitched sound will hear the next sound that comes along as low and muffled.
Which means that no matter how much time you spend in studio, or even in the theater during tech setting the relative levels of the different elements of your compound sound, when you hit the full production with amplified singers and pit and the shuffling of actor feet (and the not-inconsiderable effect of meat-in-seats when the preview audience is added to the mix!) your balance will fall apart. The gun will sound like it is all drum. Or too thin. Or like it was fired in a cave. Or completely dry.
So instead, you split out the layers of the sound as stems and play them all together in QLab. Grouped like this, a single fade or stop cue will work on all the cues within the group. And you can adjust the level of the different elements without having to go back into ProTools (or whatever the DAW of your choice is).
This also of course gives flexibility for those inevitable last-minute Director's notes ("Could the sound be two seconds shorter, and is that a dog barking? I don't like the dog.")
(How I construct these: first I get the complete sound to sound right, preferably on the actual speakers. Then I selectively mute groups of tracks and render each stem individually. Placed in a group and played back together, the original mix is reconstructed. The art comes, of course, in figuring which elements need to have discrete stems!)
(QLab will play "close enough" to synchronized when the files are short enough. For longer files, consider rendering as multi-channel WAV files and adjusting the playback volumes on a per-channel basis instead. I tried this out during "Seussical!" but it didn't seem worth the bother for most effects.)
Thing of it is, though...
My conception of sound design is one of total design. Even though it takes the pressure off to have a sound assistant running sound cues (or on simpler shows, the Stage Manager), I consider it part of the total sound picture of the show. To me, the background winds in this show are as much a part of the musical mix as is the high-hat. As a minimum, I'm tracking effects through the board so I have moment-by-moment control of their level and how they sit in the final mix. It is sort of a poor man's, done-live rendition of the dubbing mix stage of a movie.
The avenue that continues to beckon has been the idea that, somehow, I could do the bulk of the work offline and before we hit the hectic days of tech. This has always before foundered on not having the props, the blocking, or any of the parts I usually depend on to tell me what a sound should actually be (and to discover the all-important timing.)
Since this won't move, it is possible that what I have to explore is more generic kinds of sounds. I've been using QLab and other forms of non-linear playback for a while to make it possible to "breathe" the timing. Perhaps I can explore taking more of the development out of the effects themselves and building it into the playback instead.
Except, of course, that is largely just moving the problem; creating a situation where I need the time to note and adjust the cueing of built-up sound effects, as opposed to doing the same adjustment on the audio files themselves. And in the pressure of a show like the one I just opened, I don't even have the opportunity to scribble a note about a necessary change in timing or volume!
And the more the "beats" of an effect sequence are manually triggered, the more I need that second operator in order to work them whilst still mixing the rest of the show. There's one sequence in this show -- the nearly-exploding boiler -- that has eight cues playing in a little over one script page. There are already several moments in this show where I simply can not reach the "go" button for a sound cue at the same time I need to bring up the chorus mics.
Perhaps the best avenue to explore, then, is generic cues; sounds so blah and non-specific they can be played at the wrong times and the wrong volumes without hurting the show! Which is the best argument for synth-based sounds I know...
(The other alternative is to make it the band's problem. But they are already juggling multiple keyboards and percussion toys and several laptop computers of their own and even a full pedal-board and unless it appears in the score, they are not going to do a sound effect!)