Saturday, December 28, 2013

I'm writing this on a MacBook Pro, which is set up in front of (and partially blocking) the "Gigabit Ethernet" (aka old G4) tower I'm trying to phase out. Unfortunately the latter machine still has most of my password keychains and bookmarks on it, as well as some applications that won't run on the newer machines. To my right is a white iBook set up as a CD player, playing out of some cheap Radio Shack speakers I picked up for spot sound effects playback on "Drowsy Chaperone." It's playing Yuko Hara now, but on the stack of CDs (MAN IN CHAIR: "Yes, CDs!") are several that I transferred from cassette tape. And at least one of those was in turn taped off a record! (Yes -- I hold on to music a long time.)
I was doing this transfer process on the Aluminum PowerBook, which is the one I usually leave at the theater for QLab playback (I'll likely be loaning it out next week for just that). Unfortunately it has a bad optical drive, and the Al is even worse to take apart than the Ti. And I have to save the optical drive on the Ti, because that George Washington's Hatchet of a machine (every part on it has been replaced at least once; other than that, it is the same PowerBook I've had for ten years) contains my only full working directory of sound effects and spot music.
At least I've finally retired the Wallstreet (G3 PowerBook), which I was keeping around as the only machine that could talk to my old 11x15" scanner. And the Kaypro IIx is only staying around as a conversation piece, it being the first computer I ever owned (back in 1985).
Now, if you started counting AVRs, I have a dozen micros wandering around that I've written software for and that do various small useful things...
Tuesday, December 24, 2013
Will it Blend?
Downtime between shows. Which means filling as much of it as possible with paying work. I've got some installation work at one theater, maintenance stuff for another. With a little props work for a friend on the side.
And I'd like to put some more Poser content in my online store. I need to model more efficiently. Knowing how to make a contiguous welded mesh was good training for the kind of models that are required for 3d printing, and they do render more nicely, but I need to use prims and detached planes more. If nothing else, those are easier to UV map.
I'm also thinking less geometry, more texture mapping. Doing it all in the geometry is show-off stuff, but there isn't that much need in the Poserverse for "Hero" props (borrowing a term from the film business: the highly-detailed and fully-functional prop used for close-up shots).
And more than anything else, dump Carrara. So I'm pushing to get over the hump of the learning curve now and start working in Blender instead. It has a tool set I'd gladly pay for, but price is not the primary reason to go with open-source software. The reason is that open source is built on openness: full communication, sharing. Commercial software is in the business of hiding the flaws (in fact, actively stifling criticism), selling the flash (saddling otherwise plausible software with flashy tricks designed only to pull in new customers), and hooking the fish: keep selling upgrades, keep promising bug fixes, and of course keep the file formats proprietary so the customer faces losing all their own work if they try to switch.
Here's the dialog for bug fixes on Carrara:
"I've got a bug to report."
"Which version?"
"6.0"
"We aren't supporting 6.0 anymore. Buy the upgrade to 7.0 -- 20% off of what you'd pay for the full version -- and we'll talk."
"Okay. I just upgraded. Bug is still there."
"Well, don't expect us to fix it in 7.0; that's already released. Look for it in the 7.5 release."
"Is that a free patch?"
"Of course not! We're selling it as if it was a full version number. $500 if you own 7.0, $500 if you own 6.0, $500 if you own 5.0, and $550 if you never owned the software before."
"Okay...I waited for the 7.5 and installed it. My bug is still there."
"What bug?"
"The bug I filed back in 6.0!"
"We don't keep records of old bugs. We've made a whole bunch of changes to parts of the software that probably have nothing to do with that bug, but who knows? So we're starting from scratch with bugs filed on the 7.0"
"Which will be addressed in...?
"8.0, of course! What would you expect? So can we expect a check from you? Pre-order is only $800, or $780 if you never owned any software from us before, because we're always trying to attract new suckers."
Unfortunately, 3d software is the home dimension of alien GUI; each application is written as if by someone who had never seen a computer program before, and each is utterly different. So the learning curve is huge. And we Mac users are on the bad part of the curve these days, since the gamer force behind modern PCs has made three-button scroll mice the default there (and a grudging add-on to Macs). And you need all those buttons to navigate smoothly whilst manipulating objects in 3d space.
My own set of compromises is a two-button trackball and various levels of keyboard re-mapping (sometimes through third-party apps). I'm tempted to add one of the USB-native family of AVRs to the mix and create my own third and fourth and fifth button. Fortunately, Blender is one of the more accommodating 3d apps out there to what they consider non-standard pointing devices (aka, anything other than a Windows mouse).
Saturday, December 21, 2013
Organic Limiter
I was just listening and comparing some of the pit mixes I've done and I realized a couple of things.
First was that the "Princess" sound was terribly muddy. Of course, what is available for later listening, and what the audience heard, are different animals, but there's a blurring and a heaviness that I don't think is just an artifact of that particular set of recordings.
In a way it was a given. I had several musicians who were all over the map dynamically. Neither the violin nor the trumpet could make up their minds how far they would be from the mic, either. The only option I had to control all this wildness was excess compression, which takes the bite out of the tone, smothers the articulation, and brings all the unwanted noise up higher in the mix -- exacerbated by the loud playing, which meant there was tremendous leakage from one instrument to the microphones on another.
That introduces comb filtering and time smear, again destroying both the tone and the articulation. Once again, a loud pit ends up sounding less good in the end.
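(To put a number on the comb filtering: suppose the trumpet reaches the violin's microphone along a path one meter longer than the path to its own mic -- an invented figure, just for illustration. That's about a 2.9 ms delay, and summing the two signals puts the first cancellation notch near 170 Hz, with further notches roughly every 340 Hz on up the spectrum: right through the meat of the tone. The arithmetic is just delay = extra path / speed of sound, with notches at odd multiples of half the inverse delay.)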
But the other realization I had is why running hot gives the illusion of being a solution to mixing ills. The reason is that the ears function as an organic limiter. When you listen at a level above the threshold of comfort (say, if you mix on headphones and insist on having it loud), your ears are unable to follow the peaks. They don't pass the complete sound, and they even shut down slightly in response. But as long as there is sufficient recovery time, the soft parts will still be audible.
So the artifact of mixing with too-hot monitor or 'phones levels is that the loud parts are too loud in the resulting mix, and the soft parts get lost -- because your ears were doing the compensating for you. (It will also often have too little bass and top end, again because of those various non-linear responses of the human ear.) Typical song structure will start soft, and build, and never quite return to the original level. Mixing at hot monitor levels, this is emphasized; your ears become rapidly fatigued as you enter the louder section, and you keep adding more and more volume through the climax, ending up with a volume curve that looks like the bell of a trumpet.
And the same effect happens with an audience when you run your system hot. The quieter bits are heard because the volume of the entire mix is loud, and the louder bits don't quite read as hyper-loud because of the non-linearities of the ear. But running the human ear with the clip light showing red means it will begin shutting down; as the show progresses, the audience perceives a mix which is increasingly muffled and, basically, softer. This can be temporarily compensated by -- of course -- pushing hotter and hotter.
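For the digitally inclined, the analogy maps directly onto a peak limiter's attack and release. Here's a toy sketch in C -- the threshold and release constants are invented, and no real ear is this tidy -- just to show the shape of the behavior:

```c
/* A toy peak limiter to make the "organic limiter" analogy concrete.
   Assumes mono float samples in [-1, 1] at 44.1 kHz; the threshold
   and release constants are invented for illustration. */
#include <math.h>

#define THRESHOLD 0.5f    /* the "threshold of comfort" */
#define RELEASE   0.9999f /* per-sample recovery; ~150 ms half-life */

static float envelope = 0.0f; /* gain-reduction state: "ear fatigue" */

float limit_sample(float in)
{
    float peak = fabsf(in);

    /* Attack instantly: the ear "shuts down" on a new peak. */
    if (peak > envelope)
        envelope = peak;

    /* Recover slowly: the soft parts stay audible only if there is
       sufficient recovery time between the peaks. */
    envelope *= RELEASE;

    /* While fatigued (envelope above threshold), squash everything. */
    if (envelope > THRESHOLD)
        return in * (THRESHOLD / envelope);
    return in; /* quiet passages pass through untouched */
}
```

Run loud material through it without enough space between the peaks and the envelope never recovers -- exactly the "increasingly muffled and, basically, softer" show described above.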
And with an idiot mixer, the only thing anyone notices is that late in the act they start having "trouble with the microphones." Popping, clothing sounds, feedback. And they wonder if -- since after all they've been performing for all of an hour -- they need to put in fresh batteries.
Once again, science -- physical acoustics, physiology and psycho-acoustics -- reveals what is actually happening, as opposed to the naive perception of the untrained. And once again you as a sound engineer are up against that naive perception. The "I don't care what science says, the second act didn't sound as loud to me. Fix it!" attitude of your directors, music directors, band members, etc.
How to get the hard sell across, so this effect of organic limiting becomes one of the available tools, instead of the apparent (and in reality deadly) panacea?
First was that the "Princess" sound was terribly muddy. Of course, what is available for later listening, and what the audience heard, are different animals, but there's a blurring and a heaviness that I don't think is just an artifact of that particular set of recordings.
In a way it was a given. I had several musicians who were all over the map dynamically. Neither the violin nor the trumpet could make up their minds how far they would be from the mic, either. The only option I had to control all this wildness was excess compression, which takes the bite out of the tone and smothers the articulation and brings up all the unwanted noise higher in the mix -- exacerbated by the loud playing, meaning there was tremendous leakage from one instrument to the microphones on another.
That introduces comb filtering and time smear, again destroying both the tone and the articulation. Once again, a loud pit ends up sounding less good in the end.
But the other realization I had is why running hot gives the illusion of being a solution to mixing ills. The reason is that the ears function as an organic limiter. When you listen at a level above the threshold of comfort (say, if you mix on headphones and insist on having it loud), your ears are unable to follow the peaks. They don't pass the complete sound, and they even shut down slightly in response. But as long as there is sufficient recovery time, the soft parts will still be audible.
So the artifact of mixing with too-hot monitor or 'phones levels is the loud parts are too loud in the resulting mix, and the soft parts get lost. Because your ears have compensated them in. (It will also often have too little bass and top end, again because of those various non-linear responses of the human ear). Typical song structure will start soft, and build, and never quite return to the original level. Mixing at hot monitor levels, this is emphasized; your ears become rapidly fatigued as you enter the louder section, and you keep adding more and more volume through the climax, ending up with a volume curve that looks like the bell of a trumpet.
And the same effect happens with an audience when you run your system hot. The quieter bits are heard because the volume of the entire mix is loud, and the louder bits don't quite read as hyper-loud because of the non-linearities of the ear. But running the human ear with the clip light showing red means it will begin shutting down; as the show progresses, the audience perceives a mix which is increasingly muffled and, basically, softer. This can be temporarily compensated by -- of course -- pushing hotter and hotter.
And with an idiot mixer, the only thing anyone notices is that late in the act they start having "trouble with the microphones." Popping, clothing sounds, feedback. And they wonder if -- since after all they've been performing for all of an hour -- they need to put in fresh batteries.
Once again, science -- physical acoustics, physiology and psycho-acoustics -- reveals what is actually happening, as opposed to the naive perception of the untrained. And once again you as a sound engineer are up against that naive perception. The "I don't care what science says, the second act didn't sound as loud to me. Fix it!" attitude of your directors, music directors, band members, etc.
How to get the hard sell across, so this effect of organic limiting becomes one of the available tools, instead of the apparent (and in reality deadly), panacea?
Wednesday, December 18, 2013
Rather-Dark Mesa
Two weeks now and I still feel healthier than I have in months. I am afraid to change anything. Was it the different breakfast? The different shampoo? Half-Life 2?
So I've started playing the "Black Mesa" mod; a port of the original Half-Life story to the newer Source engine and graphics from Half-Life 2. I've also got the "Miranda" mod waiting in the wings.
And it was a nearly black mesa until I went into the Wine config options and turned on artificial gamma compensation. Spent more than a few hours staring at the computer trying to figure out what was killing me -- not helped by the fact that there is a well-known texture glitch involving the flashlight (or any other in-game rendered bright light).
The toughest part was getting the Windows version of the Steam client to run correctly on my Mac. Even if the game had been a wash, that would have been worth it for the understanding I gained in how to use Wine to run PC software. After that, the game has mostly run smoothly with only a few crashes. Frame-rate is great and the graphics are gorgeous. The Vorts look particularly cool.
I'd be happier fighting Vortigaunts and HECU full-time, though. Not fond of creepy-crawlies like the Barnacles and Headcrabs. Not fond of the kind of level design favored by Valve, either. It is very dogmatic, room-at-a-time stuff: trying to control the situation to present a specific number of enemies in a specific juxtaposition, which they achieve through a combination of triggers, spawns, and of course ludicrously confined paths (the "I have weapons that can turn a tank into fragments but I can't get through a locked wooden door" problem).
My favorite levels so far have been largely within Episode Two, where the playgrounds at least give the illusion of being open to more variety of approach and tactics.
The other downside (in my opinion) to trigger-ridden level design is it can turn play into memorization; "Touch the medkit to trigger the enemy spawn, turn and run down the hall to the left, wait two seconds and throw a grenade which will get the three packed troops just entering the doorway..."
Just crossed the dam, and so far "Surface Tension" has been my favorite chapter. All the way from the big fight in the lobby. Wide-open spaces, lots of tactical options.
For the technically inclined (or the retro-gamer), I'm running the latest patch from Steam, through the Steam client. That is inside a winewrapper built around the 1.7.8 engine, set to Windows XP, with the -no-dwrite EXE flag. The Mac Driver worked as smoothly as X11, but there were sometimes issues with black banding and oddly positioned windows, so I'm back to X11. There were some crashes in "Questionable Ethics"; I dialed down my resolution slightly and that appeared to help. I was also running in "Test Mode" most of that time, which is itself rather questionable!
Tuesday, December 17, 2013
Two Mistakes
As much as I like talking about those improvised, minimal, experimental pit orchestras, perhaps it is time to share what the real thing sounds like in full flight.
Below the fold, for those of us with slower connections:
Saturday, December 14, 2013
A Complex Cue Sequence
The "Boiler" scene from "A Little Princess" --
This was another case of spotting-by-director. Every line that referred to the boiler apparently needed to have a distinct sound attached to it.
Thus the final sequence has multiple "beats" (using acting terminology): from the brief interval where it appears to be working correctly, to the multiple stages of build until the crisis is finally averted.
Here's how they sounded played together (below the fold for those of us with slower connections):
Hanging with Alyx
It's now been ten days since I woke up without a hint of the mysterious fatigue that has been dragging me down for over a year. I can't help hoping this will last, but I have no idea why I feel so good.
I've changed my diet slightly. Less sleep, colder weather. A nightly game of Half-Life 2. Well, I started catching up on sleep and that didn't seem to hurt. And I just completed Episode 2 last night...so that will be changing, too (unless I can get Black Mesa to run on Wine).
On the game review side: I'm not generally fond of "twitch" games, but this one is quite fun. Three elements stand out: the Gravity Gun, which allows for more variety in your tactics; the spectacularly rendered settings; and the "hero" NPC, Alyx. On the downside, the play is too linear for my tastes much of the time, and I really don't care for crawling through narrow, restricted holes in the ground filled with creepy-crawlies. For a combat game, I really prefer the open spaces and open strategic options of something like Overgrowth.
Episode Two is the best of the bunch, with fewer of the claustrophobic settings and a few combat scenarios that really allow you to maneuver. The final boss fight is a ridiculous sprawling battle in which you, with your stripped-down Dodge Charger, Gravity Gun, and sticky bombs, try to take out Striders converging on the rocket base from every direction. I suspect the Hunters may have been nerfed a little for that fight, as they seem to go down a little easier. Regardless, the only way I found through that fight was to drive like a maniac under the very legs of the Striders, ramming the Hunters with the car, then, once they were out of the way, hopping out to throw a sticky bomb (sorry, "Magnusson Device") at the Strider.
In any case, some of my response is the same as it was to "Deepstar Six," one of some half-dozen near-identical underwater monster movies that came out in the same decade. That is to say: the early parts of the film are all about the blue-collar undersea workers, the hazards of the sea, the geological challenges and scientific questions...and I'm thinking, I want to see that movie. You can keep the monsters; I'd just as soon spend an hour hanging with Alyx and exploring that great scenery.
Design Constraints
I've mentioned before my fascination with the way the final product heard by the audience is shaped as much by external constraints as by design choices. As an example, the majority of the spot effects in my last design were demanded by the Director. Which sometimes meant coming up with something I didn't like, because that was the only way to fill the hole. Exacerbated by the extremely tight tech, meaning I often had to throw something out there without knowing if it would fit the final picture.
And of course there are numerous constraints on how you reinforce an orchestra. Stage musicals are not studio sessions. Every change of instrument, technique, style calls for a different set of microphones and other choices, but you don't have that option; you need one compromise that more-or-less works for the majority of the show.
Even more so, the demands for monitor signal from both musicians and actors, and the constant problem of backline leakage, mean your control over what is heard by the audience is less than complete. What goes through your mixing board is rarely a complete and nuanced reproduction of the band; more often it is fillers and band-aids: a weird, distorted picture that, when combined with what is already in the air, might come out to something sounding vaguely like an orchestra.
With that said, some musical selections below the fold:
Saturday, December 7, 2013
Half-Life
So it suddenly struck me that my apartment was like a setting from a survival horror game. This week, I'm cleaning.
There's more to that story, of course.
I've got my good days and my bad days. These past few months, it feels like most of them have been bad days. The current show is tough, and I'm finishing each performance exhausted. Can't seem to spare the energy for much life maintenance, much less taking on other paying work to try to get ahead of the flood of bills. This last weekend, I got sick on top of it. Got so sick I couldn't sleep, and stumbled into the last performance of the weekend on about three hours of sleep.
There was no evening show, but I was too wired to sleep and too tired for anything else. And that's when the meandering thought struck that for once in my life I own a computer manufactured within the last decade, meaning I could actually run one of the games I'd been hearing so much about at the Replica Props Forum. A game like, say, Portal.
Well, yes, I could. And ended up with a marathon gaming session between Portal and Half-Life 2 (package deal from Valve). I gamed until dawn.
You could pretty much call this point a nadir: staggering into the kitchen, neck sore and fingers twitching from too much game and too little sleep and too little to eat, and realizing I was looking at the same depressing textures and set dressing: the trash and rust and dirt of those all-too-familiar settings of far too many games (Half-Life 2 included).
When I crawled into bed, it was with a depressing certainty I'd spend most of the next week sore and tired and basically messed up from too much gaming on top of too little sleep and food. And blame myself. And be still looking at an ugly living space.
Except that's not what happened.
I slept a mere five hours, and woke up all vinegar. Practically jumped out of bed and started in on the once-in-a-decade, extra-deep, spring-cleaning kind of cleaning. Didn't even stop for breakfast first!
And after spending ten hours cleaning house, I still felt great. And played a little more Half-Life 2 before bed, and woke up still feeling competent. Feeling not just strong enough to do housework, but actually thinking clearly for once. Able to see better ways to arrange things. Able to make those hard decisions about what to keep and what to throw away. And I threw away a lot.
The last time I felt this good -- healthy and productive -- was during a part of Tech (for the same show) when I was getting by on almost no sleep or food. It makes me wonder which is the proper causal relationship. I once recovered from a nasty stomach upset by going to a McDonald's (the kind of food I rarely eat otherwise).
Maybe I've got some principle of homeopathy going on here. Maybe the way to fix a queasy stomach is by eating greasy, unhealthy food, and the way to get lots of energy is to do things (like marathon gaming sessions) that should by rights make me even more exhausted!
Friday, November 29, 2013
Ogre Combat
Battlemat, Terran Date 11292013
I come to consciousness and immediately perform a full system and boot-up check. I am eager to begin my service as a member of the Brigade, unit 73583823 CLD, and hope that I will continue to uphold the unimpeachable record of that great unit. The boot check takes an entire 23.0567 seconds due to the need to integrate an operational consciousness mesh for the first time. But by 13.035 seconds I already know something is wrong. I complete the test and move immediately to a level-two hardware diagnostic.
It is as I suspected. Where I should have found the smooth flanks of gleaming Inconel, there is instead a primitive polygon mesh. Instead of hubs I have polygons, and the 20mm smoothbore exists only as an abstraction of numbers. I am, apparently, still virtual. Not yet embodied.
A query through the communications net uncovers electronic communications from the fabricator. Their measurements revealed that the shock absorbers under my hull thinned in one location to 0.65mm; 0.15mm under the recommended minimum. According to records, unit 735662187 RNI entered service having been produced to that plan. Another search reveals that "Rani's" commander has no complaints and she has, of course, continued to serve in accordance with the high standards of our tradition, but the fabricator's caution is well meant. I concur that there is a .175% chance of failure during final assembly, although my figures disagree with the fabricator's pessimistic estimate of under 67% printability.
I reduce my alert status to something resembling rest, and wait with interest for developments. In 105,600.05 seconds a new design is completed and submitted, one that thickens and extends the area around the difficult joint, at perhaps the expense of the previously elegant line. Another 407,400.4405 seconds pass before the fabricator responds with another electronic missive.
The news is not good. The fabricator has determined that five scale inches is insufficient for the newer Inconel alloy called for in the latest specification. Muffler shroud, headlight cages, and even sprues are all identified by the fabricator's software as potential printing problems.
It takes 200,101.1 seconds for a third design to be completed. This one is a complete revamp of all critical dimensions. I read the design rules myself with interest; this takes .0014 seconds, but locating the design rules within the oddly organized electronic archives of the fabricator consumes nearly 13.8 seconds. No matter. The next reply from the fabricator does not arrive for another 500,147.46 seconds.
I have spent the time reading military histories, both real and fictional. I hunger now to begin my service to the Brigade as Unit 73583823 CLD, named "Clyde." (My name will be chosen by my Commander, but I am sure they will make the logical choice. "Claude" is a poor name for a unit of the Brigade, and "Clannad" would just be silly.)
The electronic missive at last arrives. The fabricator's software has now chosen to flag every rivet, every plate, every detail as if it was a section of hull. The dimensions required are absurd; I would be a featureless cube by the time all of these "errors" were ameliorated. None of these requirements existed before, or were mentioned in any previous missive.
I am sure now. For some reason, the fabricator has determined to obstruct my fabrication by any means possible. I look to a quote from one of the items of literature I so recently absorbed. "Once is happenstance. Twice is coincidence. But three times is enemy action."
Tuesday, November 26, 2013
No, Duck Light.
I've been having a heck of a time parameterizing a potential kit here.
It starts with the problem of a kerosene lantern. This is a prop that shows up on stage in various productions. Since we don't of course want to actually set fire to lamp oil, the usual trick is flashlight bulbs and batteries. For a brighter "flame," a 12v halogen (automotive use) and a battery pack of high enough voltage to run it (such as 8xAA batteries).
The more robust solution is LEDs. At the simplest, you could, indeed, use one of the automotive-use amber LED arrays and hook that up to your 8-pack of batteries. It would last longer with a higher average output and more consistent color temperature.
Or you get a little fancier. Use a 3W RGB LED, like the Cree I've been having fun with of late. With PWM control, you now have a portable light that you can set to a selected color and intensity. And you can even flicker it.
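For the curious, one channel of that flicker might look something like this on an AVR -- a minimal sketch assuming an ATtiny85 at 8 MHz with the LED's driver transistor on PB0; the pin choice and flicker constants are my own inventions, not a worked-out design:

```c
/* Candle-flicker on one LED channel. Assumes an ATtiny85 at 8 MHz
   with the LED driver on PB0 (OC0A); constants invented for
   illustration. Build with avr-gcc. */
#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>

static uint16_t lfsr = 0xACE1; /* cheap pseudo-random noise source */

static uint8_t rnd(void)
{
    /* 16-bit Galois LFSR; good enough for a flame, no rand() needed */
    lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);
    return (uint8_t)lfsr;
}

int main(void)
{
    DDRB  |= _BV(PB0);                              /* PWM pin out     */
    TCCR0A = _BV(COM0A1) | _BV(WGM01) | _BV(WGM00); /* fast PWM, OC0A  */
    TCCR0B = _BV(CS01);                             /* clk/8: ~3.9 kHz */

    for (;;) {
        /* Hover near 3/4 brightness, dipping at random like a flame */
        OCR0A = 160 + (rnd() % 64);

        /* Hold each level 40-130 ms for an irregular flicker rate */
        uint8_t ticks = 4 + (rnd() % 10);
        while (ticks--)
            _delay_ms(10);
    }
}
```

The LFSR is just a cheap noise source; anything non-periodic enough to fool the eye will do.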
Now, sure, you could just hard-wire a Cree, plus PWM if wanted, onto a piece of perf. I have one around I was building for an effect that got scratched. But it is a neater circuit if you have the board printed.
And even neater than that if you have a reflow oven sitting around.
Doing it this way makes for a more compact and more reliable circuit. But the downside is that you aren't soldering something to fit just a lantern. Economies of scale become economical when there is, yes, scale. The development time of a circuit board pays off better if the same board can be used for things other than lanterns.
And this is the first problem I'm having. What are these "other things?"
The light-up coat I made for The Wiz is very much a unique application. I've done dozens of shows with a lantern in them, but only two with light-up costumes. Really, I can't think of any other common theatrical situation in which I would be reaching for a plug-and-play portable light source.
Perhaps a flexible point source for general lighting; the kind of situation where you have a doorway or other inconvenient shadow and you just want a little face light. I'm willing to believe that a little firelight in such things as campfires and stoves would also call for a small portable RGB source if such were available. And I can't help thinking that there must be magic wands and crystal balls that could use a light.
Because there are two other givens with the PWM circuitry that gives us RGB control and potential flicker. The first is programmability. The second is controllability.
It goes without saying that the portable RGB source can be easily switched on and off. But you could also dim up and down, or change color on command.
And if you add an empty socket, or the right kind of header, then it also becomes remote-controllable. And you are no longer dependent on an actor getting over to the prop to turn it on and off.
Here's where creeping featuritis really comes into play, though.
Assume the "board" is an ATtiny-based PWM/program generator with a couple of controller inputs (perhaps capacitance sensing to save on external hardware). Assume it switches an arbitrary load through a trio of Power Darlingtons (or similar) and solder tabs or screw terminals. This detaches the LED/load itself so the circuit can be hidden in the base of a lamp or whatever. Constant-current drivers would be better for LEDs but would have to be matched; this allows us to re-purpose for relays or other tasks.
The board can easily power-regulate from a 3v to 12v source, so a 3-pack or 4-pack of rechargeable batteries is good enough. But a lipo is sexier; high density rechargeable battery built in, with charging circuit and charge indicators, so all you have to do is plug it into a USB charger (or similar) between shows.
(The main downside to the lipo is if you have back-to-back shows with heavy use of the circuit. Using swappable batteries means you can put the device back in service without having to wait on a charge.)
The bigger problem is programmability. For me, I'm fine writing new code as needed and feeding it through an ISP port. But it might be easier for the general user -- heck, it would be easier for me too -- if you could adjust the behavior on the fly via nothing more than a USB cable. Better yet, through a USB cable -- or remotely through a radio link -- via a GUI that dealt with most of the details of selecting colors and setting up switches and so forth at a higher level of abstraction than typing fresh code.
I think at some point, you have to accept that one "board" shouldn't try to be too generalist. Perhaps it makes most sense to design it as if it will always be PWM'ing three channels of LED, with a preset of several hard-coded behaviors selected by resistor ladder and/or transmitted commands. And re-purpose that hardware with scratch-written code as unique applications arise.
And to ignore such fun ideas as lipo charging circuits, boost converters, constant-current drivers, and so forth. And restrict the immediate flexibility to setting jumpers on the PCB, and the load and source that get attached to the screw terminals.
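The resistor-ladder preset select mentioned above might come out something like this -- again a sketch assuming an ATtiny85, with the ADC channel and the voltage bands invented:

```c
/* Preset selection off a resistor ladder: one ADC pin, a ladder of
   resistors under a row of buttons, and a table of hard-coded
   behaviors. Sketch assumes an ATtiny85 at 8 MHz; channel number
   and voltage bands are invented. */
#include <avr/io.h>

enum preset { PRESET_OFF, PRESET_STEADY, PRESET_FLICKER, PRESET_PULSE };

static uint8_t read_adc(uint8_t channel)
{
    ADMUX  = _BV(ADLAR) | (channel & 0x0F); /* Vcc ref, 8-bit result */
    ADCSRA = _BV(ADEN) | _BV(ADSC)          /* enable, start         */
           | _BV(ADPS2) | _BV(ADPS1);       /* clk/64 = 125 kHz      */
    while (ADCSRA & _BV(ADSC))
        ;                                   /* wait for conversion   */
    return ADCH;
}

static enum preset current_preset(void)
{
    uint8_t v = read_adc(1);  /* ladder on ADC1 (pin PB2 on a 't85) */

    /* Each rung of the ladder lands the voltage in a distinct band */
    if (v < 64)  return PRESET_OFF;     /* no button: pulled to GND */
    if (v < 128) return PRESET_STEADY;
    if (v < 192) return PRESET_FLICKER;
    return PRESET_PULSE;
}
```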
Maybe that is a build-able circuit. Maybe that's enough to boot up Eagle and see what it might look like...
Feh Memberships
As I feared, my TechShop membership is doing me little good. There are almost no classes scheduled during the holidays, and even before that they tended to be scheduled on evenings and weekends -- which is when I work.
So I could go over there any time. But I'm not allowed to touch any of the tools. (Well, except for the bandsaw...)
Several difficult Tech Weeks have also brought my gym attendance down to where it is about even between continuing my membership and simply paying at the door. The main advantage of having the membership is that I can do a short drop-in visit without feeling like I am wasting money.
Like today. Flashed a V3 on the mushroom and called it a day. It was on the dihedral, and even with serious hooking I had to claw for holds. Almost bailed twice; caught a hold on two fingers, lost the foot and barn doored on those fingers and was sure I was going to peel. Somehow got the other fingers in there, hauled up, took some high feet that felt very exposed (the whole wave and that side of the mushroom always feel a little high-ball anyhow), lunged for the top and was sure I wasn't going to be able to control the final hold, either.
Okay, I'd come straight from brunch, and I played for a while before that figuring out the solution to a new V4, plus flailed/flashed another V3, but still...it was a short trip, and I'm glad I didn't pay at the desk for it.
Now if only anything was open over the holidays!
Ding, Dong, the Witch is Dead
One of my favorite moments of a show is when the Big Bad gets dragged off stage.
Not, however, because of "justice being done." In fact, my sympathies are usually with the Miss Hannigans and the Miss Minchins. (Note in passing that horrid Disney tradition of casting an older woman, usually unmarried, as the chief villain.)
The reason it is my favorite moment is that it marks the point at which I start turning off microphones that will never have to be turned on again. Most shows build to a peak, drawing together all the various plotlines, which means every character with a mic will have an important speaking line in the climactic scene.
Because the trick to a good mix isn't remembering which mics to turn on. It is knowing which mics you can turn off.
The fewer open mics, the less noise, the more clarity, the more room before feedback, and the less chance for accidents. So it is a wonderful feeling to be able to pull down a fader and know that you can finish mixing that evening's show without ever needing that particular fader up again. The scenes following the climax are a series of "good byes" to your open channels of wireless mic, as one character after another is removed from having anything further to say (or sing).
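(To put a number on "more room before feedback": potential acoustic gain drops by roughly 10 log10 of the number of open mics -- the old NOM rule of thumb -- so every doubling of open microphones costs about 3 dB of headroom, and pulling a twelve-mic scene down to six open channels buys that 3 dB back.)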
This is also true of ensembles. In a typical ensemble of twelve singers, two are in a quick-change and won't be singing, four are out of breath and aren't singing well, and two sing badly all the time anyhow.
The trick to getting a good ensemble sound is not in opening up every microphone that might have some lyrics coming into it. The trick is, instead, to find those few microphones which have a strong melodic or harmonic line in them. And you let the wash of natural sound (plus mic leakage) make those six open mics sound like they are carrying an ensemble of twenty.
It is a delicate balancing act between getting a "full" sound and leaving out those voices that are panting, off pitch, touching their microphone, or whatever. And between getting a clean sound, and having open mics for all those random lines of dialog that will inevitably be given to a character who never speaks or sings at any other point in the entire show.
And you risk, of course, making the call to cut the mic of an actor who is fumbling with their hat a split second before they blurt out the single line that is next in the post-Sondheim song in progress. Or being distracted trying to find that one actress who is completely off pitch and blowing the entrance of one of the stars.
And you'll never be able to explain why you missed the line. Because you can sort of push through a grudging understanding that the more open mics, the more chance of feedback. But you cannot make directors and producers understand the mindset that looks not to which mics you can have up, but instead to which mics you can safely turn off.
Friday, November 22, 2013
Backline and IEMs
We're looking at IEMs. As an interim experiment, we've got the drummer on headphones now. He is very happy.
Part of the migration to IEMs (In-Ear Monitors) is providing each musician with their own volume control. In fact, with their own little mixer so they can adjust to taste without having to get word to the FOH mixer. (There is no monitor mixer in smaller houses).
What I've done for several previous shows is: run 2-4 channels of monitor back to the pit, and set up a micro-mixer on a rehearsal cube. That runs to a powered monitor and/or headphones. The easiest instrument to add is that of a keyboard player; you just y-cord it right at the DI box.
In this case, the drummer is getting keyboards (over a y-cord), the same vocal bus as the conductor (contains every open wireless microphone), and for "more me," I set up a pair of overheads and hard-panned them left and right. I tried the rig myself, and I'm no drummer, but I really felt like I had ears in the space instead of being inside headphones. But vocals, and the conductor's keyboard, were still coming through nice and clear.
Close-miking wasn't working anyhow. There's too much variety in what he does, and it was leaving ride and tom out of the picture anyway (not enough input channels). So it is now a pair of condensers at about two feet overhead; one over the hat and one over the ride and both equidistant and pointing at the snare. It isn't quite the tight sound I want for the more "pop" parts of the musical, but it does a lot better at capturing the variety of things he gets into during the show.
When we get into IEMs, we are probably going to be able to send a pre-processing clone of every pit input back to the IEM master, and then using something like the new Behringer jobbies, make custom mixes for each musician at their station.
And one of the channels on that system will be ambient/talkback, so the musicians can hear each other and the conductor can say, "She's off again; quick, back to bar 44 and vamp on it" or, "No, no, concert Bb."
And maybe even I or the FOH du jour can be on this loop, so during tech we can actually communicate.
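All those little mixers amount to is a gain matrix -- one row of knobs per musician, applied to clones of the same inputs. A minimal Python sketch of the idea, with invented channel names and gain values:

    # Each station applies its own gains to clones of the same pit inputs.
    # "More me" is just one musician turning up their own row's entry.
    stations = {
        "drummer":   {"keys": 1.0, "vocals": 0.9, "overheads": 0.6, "talkback": 1.0},
        "conductor": {"keys": 0.8, "vocals": 1.0, "overheads": 0.3, "talkback": 1.0},
    }

    def station_mix(gains, frame):
        """One output sample: each input scaled by this station's own gain."""
        return sum(gains[ch] * level for ch, level in frame.items())

    # One frame of (made-up) input levels from the cloned pit sends.
    frame = {"keys": 0.2, "vocals": 0.3, "overheads": 0.4, "talkback": 0.0}
    for who, gains in stations.items():
        print(who, round(station_mix(gains, frame), 3))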
The two goals are, of course, for the musicians to be able to hear what they need, and to reduce wherever possible the backline contamination. For most musicals I've done (in a multitude of smaller theaters), keyboard monitor leakage has been in the top three offenders (vying for top spot, usually, with bass amp leakage and drums). And by the time you reach the five worst noise-makers in the pit, you can include the vocal monitor from stage to conductor; in many small shows, I've fed back on the conductor's monitor well before I've fed back on the mains!
The problem is, acoustic musicians on headphones are going to be no more conscious of how much sound they are pumping into the air. Putting headphones on the band may keep them from blasting the audience with their monitors, but they are still going to blast the audience with brass and drums. And it will still be a chore to try to get a balanced sound out to the audience.
At least it beats what happens with monitor speakers. What has happened there --more than once!-- is that the conductor turns the vocal feed all the way up until the pit monitors are feeding back, then starts whining he can't hear his own keyboard anymore, and runs out and buys a new and bigger keyboard amp and points it at his ankles turned up to 11... at which point you can't hear the rest of the band, or the singers, and I can't even bring up the vocals because they are feeding back via the pit.
At least with in-ears, the only people who will go deaf are the musicians. And the good models even have limiters to (partially) protect them from themselves.
"Simplicity," riiiiight.
Finished my first pair of pants. I took them in by eye and that shifted the waist; it feels comfortable and the line is good, but it doesn't lie straight on the hanger. Same comment for the legs; they don't quite press flat -- more so than I am used to from even jeans with a generous ease in them. I also left off a bunch of the decorative stitching (I want to wait on stuff like that until my new presser foot and guide show up, anyhow). But since they are black, I can probably get away with wearing them to work.
Picked up three yards of a very nice looking heathered cotton-poly at just five bucks a yard for my next endeavor. It is a speckled grey that should be dark enough for work. I think I might need to look at a McCall's pattern next, though. I don't like either of the Simplicity trousers I have.
Also cleaned and oiled the Bernina today, and it is purring. Berninas are described by many as a noisier machine, but I like it. It sounds like Industry.
Isn't it the way, though? We humans are hard-wired to want to learn things. If we can't learn where the water hole is or a better way to hunt, we learn the names of all the actors who have played The Doctor, in chronological order (your discretion on whether to include Rowan Atkinson and/or Peter Cushing!)
Trouble is, although "life" is not necessarily more complex today as compared to any previous century, there are a great many more specialities you can indulge in. And fields keep evolving. I know how to build mods for games that no-one plays anymore, and I have hard-won skills in software that I'll never run again. And skills with hardware and work-arounds that are mostly replaced by easier solutions.
In theater alone, I know how to construct an old-style canvas flat with glue and tack hammer, how to run an old carbon-arc follow-spot, and even how to lash flats and use stage braces. Do I really expect to need those skills again?
And, yeah, it is kinda fun to walk down the tool aisles of the local OSH going, "I know what that is, and that, and that, used to own one of those, still own one of those..."
Oh yeah. In true good-money-after-bad tradition, once you've learned a skill, you feel driven to keep it up. Heck, you feel this way even if it turns out you never were any good at it in the first place. You feel this compulsion anyhow to develop a completely useless and extraneous skill, because it is part of your self-image that you had that skill in the first place.
Which is why this week I've been trying to schedule classes in machining skills I've never had, reading up to improve and extend mixing skills I have, and running a ton of fabric through the Bernina developing sewing skills I thought I had (and, it turns out, largely did not). And bemoaning the lack of time to program, play ukulele, draw, write, and do any of the other hundreds of random skills I've picked up over the years.
Sunday, November 17, 2013
Taping Up Body Mics
Or, "Warts and Angler-Fish."
A lot of people have been asking, so I dedicate this post to it. Would do better with pictures, I know.
Cheek Mic (what some of my younger cast call the "you've got a parasitic infection" look.) If there is nothing unusual to get around -- glasses, bushy sideburns, a hat, a mask -- this is where and how it goes:
Feel for the cheekbone -- the zygomatic. It starts at the hairline at roughly the lower margin of the eyes, and for the first centimeter or two makes a line that points towards the philtrum (the space between upper lip and nose). Jaw muscles originate just below this bony prominence; press a finger against your own cheek and make a chewing motion and you will feel how on the cheek, you have movement, but on top of the cheekbone, the flesh remains almost still.
Starting with the microphone under the shirt or blouse and coming up through the neck hole in back, pull the mic over the top of the ear and stretch it along the zygomatic -- just on top or slightly below in the notch. It should be along the same line as the bone, making a fairly straight line as it points to the margin of the upper lip. Avoid the temptation to angle it lower.
Pull the mic out until there is barely one width of tape between the head of the mic and the start of the hairline (aka the sideburns). Tape there. For younger cast I buy 1/2" tape or tear the 1" in half. For women and children, you can usually brush aside much of the stray hair in front of the ear to make sure you are not putting tape on top of hair.
So that's four things to watch out for: don't pull the mic out too far, don't tip the mic down or otherwise allow it to get on the soft part of the cheek, don't get tape on the head of the mic, and don't tape on top of hair (it is uncomfortable for the actor and doesn't stay on, anyhow).
On most actors, dress the mic behind the ear and tape once behind the ear; when the space is large and clear, actually behind the ear a bit above the lobe -- I've found a narrow strip of tape done at an angle works well -- and when the space is small or there is a lot of hair, just below the ear on the broad mass of the sternomastoid itself.
For actresses with lush hair (particularly girls) you can save them tape behind the ear and use a bobby pin or hair clip right where the hair tucks over the back of the ear.
The last piece of tape in the typical three-piece arrangement is on the back of the neck. I used to recommend low, around the 7th cervical vertebra, but I've changed that now to a 3/4 position, along the mass of the trapezius and just above the "V" where shoulder line meets neck line.
Okay, I've given a bunch of exceptions here already, but really, for twenty actors you can go through 18 of them with the basic three pieces of tape, slap slap slap. I've done a cast of twenty myself in under fifteen minutes.
Hair Mic. What one of my younger cast called the "angler fish" look. Also, when done wrong, it can look like a caste mark. Seriously, there's not enough sonic difference between just below the hairline and just inside the hairline to make it worth staring at a microphone all night.
The mic goes on the forehead. If the actor has hair with an off-center part, this may give you a better place to lead it, otherwise just go center. Tape just behind the head of the mic, and as close to hairline as you can get...if you have to tape. For most actors, it is better to pull the mic up until it is just barely peeking out, and secure it with bobby pins or hair clips.
Work the mic up along the top of the head and back, pinning as you go. The slowest to dress are actors in natural hair. With wigs, you either have a wig cap, or the actor's own hair in coils, and it is easy to pin to or weave the mic inside.
Particularly, girls with wigs or in "trouser" roles will have the bulk of their hair pinned up in a bun or french roll. You can pull the mic through that and let it dangle in back. Then all you need to do is pin the length up to where it meets the hairline in front.
When the hair can't support the mic at the fragile neck area, this last anchor will be a piece of tape instead.
Hair mics take longer, and take more experience and judgment in figuring out how best to deal with each individual actor. The trade-off is that they, of course, sound better.
Lapel Mic. Completely inappropriate for most live theater, but you may have to do it for a presenter or work a lecture or talk some time.
No tape. The mic goes into a clip, which clips to clothing. The trick is to get out of chin shadow; don't go on to a high collar. As a rule of thumb, feel for the top of the dagger-bone -- below the clavicular notch. Or the other rule of thumb...imagine the microphone is a little light, and it should touch the lips without the chin casting a shadow on them. In most cases it looks nicer on clothing to be to one side or the other, on the inside edge of the lapel on a sports coat or similar.
In the case of, say, a turtleneck sweater, make a judgment call about whether you'd prefer to be watching a puckered sweater with a mic attached in the middle of the fabric (the thicker, looser weave, and more colorful the sweater, the better this works), or listen to a poor voice from a position that is up too high.
Friday, November 15, 2013
If you ain't picking seams, you ain't learning
I guess that means I'm learning.
This week has been my first serious project on the Bernina. A pair of pants. And, as it turns out, the scale-up is almost perfect here. I would have been over my head with a frock coat, and probably bored with another pillow case. On pants, I'm learning.
Learning, among other things, that when people say Simplicity patterns tend large (and their 1948 very much runs large), they aren't kidding. Using Simplicity's own mapping of pattern size to measurements, and a fresh set of measurements I took off my own body...I ended up with a waist about 4" too large!
Seriously, the things were clown pants. And isn't it always the way, that the seam you have to unpick is the seam you made right after you switched from "machine baste" setting to something tighter?
I don't have a good feel yet for whether this is a simpler pattern or a more complicated pattern for what it is. I do know I basically had to just build it end to end; I couldn't make head nor tail out of the instructions and the many, many pieces until I was actually stitching them together. And not always then, either -- pulled apart the pockets two or three times before I figured out how they were supposed to work.
Now that I understand this pattern, there are several things I'd do differently. There's no reason to put interfacing in the fly, for instance, although the overlap could sure use some. And there are some basting and marking steps I could cut now. Biggest lesson so far, though? Measure your seam allowance. Having a clean seam allowance is just too critical to too many other stages to make it worth being sloppy cutting it.
Also discovered black is painful to work with. Finally gave up on the stupid tailor's chalk and switched to white grease pencil, which I could actually see. It is a heavy, relatively coarse-weave "tent canvas" I'm using that frays a lot and is basically a total pain. The bolt of fabric I carried to the front was a yard short, and this was my hasty second choice.
And my little travel iron actually puts out enough for fusible interfacing. I think I bought the thing back when I was in the Army. It goes back to at least 1986 -- but then, so does my coffee filter.
Since learning one new thing at a time has never been my way, I also took my first class at TechShop this week. I'm now permitted to use the cold saw...and more powerful versions of tools I have myself. Many more classes before I'll be able to mill any metal...especially if I want to CNC it.
Monday, November 11, 2013
The Problem of Backline Contamination
Sound levels are relative...to a point.
Within this simple phrase lies the reason why backline contamination is such a huge problem for live sound in smaller venues.
First, consider the setting for which amplified sound was first introduced: the big open-air concert. Or, similar in effect but looking completely different, the studio session.
Everything that reaches ears comes through the mixing board. It is pretty much that simple. The musicians play and sing, a selection of microphones (and pickups) take those elements that are deemed essential to create the desired sound, those signals are processed to taste and mixed together, and the final result is broadcast from line arrays...or is compressed for streaming or cut into a master disk or whatever.
And perhaps this gives rise to a problematic philosophy. Sound engineers and designers tend to come from this world of control. There was a point reached in studio sessions when each musician was isolated in a sound-proof booth, unable to see the other players, unable to hear anything but what the engineers sent to his or her headphones. Thankfully, most studios have backed off from that, embracing the interaction and life -- and moving to a philosophy that treats the ensemble as the primary source and spot-mics only to bring out nuance in individual instruments.
But we still have this lovely illusion that, since sound is passing through the mixer, we should be able to control what is heard by the audience electronically. And this just plain isn't so.
In a small house, theatrically-trained actors are heard easily without amplification. So are singers...the only problem comes if the accompaniment is overpowering. Which it can be. Un-amplified, brass, drums, and even piano can be so much louder than even a trained voice that the result is unbalanced.
The problems become even greater in the medium-sized house. Through the range from club-like to 2,000 seats, a significant part of what reaches the audience's ears did not come through the sound system.
Levels are relative. It is as appropriate to say "The band is too loud" as it is to say "The singers are too soft." The problem is, there exists an apparently simple solution to the latter. So the approach in the majority of spaces is to try to deal with the problem by amplifying the singers -- usually via wireless microphones.
In the right situations, all that is required is gentle reinforcement. The microphones near-invisibly add a few more dB, and the singers rise above the accompaniment in a natural way. The experience is acoustic; the sound appears to come from the singer and interacts in a natural way with their surroundings, supporting a sense of reality.
The same measures can be taken when a band is not balancing with itself. In many cases the traps will overpower some of the reeds. And often as not there are keyboards, or electric bass, which don't make significant sound without electronics.
My preference is to treat a pit acoustically; for every instrument playing in the pit to be heard in the pit. Keyboard players have monitor speakers that are turned up enough for the other players to hear them. This allows the pit to adjust to each other and act like an ensemble.
This doesn't work so well when some elements have to be amplified over others. And it confuses many people tremendously when you do something like mic a drum. Because a "drum" isn't an entity. It produces a variety of sounds that, to sound right and sound real, also have to balance with each other. In short, the drum is so loud you can't hear the drum. So I mic the drum to be able to hear it over the drum.
(Or more specifically, I mic to hear the nuances of the snare and the click of the hat -- sounds which get masked by the volume coming off the shells).
And this gives rise to the perception of a panacea, in which every single note you will get from anyone in the production will be, "So-and-so's microphone needs to be louder." Always louder. Never trimming the competing elements. Never understanding that loud is relative, and that making the chorus softer is a better way to allow the solo to be heard.
Because sound is relative, to a point. The point being there are soft edges pushing up into concrete ceilings. As you raise levels, you approach feedback threshold. Far short of actual feedback, sounds will begin to take on an edgy, brittle shimmer, like they are being played through one of those tin-can-and-string telephones.
And you can push the feedback threshold back through judicious equalization. The problem being that you begin to cut into the sound you want.
Even if you avoid feedback, the room itself has acoustic properties. First you begin to drive the air of the room into resonance. Then all the materials in the room begin to vibrate in sympathy. All of these physical effects generate harmonics of their own. As you increase the level of Sound A higher and higher in the speakers in order to make it louder than Sound B, you also produce a Sound C; the room itself. The louder you go, the louder the room is, until all of these secondary sounds are as much competition as the original problem you were trying to solve.
Even in a perfect room, with a perfect system...say, if you gave each audience member a pair of personal headphones, physics still does not allow you to arbitrarily increase volume. Physics -- and biology. The human ear is non-linear, and begins to distort at higher sound pressures. The ear accommodates quickly; what was nice and loud two minutes ago sounds normal now, and ten minutes later begins to sound wimpy and soft. The ear in fact begins to shut down after sufficient exposure to higher levels of sound. First the high end rolls off, meaning everything sounds dull, then the perceived volume drops.
No-one ever wins in volume wars.
So what does this have to do with the backline?
The problem is simple. The leakage from the pit -- loud acoustic instruments like brass and drums, and the monitor levels of keys and bass -- is heard by the audience. As a mixer, you are trapped between two absolutes; the highest practical level you can amplify any sound, and the existing sound that is in competition.
Backline leakage is a problem in almost every way. First, it is sheer volume. Weak singers may not be able to be heard over the natural, un-amplified sound coming out of the pit. Second, it is unbalanced; the backline emphasizes certain instruments at the expense of others. Third, it has a poor spectrum.
This takes a little more explanation. Sound is semi-directional. For a given radiator, the pattern approaches omnidirectional as the frequency lowers. Frequency dependence also counts in reflection; given the scattered surfaces of a typical sunken orchestra pit, the higher frequency content bounces around and gets lost, with less of it escaping the pit. The lower-frequency content treats obstructions like a river treats a small rock; it flows around, and escapes the pit rather less attenuated.
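To hang some rough numbers on that (a back-of-envelope sketch; sound flows around anything much smaller than its own wavelength):

    # Wavelength = speed of sound / frequency. The low end is bigger than
    # the pit rail and flows right over it; the highs get caught inside.
    c = 343.0  # speed of sound in air, m/s, at room temperature
    for f in (60, 250, 1000, 4000):  # Hz
        print(f"{f} Hz -> {c / f:.2f} m")
    # 60 Hz -> 5.72 m; 4000 Hz -> 0.09 m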
This should be simple to understand. It ever boggles my mind why even many musical directors don't get it. The sound of a band on stage is like a friend across from you at a table. The sound of a band in a pit is like the sound of your friend on the other side of a door. And it isn't made better by asking the friend to talk louder!
This is why, for any situation but the smallest or most open, a pit band won't sound its best without a small amount of carefully selected amplification. Not to make them LOUD. But to make them CLEAR.
Given this, the amplified sound of the band is up against...the leakage from the pit. Just like trying to power up singers over the band via wireless microphones, you are trying to power up the "good" sound (the softer instruments, the nuances of specific instruments, the higher frequencies and other subtleties of performance) above the low-frequency, time-smeared, unfocused mush that makes up most of the backline leakage.
Again, this isn't something the band can do themselves. If you hit the drum louder, the "click" of the stick gets louder, but so does the "thooooummmp" of the shell. Because hearing is non-linear and increased volume can lead to increased resolution, you will get a slightly more defined drum sound if you just increase the player's volume. But it isn't anywhere near as nice as the amplified sound that selectively takes just one element of the sonic picture and presents it to the audience without any of the filters of the local geography between the drummer and the audience's ears.
And bands, too, drive the rooms. The louder they play, the more the set walls, the other instruments, the air itself vibrates in sympathy. All these extraneous and distracting noises get louder and louder as well -- and in a non-linear fashion.
This is why backline leakage is the bane of sound techs in every medium-sized and smaller venue. In clubs, it is near-impossible to fix a band's sound via external electronics. If the guitarist insists on turning up his cab, then loud guitar will be all anyone hears -- the rest of the band might as well go home.
In the theater, in the pit, it isn't quite as dire. But the basic simplicity remains; if the band plays loud, if their monitors are loud, then the sound will suck.
Because the mixer is up against the concrete wall of sonic maximums. When the band is loud, it leaks into the very microphones that are on the singers. I've had plenty of shows in which bringing up the chorus was exactly as if you'd turned up the band 5-10 dB. There are times when the drums are so loud they are -- quite literally -- louder in the singer's microphone than the singer is. You would get the singer "louder" (relatively, that is) only by turning them down.
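The arithmetic is worth seeing once. Incoherent sources combine by power, not by adding dB figures; a quick Python sketch, with hypothetical levels:

    import math

    def db_sum(*levels_db):
        """Combine incoherent sound levels (in dB) by summing their powers."""
        return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

    # Say the singer reaches their own mic at 80 dB while the drums leak
    # into that same mic at 83 dB. The channel is already mostly drums,
    # and the fader raises both together.
    print(round(db_sum(80, 83), 1))  # 84.8 -- total level, most of it drums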
To get the singers to sound decent you need to support them over the total sound of the band. To get the band to sound decent, you need to support them over the distracting leakage from the pit. And you have an absolute limit as to how hot you can run.
Really, it would be better if the band could be more controlled. But that is something that does not seem to happen.
The Loneliest Seat in the House
As a mixer for a musical you are a bit of an alien to the theater. All of the other jobs -- from dressers to follow-spot operators -- are well established in the history of theater, but amplification and live sound mixing are still new to the trade. We are more from the world of live music, from concerts and clubs, than we are from the world of greasepaint and limelight.
And you are physically isolated. You share that with the lighting tech, and often the Stage Manager -- but they have headsets linking them electronically to the rest of the production. In the long spaces between cues there is chatter on headset -- news and gossip from backstage, and the social grease of people working long hours together.
They also have a nice little booth to hide in; you are usually alone on the floor in full view of the audience.
Of course it goes without saying that the Stage Manager has the ultimate loneliness; the loneliness of Command (insert your favorite Captain Kirk scene here). Our responsibility is not as heavy, but it is no small weight in itself.
We are the final link between actors/musicians and the ears of the audience. Sometimes this makes you the mastering engineer; the person responsible for taking all that effort and heart that so many people put into the music and giving it that final polish to make it the best it can be. Other times you are like the last driver with a clear chance to avoid the accident.
And you switch between these modes with blinding speed. At one moment, you will be gently riding a mic to put that last little bit into the crescendo of an emotional number. And then there is a screech of sound and in an instant you are in damage control mode, forced to make a choice between multiple unpalatable alternatives...without any time for deliberation.
On a very good night, someone might give you an atta-boy for responding quickly to the plug that popped out of a DI in the pit and subjected the audience to the growling buzz of unfiltered 60-cycle. On a very, very good night, you might get a compliment along the lines of, "We didn't hear any noise or popping this time." No matter how many problems you solve before the curtain opens, no matter how many prophylactic measures you take (like subjecting a poor actor to multiple mic changes just because you thought you heard something in their mic), no matter how quickly and effectively you fixed, charted around, or otherwise ameliorated a problem, the only feedback you will ever get is on the ones that slip through.
Thursday, November 7, 2013
Here We Go a-Morrowing
So the V150 model is finished and in my Shapeways store.
Here's how it looks with a coat of paint and a few bits of additional dressing:
More notes on scale; these are old Morrow Project miniatures from the 90's, thus the Ral Partha Dwarf proportions. Technically 28mm, and as you can see, they seem roughly proportional with a vehicle in 1/56 scale. At least, it is as close as I could get to 1/56 by working with the quoted length of the hull, from the blueprints I had available.
To recap the scale process: I scanned blueprint images and cropped and scaled them to be square and dimensional to each other. I took the pixel length of the largest scaled item that appeared in any one drawing and extrapolated the real-world dimensions of the blueprint space.
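In code, that step is nothing but a ratio. A sketch with invented numbers standing in for the real measurements:

    # One known dimension gives meters-per-pixel for the whole sheet.
    known_length_m = 5.69     # hypothetical quoted hull length
    known_length_px = 1138    # the same hull measured in the scanned image
    m_per_px = known_length_m / known_length_px   # 0.005 m per pixel

    sheet_width_px = 1600     # full width of the cropped blueprint image
    print(sheet_width_px * m_per_px, "m of real-world blueprint space")  # 8.0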
Within Carrara, I set the working box to the size of the blueprint space; this meant that if the model I was building was lined up accurately on the vehicle in the drawing, it would be the correct real-world size. This worked out to within a small degree of error (a fraction of one percent).
The two biggest problems I had within Carrara were, first, that I was working metric while most of the dimensional information was in feet and inches. So, a lot of multiplying by 2.54 to get the right units into the modeler. The other is that Carrara, stupidly, only displays two digits to the right of the decimal point. This means that a vehicle sitting within a ten-meter working box cannot have any numerical measurement smaller than 10 centimeters.
Which is ridiculous! Any of the detailed parts, then, could only be lined up by eye against a grid (which could be set finer than 10 cm). Once again, it is really stupid software for anyone doing a model more elaborate than the Linux penguin.
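At least the unit shuffle is mechanical; all that multiplying by 2.54 boils down to a throwaway helper like this (dimensions hypothetical):

    # Feet-and-inches to centimeters, for typing into a metric working box.
    def to_cm(feet, inches=0.0):
        return (feet * 12 + inches) * 2.54

    print(to_cm(18, 6))   # a hypothetical 18' 6" hull -> 563.88 cm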
The drawback of the method is that when I moved into checking for printability I had to divide by 56 all the time to find out what the print size of various parts was going to be. Finally I just reset the grid to be at 1 mm in the final print size of the model (aka 56 mm in world scale), and eyeballed everything to make sure I was staying within the Design Rules.
Since I knew the longest dimension of the completed model in real-world scale, all I had to do was divide by 56 to figure out what the size of the scaled mesh should be. The actual export from Carrara was at arbitrary scale (Carrara doesn't do scaled obj format). But all I had to do was type the correct longest dimension into the scale box in Hexagon 2.5, and the stl exported from there was correctly scaled for the Shapeways printers.
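The divide-by-56 check is the same couple of lines over and over; something along these lines, where the design-rule number is just a placeholder (the real minimum depends on the material):

    # Real-world size to 1/56 print size, with a printability sanity check.
    SCALE = 56.0
    MIN_DETAIL_MM = 0.7   # placeholder; the real design rule varies by material

    def print_size_mm(world_size_m):
        return world_size_m * 1000.0 / SCALE

    print(round(print_size_mm(5.69), 1))          # ~101.6 mm long as printed
    print(print_size_mm(0.05) >= MIN_DETAIL_MM)   # will a 5 cm detail print? True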
The last scale trick was to line up all critical-fit parts the same way they would be when assembled (as the printers aren't always equally accurate in the x, y, and z axes), and export them together (to make sure they are all scaled at the same ratio and will fit properly after printing). In this case, I attached the different parts together with sprue to make it easier for the lads and lasses at Shapeways to handle what otherwise might be small, fragile parts.
Monday, November 4, 2013
Eating Like a Horse...
....a cup of rolled oats and an apple. That was breakfast one day of tech. Lunch was a Clif Bar. Dinner was a little better; two servings of salad and the last Top Ramen (actually, Ichiban.)
At that time, out of the last five people who had promised me a check, none had paid up. The show I had just finished working reneged on the stipend the outgoing Production Manager had promised in his original email, and on the extra hourly the outgoing Artistic Director had promised verbally after I spent the first week of my "design" tracing, testing, and replacing wires all over the building.
That said, the next two checks are not late, per se. One is rental for a run that was extended an extra week or two. The other is a gig that sometimes pays on the day, and sometimes pays a few weeks later when the school gets around to dealing with it. Unfortunately on that gig (the East Bay Mini Maker Faire) I had to hire an assistant and I paid HIM. Out of my own pocket.
There is no petty cash drawer at the company where I'm currently in production. I spent money on microphone parts out of pocket on the understanding that I would be reimbursed promptly. Two days ago, they finally coughed up the check. Which was one day too late to save me from seventy dollars' worth of overdraft charges.
Saturday I had in hand the check for designing the show. Fortunately I calculate on a per diem, not on an hourly, but $425 for two weeks work is still pretty damned shy. Calculating by hourly....two weeks of 10-14 hour days makes for a base pay rate somewhere around $4 an hour. And to add the last insult to that injury, the check of course was handed to me during a twelve hour day at the theater. By the time I could make it to the bank, I'd been hit with yet another fee (this time from my gym).
At this company, mixing the show is treated separately, and gets paid a very decent stipend. On closing night. So there will be a whole lot of rolled oats in the weeks to come.
Oddly enough, I felt pretty good for most of the week. I guess I needed the diet.
It's Like Another World
I just opened and am in the middle of mixing "A Little Princess," the 2004 musical developed at TheatreWorks with hopes of making it to Broadway. My personal feeling is it will be a while before that happens. Twin story-lines and a surfeit of (sometimes unfortunately similar) songs cloud the underlying development and emotional arcs; what it feels like too often is a mere string of scenes, with no particular reason why one scene follows another. The current production is colorful and energetic, at least, so it is a decent night's entertainment.
(Image -- courtesy of TBA -- has nothing to do with this production, but at least is a place where I have worked.)
Technically the show has many challenges. I'm still struggling to define the "sound" for the show, which is being unveiled only slowly as we finish solving issues of monitors, band balance, off-stage singer placement, and RF interference. I'll get into those, and lessons learned -- but probably in another post. For the moment I'll say only that this company, like many theater companies, has trouble accommodating the "music" part of "musical." Music is constrained and degraded by choices across the production, from poor pit placement to limited rehearsal time.
(And at this company in particular, FOH is thought of as a trade, not an art. It is considered something that could be done by the numbers, by anyone with nimble fingers and sufficiently detailed notes from the director. The kinds of real-time artistic choice (and compromise) you have to make whilst flying the desk in front of an audience...well, even conceiving of a world in which this is part of the job seems beyond their reach.)
On the Effects Design side, as a passing note this is the most synth-free show I've done yet. The only sound of purely electronic origin that appears is the "sound" of the hot desert sun just before "Timbuktu Delirium." All the magical spot effects are instrumental samples (and a wind sample); rainstick, bamboo rattle...and an mbira brought back from Tanzania and played by my own clumsy thumbs.
But it is specifically effects design I am thinking hard about right now. I want to split the position again. I did three or four seasons with a co-designer; I engineered and set the "sound" of the show, he designed -- created and spotted and fine-tuned -- the effects.
I enjoy creating sounds. I enjoy it very much, and it is one of the things that brought me into theater in the first place. But I have some minor skill as a sound engineer and FOH, and that is a rarer skill in this environment. We can find someone else to create sound effects more easily than we can find someone else to engineer and mix the show.
(Actually, I think it might be best for this particular company if I left completely. Because maybe someone younger and better able to express themselves would be able to break down some of the institutional barriers and move sound in that theater up to the next level.)
(The risk, as in many such technical artistic fields, is that it would be just as likely for them to find someone without the appropriate skills, and for sound to suck in such a way that it drives audiences away and drives talent away without anyone involved being able to specifically articulate that it is because the sound sucks.)
(There's a common argument: that some elements of technical art -- color balance in lighting, system EQ in sound, period accuracy in architectural details -- are "stuff only you experts notice." That most of the audience will be just as happy with crap, or the wrong crap. I strenuously disagree.)
(If you put a dress of the wrong period on stage, no audience member will leap to their feet and say, "That bustle is 1889, not 1848!" But they will have -- even a majority of the audience will have -- a slight uncomfortable feeling, an itch they can't scratch, a strange sound from an empty room; a sense that Something is Not Right. And it will make their experience less than it could be. They may never write on the back of the feedback card, "The reverb tails were too long and disrupted some of the harmonies," but they will write things like, "The music could have been better.")
(Many audience members, and a disheartening number of production staff and management, have no idea of 9/10ths of what my board does. But when it all works correctly, the tech-weenie stuff we FOH types discuss in our own indecipherable tongue produces results that are easy to put in plain language and easily heard by most ears: sound that is pleasing, well-balanced, audible, exciting, and full, with clean dialog, etc., etc.)
But back to the subject.
Thing is, on a straight play the Sound Designer is almost entirely concerned with Effects. They can sit in rehearsal with a notebook, spotting sounds and transitions and underscores, taking timing notes, even recording bits of dialog or action in order to time an effect properly. During tech, they are out in the audience area where they can hear how the sound plays, and relay those discoveries about volume and placement back to the electricians and board operators.
On a musical, you are also trying to deal with the band, their reinforcement, monitor needs for band and actors, and of course those dratted wireless microphones. And far too many of the effects are going to happen when there are already a dozen things happening that demand your attention.
In my current house, two other factors make the job nearly impossible. The first is that due to budget we have carved down from up to four people on the job (Mic Wrangler, Board Mixer, Designer, and Sound Assistant -- during the load-in only), to....one.
I am repairing the microphones, personally taping them on actors and running fresh batteries back stage, installing all of the speakers and microphones and other gear, tuning the house system, helping the band set up, mixing the band, mixing the actors...and also all the stuff that has to do with effects.
The other factor at the current house is short tech weeks and a very....er...flexible...approach to creativity. We feel it is important to celebrate and sustain all those flashes of inspiration that come even in the middle of a busy tech with only hours left before the first audience arrives.
In other houses, we go into lock down earlier. Only when an idea is clearly not working do we stop and swap out -- and even then, it is understood by all parties that this will have a serious impact on every department and thus is not undertaken lightly.
When scenes are being re-blocked up to minutes before the doors open for the opening night audience, the idea of being able to set an effect early in tech, stick it in the book, and not have to come back to it, well...
This can be done. I built my first shows on reel-to-reel decks, bouncing tracks multiple times to build up layered effects. Modern technology means we can be very, very nimble. But it is getting increasingly difficult to be this nimble on top of the musical needs of the show. This is why I want to split the job.
Two of the technical tools I've been relying on more of late are within QLab. One is the ability to assign hot keys. This way, even if I've incorrectly spotted the places a recurrent effect has to happen, I can still catch it on the fly by hitting the appropriate key on the laptop during performance.
The other is groups and sound groups.
Most effects you create will have multiple layers. Something as simple as a gunshot will be "sweetened" with something for a beefier impact (a kick drum sample works nicely for some), and a bit of a reverb tail.
But just as relative frequency sensitivity is dependent on volume, dynamic resolution is also volume-dependent. And with the human ear, these sensitivities are mapped over time and space as well; the ear that has most recently heard a loud, high-pitched sound will hear the next sound that comes along as low and muffled.
Which means that no matter how much time you spend in studio, or even in the theater during tech setting the relative levels of the different elements of your compound sound, when you hit the full production with amplified singers and pit and the shuffling of actor feet (and the not-inconsiderable effect of meat-in-seats when the preview audience is added to the mix!) your balance will fall apart. The gun will sound like it is all drum. Or too thin. Or like it was fired in a cave. Or completely dry.
So instead, you split out the layers of the sound as stems and play them all together in QLab. Grouped like this, a single fade or stop cue will work on all the cues within the group. And you can adjust the level of the different elements without having to go back into ProTools (or whatever the DAW of your choice is).
This also of course gives flexibility for those inevitable last-minute Director's notes ("Could the sound be two seconds shorter, and is that a dog barking? I don't like the dog.")
(How I construct these: first I get the complete sound to sound right, preferably on the actual speakers. Then I selectively mute groups of tracks and render each stem individually. Placed in a group and played back together, the original mix is reconstructed. The art comes, of course, in figuring out which elements need discrete stems!)
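As a sketch, such a compound effect might sit in the cue list like this (the file names and levels are illustrative, not from an actual show):

    "Gunshot" -- Group cue, mode: start all children simultaneously
        gunshot_crack.aif      0 dB
        kick_sweetener.aif    -6 dB
        reverb_tail.aif      -12 dB
    "Gunshot out" -- Fade (or Stop) cue targeting the group,
        so it acts on all three stems at once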
(QLab will play "close enough" to synchronized when the files are short enough. For longer files, consider rendering as multi-channel WAV files and adjusting the playback volumes on a per-channel basis instead. I tried this out during "Seussical!" but it didn't seem worth the bother for most effects.)
Thing of it is, though...
My conception of sound design is one of total design. Even when a sound assistant (or, on simpler shows, the Stage Manager) runs the sound cues -- which does take the pressure off -- I consider effects playback part of the total sound picture of the show. To me, the background winds in this show are as much a part of the musical mix as is the hi-hat. At a minimum, I'm tracking effects through the board so I have moment-by-moment control of their level and how they sit in the final mix. It is sort of a poor man's, done-live rendition of the dubbing mix stage of a movie.
The avenue that continues to beckon is the idea that, somehow, I could do the bulk of the work offline, before we hit the hectic days of tech. This has always foundered on not having the props, the blocking, or any of the other parts I usually depend on to tell me what a sound should actually be (and to discover the all-important timing).
Since that won't change, it may be that what I have to explore is more generic kinds of sounds. I've been using QLab and other forms of non-linear playback for a while to make it possible to "breathe" the timing. Perhaps I can take more of the development out of the effects themselves and build it into the playback instead.
Except, of course, that largely just moves the problem, creating a situation where I need the time to note and adjust the cueing of built-up sound effects, as opposed to doing the same adjustment on the audio files themselves. And in the pressure of a show like the one I just opened, I don't even have the opportunity to scribble a note about a necessary change in timing or volume!
And the more the "beats" of an effect sequence are manually triggered, the more I need that second operator to work them whilst I am still mixing the rest of the show. There's one sequence in this show -- the nearly-exploding boiler -- that has eight cues playing in a little over one script page. There are already several moments in this show where I simply cannot reach the "go" button for a sound cue at the same time I need to bring up the chorus mics.
Perhaps the best avenue to explore, then, is generic cues; sounds so blah and non-specific they can be played at the wrong times and the wrong volumes without hurting the show! Which is the best argument for synth-based sounds I know...
(The other alternative is to make it the band's problem. But they are already juggling multiple keyboards, percussion toys, several laptop computers of their own, and even a full pedal-board; unless it appears in the score, they are not going to do a sound effect!)
Monday, October 21, 2013
Some Day My Prints Will Come
...And they did.
So the box from Shapeways arrived today. Cost of the model, with shipping: about fifty bucks (using the sintered Nylon-3 they call "White Soft Flexible").
Cut the pieces from the sprue with diagonal cutters, and tried a rough assembly. All the parts fit, and there wasn't any significant warpage.
As it turns out, I needn't have worried about the fit of the "socket" on the wheels; at that scale, it is going to be eyeball and a blob of glue anyhow.
And the turret is plenty generous. I might even shrink that tolerance a little.
And here it is, rough-fit (I omitted the undercarriage and just balanced it on the wheels instead). As you can see, some details got dropped or filled in. The only really objectionable part, though, is the stair-stepping on the rear of the hull. This is inevitable when combining a gently sloped surface with 0.12 mm print layers.
For reference, this is a "hero" render of the actual model. (Note: this is with all edges beveled, to reveal the actual polygons better; ordinary renders would smooth out the curves instead.)
So what is next for this model? Well, a few minor modifications to improve the print -- which is currently being offered for sale at my Shapeways store.
And the Poser version, which is taking a long time; I had to throw out most of the hull thickness so as to permit working vision blocks and opening doors. And although the details are a bit too fine to print properly in the material of choice, they are not quite fine enough for the Poserverse -- I need to replace hinges and latches with more detailed ones, and the vision blocks need to be completely rebuilt. Not to mention, you know, interior detail!
Tuesday, October 8, 2013
How to Ring a Prop Phone from QLab
Opening Remarks
There are new plays being written every day, but so many of the plays in our repertoire are older (if not actual old chestnuts). Between the aging subscriber base and the desire for familiar pleasures, you can be sure that in most theaters you work at, you will be doing "Charley's Aunt" at some point.
Which means that although we've moved on in our own lives to men without hats, jackets without vests, carriages without horses, lighting without gas, freezers without ice delivery, in fully half the repertoire older ways and older technology are part of the action. There are as yet few plays in which an iPad or a tweet appears -- but many in which a telephone has to ring. And I mean ring -- not a ringtone, but the good old electric clapper that was part of our lives for almost eighty years.
Theater sound design is changing as well. I am tempted to say it is becoming less realistic, less diegetic, but fuller and more complex. But that might just be the companies I've tended to work for. The result being, you are more likely today to play a sound effect off of digital media and through speakers, and less likely to make use of the storehouse of theater tradition with its crash boxes, starter pistols, thunder runs and, yes, phone ringers.
In any case, making a phone ring is an instructive problem. One word used around the Arduino community is "Physical Computing." Or, as Tom Igoe puts it, Making Things Talk. And that is the problem of getting software in the virtual world to do something out here in meatspace.
(How bizarre is it that Chrome's built-in spellcheck flags "diegetic" but not "meatspace?")
And thus, here is how I got an actual period piece of telecommunications to go through its paces once again under software control.
Physical Layer
I got lucky. I happen to own a 1U rackmount module that puts out Bell-standard voltage and ring signal (90 VAC at 20 Hz, superimposed on a 48 V DC offset). This has been the standard almost since Alexander Graham Bell spilled acid on his trousers (prompting him to call out, famously, "Watson, come here."). The theater also owns a 90 V, 30 Hz machine (the British standard).
There are some cheesy ways to do this. The craziest and most dangerous is to half-rectify wall voltage. You then get a sort of pulsed DC at approximately 60 volts, pulsing at the 60 Hz line frequency. The next step up in kludge is to use a step-down transformer to bring wall voltage close to 48 volts, then switch it on and off at 20 Hz through relays driven by an oscillator. This works better, although it lacks the DC offset.
Better yet are step-up schemes, because these can operate from the safety of batteries or the isolation of a wall-wart power supply. But this is not the moment to go into those (perhaps later I'll build one of my own from scratch, and document that).
Since I had the module, all I needed was a way to switch it; and since the ring is an AC signal, that means a relay. Some reading suggests the ring signal is probably under half a watt, which puts it within the range of a PC-mount relay. I was lucky enough to find one at the Rat Shack with a coil voltage of only 5 V (12 V is a lot more common for relays).
Since even that coil is a bit too much heavy lifting for an Arduino pin, a power darlington -- the old standby TIP-120 -- drives it, with a resistor between the Arduino output and the darlington for extra protection. Also: when a relay or solenoid is switched off, the collapsing magnetic field produces a voltage transient of inverse polarity to what was applied. A diode is soldered backwards across the coil of the relay for just this reason; the transient bleeds off through the diode instead of attacking the transistor.
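In schematic shorthand, the driver stage looks something like this (the resistor and diode values are typical choices for the job, not readings off my actual board):

    Arduino digital pin --[ 1k ]--> TIP-120 base
    +5 V --> relay coil --> TIP-120 collector
    TIP-120 emitter --> GND
    1N4001 across the relay coil, cathode toward +5 V (catches the turn-off transient)
    Relay contacts: switch the 90 VAC ring signal from the module to the phone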
This is quick-and-dirty electronics, as well as temporary, so an Arduino is fine, with an Arduino proto shield to hang the wires on. (This is a bare board with Arduino-matching headers on it; they have them at SparkFun, Adafruit, and the other usual suspects. I particularly like the one from Modern Devices myself.)
(The button you see taped to the desk is a back-up, wired in parallel.)
Software Layer
The chain of software starts in QLab, with a combination of MIDI cues, GOTO and RESET cues to set the cadence of the ring. (New picture, showing some of the MidiPipe window as well as the Processing app's window.)
To detail a little: the Phone On and Phone Off cues each send a MIDI note event. On is a MIDI "NoteOn" event, and Off is, well, a NoteOff. These are MIDI cues, which you need to unlock with the MIDI license for QLab (which has gotten quite a bit costlier since the Version 1 pictured here, sorry!)
Both cues are in a group cue so they fire together automatically. The pre-delay set for the Phone Off cue means it waits for 1.5 seconds before it actually fires. After an even longer pre-delay, the GOTO cue sends us back to the top of the sound group again. The actual standard US cadence is 2 seconds on, 4 seconds off. I picked a faster cadence -- and it works perfectly with the action.
The entire group cue was dropped on a RESET cue, which is inside a second group. This group holds a noteOff event (in case the loop was in the middle of a ring when the RESET cue was hit) and a sound cue. So it kills the GOTO, stops all the MIDI cues, fires off a second noteOff to make sure the phone stops ringing, and then plays the answering machine message.
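Laid out as a cue list, the whole thing is structured something like this (I'm showing a representative off-time on the GOTO; only the 1.5-second on-time is specified above, so tune the pre-delays to taste):

    Ring group (children fire together):
        Phone On     MIDI note-on     pre-delay 0 s
        Phone Off    MIDI note-off    pre-delay 1.5 s   <- sets the ring length
        GOTO top of Ring group        pre-delay ~3.5 s  <- sets the cadence
    Kill group:
        RESET targeting the Ring group (stops the GOTO loop and the MIDI cues)
        Phone Off    MIDI note-off    (in case we stopped mid-ring)
        Sound cue:   answering machine message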
The next step in the software chain is Processing, which receives the MIDI event sent from QLab and sends a serial message on to the Arduino.
The window of the Processing app (compiled as a stand-alone) gives no options for selecting the correct ports; those identities are hard-coded. The display text exists only to confirm everything is working correctly.
(There is also MidiPipe working here, because Processing wouldn't recognize QLab as a viable MIDI source.)
The key functions here are at the bottom; the noteOn and noteOff functions come from the themidibus library, and are called automatically when the appropriate event shows up at the designated MIDI port. When each function is called, a single ASCII character is output to the selected serial device (the attached Arduino).
The rest of it is boilerplate: list the available ports, pick a port, instantiate a MidiBus object from the themidibus class.
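The listing itself only ever existed as a screenshot, so here is a minimal reconstruction of the same logic; the port indices, baud rate, and trigger characters are my assumptions rather than the originals:

    import themidibus.*;          // The MidiBus library
    import processing.serial.*;

    MidiBus midi;
    Serial arduino;

    void setup() {
      size(300, 120);
      MidiBus.list();             // print the available MIDI inputs/outputs
      println(Serial.list());     // print the available serial devices
      // Hard-coded ports, as described above. Input 0 is assumed to be the
      // MidiPipe output; adjust both indices to match the printouts.
      midi = new MidiBus(this, 0, -1);
      arduino = new Serial(this, Serial.list()[0], 9600);
    }

    void draw() {
      // Nothing to animate; the display text only confirms the sketch is alive.
      background(0);
      fill(255);
      text("Listening for MIDI...", 10, 20);
    }

    // Called automatically by themidibus when a NoteOn arrives.
    void noteOn(int channel, int pitch, int velocity) {
      arduino.write('r');         // 'r' = start ringing (assumed character)
    }

    // Called automatically by themidibus when a NoteOff arrives.
    void noteOff(int channel, int pitch, int velocity) {
      arduino.write('s');         // 's' = stop ringing (assumed character)
    }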
The last stage of the software chain is the code loaded on the Arduino itself.
Even simpler code here (and a lot of it was leftover cruft from a different project that didn't actually do anything).
We're using the hardware serial port and the Arduino Serial library. The sketch simply checks, on every pass through the main loop, whether a character is waiting on the serial port. If I had been accumulating characters into a string, I'd need to flush it after each match; in this case it just reads whatever single character is present.
When the right character shows up, the relay and a blinkenlight are switched on. Since the outputs latch, they remain in that state until the loop sees the appropriate serial character to turn them off again.
I added a button to perform on-board testing and override the Processing end, but never got around to coding it.
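For reference, here is a minimal sketch of the behavior described above (minus the cruft); the pin numbers are arbitrary and the trigger characters are the same assumed ones as in the Processing sketch:

    // Minimal phone-ringer receiver. Assumes the TIP-120/relay driver hangs
    // off pin 8 and the on-board LED (pin 13) serves as the blinkenlight.
    const int RELAY_PIN = 8;
    const int LED_PIN = 13;

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(9600);          // hardware serial, matching the Processing side
    }

    void loop() {
      if (Serial.available() > 0) {
        char c = Serial.read();
        if (c == 'r') {            // ring: close the relay, light the light
          digitalWrite(RELAY_PIN, HIGH);
          digitalWrite(LED_PIN, HIGH);
        } else if (c == 's') {     // silence: open the relay again
          digitalWrite(RELAY_PIN, LOW);
          digitalWrite(LED_PIN, LOW);
        }
      }
    }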
Labels: arduino, electronics, how-to, MIDI, Processing, Qlab, sound, theater