Yes, it is another Tomb Raider 2013 rant. I wouldn't be doing this, really, if there wasn't so much to like about the game...
Anyhow, responding to the complaint of a strong ludonarrative dissonance, one of the developers retorted that you didn't have to kill every Solarii you encountered. You could "just bypass them."
So can you? Playing up through the Radio Tower, the totals came out to roughly one live body left behind for every person Lara was forced to kill. That's discounting the large set-piece battles and ambushes, where killing everyone is necessary for the game to advance.
1) Deer. You have to kill a deer to trigger the "go back to the campsite with meat and get a radio call from Roth" event. I shot a couple rabbits and a crow, but no, has to be a deer. However, you can leave all of Bambi's little friends unharmed (really, one deer carcass should feed her for the rest of her island stay!) 0/0
2) Wolves. After Mathias leaves with Sam, you get stuck in a bear trap. Three wolves will charge and the only way through the semi-quicktime is to kill all three. 0/0 humans, 3/3 wolves.
3) More wolves. The game really thinks you will fight them. It puts you on a bridge with clear lanes of fire and three more bundles of fresh arrows at your feet. Instead I scramble-ran all the way up the hillside to trigger the cutscene with Whitman. Got bit a couple times but super-healing power, right? 0/0 humans, 3/5 wolves (counting only combat encounters).
4) Vladimir. Killed in a quicktime. You kill him or get killed. That's your only bullet, so bypassing the other Solarii in the scene is not a moral choice. 1/1
5) Two guys in front of a door. This is the scene I thought I could win. You are trapped in a courtyard between a fire that springs up the moment you enter, and a door that requires you to pry at it with your hand axe. And...no go. The guys discover you in a cutscene, so you must fight them. I tried pushing them over. Tried shooting them in the knee. But no matter what I tried, the moment I started prying at the door someone would shoot a flaming arrow into it and Lara would drop the axe. 3/3
Yeah, the game is making a big point here with you getting over your first kill, being totally cornered, and choosing to fight back. So it is not a ludonarrative disconnect per se...although it is a little odd that exactly two minutes after throwing up, you are gritting your teeth and shooting people again. Plus, both headshots and finishing moves are already active by this point.
6) The Silent Kill. The game really, really, really wants you to stealth kill this one guy. It gives you fresh arrows and a cart to hide behind. And guess what? Movement keys are locked. Weapon switch is locked. It's pretty much another damned QTE; all you can do is track and fire. And if you shoot at so much as his big toe, he falls over dead. 4/4
7) Two more guys at the foot of a cliff. They spend a while facing each other, then one walks over towards you and discovers you. And even if you ran past them -- the rope ladder that gets you up the cliff isn't unrolled until you start shooting. If you stealth kill (what the game wants) the rope ladder guy comes down for the hell of it anyway. Regardless, you can't climb the ladder until they stop shooting at you, meaning once again, TPK. 7/7
8) The not-fireproof hut. The game has decided you are okay with shooting people in the back with a bow and wants to teach you a new trick now: doing it up close and personal. There are two bad guys with their backs to you and the game flashes up instructions on how to garrote them with your bow. Then you are supposed to get discovered by four other guys including a molotov-thrower who will set fire to the building you are fighting in. (A little later in the game, the same move upgrades to burying your ice axe in the back of their neck.) Well, I was having none of that. I jumped over the first guy, scramble-dodged the second, took a couple flaming arrows in the butt scrambling up to the next level, scramble-ran across that while everyone was yelling and shooting, and hopped on the zipline out of there. Success! (Assuming you don't count own-goals from Mr. Firebug.) 7/13
9) Wolf in cave. Killed in a QTE. Still 7/13, and wolves are 4/6.
10) Men at top of cliff. Well, what do you know? If you wait long enough, they say "Let's go inside out of the rain," and they do. And you can actually stroll right through the camp without them seeing you. (Not only that, but there's more conversation triggered if you do so; always a bonus with the clever writing and nice voice acting). Another four lives saved, and these ones didn't finish up the encounter setting fire to themselves. 7/16
11) Men on trail before Broken Tunnel. These ones are not designed to be bypassed. If you use the zipline, they trigger. If you choose to jump, you are injured but heal...and they trigger the moment you walk any closer. All three have cover and one of them is throwing molotovs to flush you out. Hell with it; I was on easy setting; I scramble-ran right up the trail, shoved them out of the way, got hit a few times with arrows and one axe but survived long enough to wriggle into the crack to the next level. And the VOs comment on this: "That's too narrow for us...we'll have to go around." The developers think of everything! 7/20.
12) Broken Tunnel. The way this is staged, you are supposed to fight your way from cover to cover against the six mooks in the courtyard, under constant fire from the machine gun, then fight your way up the stairwell to take the gunner from behind. The smart way to do it is to snipe the gunner, then silent kill all the mooks on the ground one by one.
I took the third path: I ran like hell. Rolled a lot and zig-zagged from cover to cover right through the whole mess. Now, later in the game you have to do this against Dmitri, so you know it is possible. This early on, no-one is going to break cover under fire from six guys and a heavy machine gun. Went right up the stairs, pushed the two there out of my way, jumped to the tower and had to struggle in that narrow walkway to push the other two down just long enough to climb up the ledge. They kept shooting and shooting but scenery blocked their bullets. 7/31
But the developers are still lying here. I'm using the easy setting, abusing the dodge mechanic, and relying on AI stupidity to do things that also break the ludonarrative dissonance. Lara may be, at this point in the game, reluctant to kill, but that does not in any way translate to "Cheerful about running right at a heavy machine gun!" And, no...there is no alternate path, no climb high above their heads, no concealment you can use to make a long torturous sneak. There is either combat and kill, or take an insane risk.
13) Inside the bunker, two mooks are wrestling with a big drum painted bright red and anyone who ever played a game in their life knows what they are supposed to do here. After you've killed them, another will pop out of a ceiling hatch. Instead I run, run, run, shove shove shove run run run. 7/34
14) The gas trap. The only way to advance is to light off the gas. And that fatally injures the mook with the Type 100. You can mercy-kill him if you like, but regardless, you set off a gas explosion in his face. 8/35
15) Ambush! Here the numbers go bad. The game gave you a light machine gun. It auto-selects to force it into your hands, goes into slow-mo, and glues your feet to the ground while three guys close on you with axes. On any mode other than easy, you pull the trigger or you die. I was on easy mode. I waited until the slo-mo ended, then jumped up to the balcony. Someone was already up there shooting at me so I pushed him off. And he died. OSHA is right, apparently. You can jump up and down this balcony all day, but if they fall off it, they die. And, well, stymied.
The iron door to the next room is opened by the reinforcements, and reinforcements won't come until all but two of the mooks are dead. At least I could make a partial sop to my desire to not engage in combat with them; I ran around like mad, jumping up to the balcony, pushing people off it, jumping back to the floor when it got too busy up there. Actually managed to kill about six of them without firing a shot.
The door finally opened, but the next door had the same latch problem; if you approach the door to lever it open, someone sets fire to it and Lara drops the axe. Even if there is just one guy left, and he is disabled, a flaming arrow will appear. You have to kill every last one. The only possible sop to your conscience is that a couple are bomb-throwers, and you can maneuver them into own-kills. Twelve more kills; 20/47
16) Guy on the bridge. Killed in a QTE. And, no, you can't climb over the truck or anything; the only path forward is through the cutscene. 21/48
17) The bunker in front of the tower is already alerted. Now, it is actually possible to run past all of them and into the second courtyard. But the doors won't open until they send the "big guy" out, and he won't come out until you are down to two or fewer alive in the entire freaking complex (minus the idiot who shows up with a molotov only after everyone else is dead. He is, at least, surprised to find you alive). I lost track a little at this point. There are two at the short tower, three or four in the tower filled with handy red drums of bang, another four that spring up after you've encountered the drum people, three or so snipers in the last building, plus the first of the guys who carries a safe door around with him all shift. I gave up and shot one of the drums, then ran around the courtyard like mad trying to taunt the big guy out. Shot people in the knees, but it was so crowded down there I accidentally triggered a finishing kill animation, and then we just said the heck with it. The only person Lara didn't kill was the idiot with the molotov. Call it 12 kills; 33/60
So, yes, by the time you've reached the radio tower, you've killed over half the people you've met. And those that survived are still shooting at you; there are a grand total of three Solarii still sitting comfortably in their huts at the top of a waterfall, out of the rain, unaware of how close they came to encountering Wolverine.
I tried to go a little further with this increasingly quixotic quest. I found one loophole; there's an ambush right after you get the flaming arrows and once again you are glued to the ground with an arrow notched and will die if you don't immolate three guys. But after that, you can actually run away from the entire ambush; they eventually say "Let the guys at the gate take care of her," or words to that effect.
So that's another twenty or so who you can spare, although the gate does not allow such mercy; twenty dead there. Then the tower Grim is holed up in; you must complete that fight and kill at least a dozen. And another dozen when you've fought your way around to take it from a different angle. There are about eight on the windmill you can bypass, though. Can't sneak past them; once again, the game triggers them automatically (in fact, the one time I crawled all the way around, they actually teleported in right in front of my eyes). You might confuse or reset them by going into the nearby tomb. But if you run around like a mad weasel long enough to confuse them properly, you can jump on to the tramway and leave them behind. Three or four machine gun bullets in the back are nothing to Lara -- not on easy mode, at least.
So for those next encounters, you are forced to kill roughly 3/5 of the people you encounter. And these are larger encounters, too; the blood on your hands is up over a hundred by the time you get out of the geothermal caverns. This isn't player choice; this is built into the level design.
(Yes...the Solarii revival meeting can almost be bypassed; stealth-kill two guards, run like hell to the gas valve, blow it up to open the next door...and the dozen or more you ran past die screaming in the resulting eruption and flames. Oh, well. At this point, one needs to take notice that since you set fire to their building, at least fifty die in the flames; you see some two dozen blasted by gouts of fire, crushed by flaming debris, or falling off a roof in flames. You hear a lot more voices screaming in the distance. Still, even counting only direct player kills, you leave over a hundred bodies in your wake.)
So, no. The ludonarrative dissonance is alive and well. Me, I have my own head-canon. Especially since I have the out-of-game knowledge that the first tortured sacrifice you see strung up is another student, a member of your expedition, and a friend, I pretty much decided my Lara went straight from fear to murderous rage. The moment she got a weapon, she was happy to kill every single cultist on that island. With an axe. Close-up.
Tuesday, November 25, 2014
Monday, November 24, 2014
The Reverberant Field
Graph the strength of your singers versus the volume in the orchestra pit. For any shared acoustic space, there is a point where your show becomes unmixable.
Which is to say, nothing you do with microphones and speakers will restore the proper balance. And it's worse than that. Sound reinforcement for music -- much less a musical -- is not about winning the volume wars; it is about achieving intelligibility. To follow the story, the audience must be able to understand the words. And that's much tougher to achieve.

A large part of the reason is the reverberant field. When sound enters a space, it reflects. When it has reflected several times, over a period of up to several seconds, it becomes what we call the reverb tail. There are several very interesting things about this tail. Since it has bounced back and forth off every available surface, it is essentially isotropic; it arrives from all directions at once. Since it is composed of reflections of the last second or several of material, it contains no identifiable transients. And since high frequencies are more easily absorbed, it is weighted towards the low end.
What all this means is that the acoustic remnant left after the vocal energy of singers, musical instruments, and sound reinforcement speakers has entered the space is a muddy, boomy mush. Referencing only the reverberant field, a string bass is just "Ooommooommooom"; no pluck, no notes, no rhythm. For vocals, you are lucky to make out the basic vowels; consonants are lost.
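That bottom-heavy weighting is easy to see in a toy model: each wall bounce absorbs a fraction of the energy, and that fraction is much larger at high frequencies. The absorption coefficients below are invented round numbers for a hard-walled hall, not measurements:

```python
def remaining_energy(absorption, reflections):
    # Fraction of the original energy left after repeated wall bounces,
    # each bounce absorbing the given fraction.
    return (1 - absorption) ** reflections

# Hypothetical absorption coefficients for a plaster-walled hall:
low_absorption = 0.03   # ~125 Hz: the walls barely touch the low end
high_absorption = 0.35  # ~4 kHz: each bounce eats a third of the top

for bounces in (1, 5, 10, 20):
    lo = remaining_energy(low_absorption, bounces)
    hi = remaining_energy(high_absorption, bounces)
    print(f"after {bounces:2d} bounces: low end {lo:.2f}, high end {hi:.4f}")
```

With these made-up numbers, after ten bounces the low end still holds about three-quarters of its energy while the high end is down to around one percent -- which is why the tail is all "ooom" and no attack.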
And it gets worse.
As you add volume, whether directly from louder singers or instruments or by turning up microphones, you "pump" the space. The reverberant field becomes stronger in a non-linear way, edging closer to a continuous wall of white noise. And certain frequencies are pumped as if in a laser cavity; room modes emerge: standing waves, tones that are captured and amplified and sustained.
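Those captured tones fall at predictable frequencies. Between two parallel walls a distance L apart, the axial modes sit at f_n = n·c/2L; a quick sketch (the room dimension is a made-up example):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def axial_modes(length_m, count=4):
    # Standing-wave frequencies between two parallel walls spaced
    # length_m apart: f_n = n * c / (2 * L)
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

# A hypothetical hall 10 m between side walls:
for f in axial_modes(10.0):
    # The lowest few "boom" frequencies this pair of walls will sustain.
    print(f"{f:.1f} Hz")
```

Every pair of parallel surfaces (side walls, floor/ceiling, front/back) contributes its own series, which is why a pumped room rings at a whole comb of low frequencies rather than one.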
(There's a few other side effects; fixtures and architectural elements rattle, adding more noise, and the audience becomes noisier as well.)
Since the reverberant field is practically by definition at equal volume throughout the acoustic space, the direct sound forms a ratio with it. Put a singer at one end of the room. Close by the singer, their voice is louder. By the time you get to the back of the room, the reverberant sound dominates.
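That ratio has a crossover: direct sound falls off roughly 6 dB per doubling of distance while the reverberant level stays put, so past some distance the room wins. A sketch of the crossover, with an invented 80 dB voice and 62 dB field (not measurements):

```python
import math

def direct_level_db(level_at_1m_db, distance_m):
    # Inverse-square law: the direct sound drops 20*log10(d) dB,
    # i.e. about 6 dB per doubling of distance.
    return level_at_1m_db - 20 * math.log10(distance_m)

singer_at_1m = 80.0   # hypothetical: singer measures 80 dB at 1 m
reverb_field = 62.0   # hypothetical: reverberant field, same everywhere

for d in (1, 2, 4, 8, 16, 32):
    direct = direct_level_db(singer_at_1m, d)
    winner = "direct" if direct > reverb_field else "reverberant"
    print(f"{d:2d} m: direct {direct:5.1f} dB -> {winner} dominates")
```

With these numbers the crossover lands between 4 and 8 meters; every seat behind that line is listening mostly to the room.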
And this has huge implications for sound reinforcement.
FOH mixing in any space smaller than a stadium is all about selective reinforcement. Take a small-scale example; a recorder and trumpet duet. In a small space, you might put a mic on the recorder and feed it into the house system to help it match the trumpet's volume.
Problem is, your house speakers aren't on stage. So if you stand in front of a speaker, you get just recorder. Stand in front of the stage, you get just trumpet. Only a little ways back in the hall do you get equal amounts of trumpet and recorder.
Take another typical example; bass player with their own amp. From the audience, all they hear is "oomoomoom." So you take a DI off the bass and emphasize the attack a little. This means two streams have entered the acoustic space:
"Ooomooomooom"
"Tac! Dac! Dac!"
And at the proper distance, they blend into the proper "Toom doom da doom doom da doom." But anywhere other than down the alley where direct projection from stage is roughly matched to direct tweeter radiation from speakers, you get something other than a good bass sound.
Go back to the first. One alternative is to mic both instruments. So from anywhere within range of the speakers, you are hearing both amplified recorder and amplified trumpet. Plus the reverberant field. And therein lies the catch. If the trumpet player, freed from the responsibility of blending with the recorder, increases in volume, the reverberant field increases non-linearly. And as you increase the speaker volume to compensate, the reinforcement is also fighting its own reverberant field.
Why is it non-linear? Well, air has mass. It takes a certain amount of energy to overcome the inertia of the air and set up standing waves. On the listener end, perception starts being non-linear; it takes ten times the power to be perceived as twice the volume, but this is also frequency dependent and the frequency response changes at different volumes. At higher volumes, masking effects in the cochlea become more dominant, up through the onset of hearing fatigue; short-term hearing loss.
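The two rules of thumb in that paragraph can be put side by side: ten times the power buys you 10 dB, and each 10 dB only roughly doubles perceived loudness (the sone scale). Both are approximations that themselves bend with frequency and level:

```python
import math

def db_gain(power_ratio):
    # Ten times the power is a 10 dB gain.
    return 10 * math.log10(power_ratio)

def loudness_ratio(gain_db):
    # Rule of thumb: each +10 dB roughly doubles perceived loudness.
    return 2 ** (gain_db / 10)

# Swap a 100 W amp for a 1000 W amp:
gain = db_gain(1000 / 100)
print(f"{gain:.0f} dB louder")                 # 10 dB
print(f"{loudness_ratio(gain):.1f}x as loud")  # only 2.0x to the ear
```

Ten times the amplifier, twice the apparent volume -- and all of that extra acoustic energy still pours into the reverberant field.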
Which all works out to a progressive loss of perception of the high-frequency content. As you crank higher and higher, the desired sound -- the harmonics that define the character and timbre, the higher-frequency sounds that define the attack transients, and of course the very transients themselves -- gets lost in the smear of reflected, bottom-heavy, indistinct noise that makes up your reverberant field.
But good luck getting it fixed. If you ask for the trumpet to perhaps play a little softer, all you will get is well-meaning suggestions to turn his mic down. Which, of course, devolves us back to the first set-up; where the people directly in front of the speaker hear the recorder and only a couple of seats get a decent balance.
(Only you've lost that, too, since the reinforced recorder is now masking itself; the transients are lost in the echoes of the reverberant field, and the characteristic timbre is masked in the thrumming low end you've filled the hall with. And of course the trumpet is also lost in the reverberant field now, meaning you might as well have two fretless basses up there for all the character and melodic content that comes through).
Consider what selective reinforcement means for design of a musical. From the back of the house, the reverberant field overwhelms anything coming directly from the stage. From the back of the house, then, it becomes clear you need to roll off the low end and push the presence in the vocal microphones.
Which if done right can provide a pleasing voice in which it is generally possible to make out the words.
But move closer to the main speakers. If you stare down the throat of the speakers, you are getting just the reinforcement; an overly bright, tinny, compressed voice with lots of artifacts. Not pleasant to listen to!
In many seats towards the front of the listening space, some direct bleed comes off the stage and mixes with what's coming off the speakers, plus of course you have the reverberant field. So low end is filled in by the field, mid-range and localization cues come directly from the singer, and presence and definition and consonants are coming off the speakers. And if the speakers are time-aligned properly and not too far away in the horizontal plane (the human ear gives more of a pass to misalignment in the vertical plane) you get a pleasing voice.
Of course, the reverberant field is delayed. That's what it is; reflections from the distant walls. So that smears the lower end. And, yes, it is impossible to align your mains perfectly for all seats; a little geometry will demonstrate that! But here it gets even trickier; the human perceptual apparatus will bring in all the frequency data more-or-less without question, but it will provide perceptual focus based on location, time-of-arrival, and frequency content.
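The geometry is easy to check: the gap between the two arrivals at any seat is just the path-length difference divided by the speed of sound. A sketch with invented seat positions (the ±30 ms fusion window is a rough precedence-effect figure, not a hard spec, and this ignores any electronic delay in the system):

```python
SPEED_OF_SOUND = 343.0  # m/s

def arrival_gap_ms(stage_dist_m, speaker_dist_m):
    # Positive: the speaker's sound arrives first; negative: the stage does.
    return (stage_dist_m - speaker_dist_m) / SPEED_OF_SOUND * 1000

# Hypothetical seats: (distance to singer, distance to nearest main speaker)
seats = {
    "front row": (3.0, 6.0),
    "mid house": (12.0, 9.0),
    "back row":  (25.0, 20.0),
}
for name, (stage, speaker) in seats.items():
    gap = arrival_gap_ms(stage, speaker)
    # Within roughly +/-30 ms the ear fuses the two arrivals into one
    # voice (precedence effect); beyond that the speaker reads as an echo.
    fused = abs(gap) < 30
    print(f"{name}: gap {gap:+5.1f} ms, fused={fused}")
```

Which is why a delay setting that is perfect for one seat is necessarily wrong for another; all you can do is keep the error inside the fusion window for as many seats as possible.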
Which boils down to -- the total mix may or may not be pleasing, but it is nearly impossible for untrained ears to sort out which elements are contributing what to the total mix. The reverberant field, particularly, is generally unperceived. The influence that low end leakage has on the volume sensitivity and resulting frequency sensitivity curves in the listening ears will always be underestimated and overlooked.
As that reverberant field rises in ratio (whether due to sheer level, or to increasing distance of the listener from stage and/or speakers) the ability to sort out the desired musical information diminishes. It is like the dark matter of sound at that point: making voices sound muffled and inspiring lots of orders to "turn it up, turn it up!" when the problem is the unseen drag of low-frequency noise.
And this means the correct emphasis to blend with the acoustic material in the space is different for each seat. And different for each voice. And changes there, too; the singer who needs a lot of help will be emphasized more in speakers, the singer with the strong voice will be almost non-existent in the speakers (depending on where you mix to, of course; whether you mix for the person standing in front of the speaker, the person in the back of the room, or try -- as most of us do -- to achieve a compromise that will work for as much of the audience as possible).
This means if Ethel Merman sings a duet with Little Suzy, the people near the speakers will complain all they hear is a child's screechy voice being amplified way too much, and the people everywhere else will hear nothing but Ethel Merman. This of course evens out the flatter your speaker coverage is!
And it gets subtler than this. Take two singers of similar vocal timbre but different strengths of production. As the reinforcement is balanced for tonal deficiency, they will sound the same in the sweet spot, but one will have an odd, artificial-sounding microphone voice right next to the speakers. And that same one will be dull and muffled compared to his partner when listening from a distance in the hall.
Or take either of those singers, and change the total volume of the song. At low volumes of reinforcement, the natural voice dominates in the front of the hall and the reverberant field dominates in the rear. At higher levels of reinforcement, the microphone sound dominates over greater and greater parts of the hall. And if the microphone sound is tailored to emphasize needed frequencies and otherwise selectively reinforce the voice, the amplified sound will pass through insufficient, to nicely blended, to artificial as you increase volume.
This is why reinforcement -- working around and supporting the natural acoustic sound from the stage and the orchestra -- is the most difficult kind of sound.
It is also why amplification -- powering over the direct sound with a fuller-range, more accurate picture -- only works within certain boundaries.
And in both cases, you remain at the mercy of backline leakage. No matter what strategy you pick, if a band continues to turn up their instruments, there will come a point where there is no alternative left for mixing. The show is just going to have to suck.
In other news, I just finished the third weekend of mixing "Poppins."
Saturday, November 22, 2014
King of the Rocketmen
Finally got back to the CAD software:
This is a "flash hider" that sticks on a stock Luger pistol to make a "King of the Rocketmen" prop. When I've finished the model, it will be freely available to print at my Shapeways store...and I'll look at the numbers for lathing one out of aluminium.
And I really don't understand Fusion 360.
Okay, first off, I'm learning CAD. That's an expected learning curve. CAD is a different way of doing things, and I also have been emphasizing organic and poly modeling techniques in the past: CAD leans much more towards parametric methods.
But I'm having a lot of trouble understanding how Fusion 360 is organized. And I'm not alone in this. It seems to have a collection of semi-hierarchically related structures, among them bodies, patches, components, and parts. How they relate, which can contain which, and which can be converted into which is quite unclear. Furthermore, the possible operations change greatly depending on which one you are looking at, even when two objects look identical and were created in almost identical ways.
The software is being rapidly revised, including even the names of parts and functions, and there exists nothing even slightly like a manual. There are videos, but they use terms and show a workspace which is several versions superseded and in many cases no longer applies.
Above that is a bigger question of "What the heck is this software for?" I have my own suspicions. I think that Autodesk had a bunch of maverick programmers and marketing people; beanie-wearing, espresso-drinking hipsters who were a little too out there for the flagship products. So they put them in a room and told them to go wild with a sort of mad mix of trial balloons, concepts in search of an application, some solid code, and above all an urge to look as trendy as humanly possible.
This means not just the team, but most of the active user base, are people who have way too much experience with Inventor and so forth. Making the software accessible to outsiders doesn't seem like a priority, because they are working in a self-exclusionary echo chamber.
It is fairly telling that the top threads in their own forum are about how to include videos and personalized graphics in one's forum posts.
In a refreshing alternative to most 3d software, what they seem to think of as their selling points are not the modeling functionality, and certainly not gosh-wow render tricks or a suite of included-with-the-download DAZ dollies, but instead the concept of sharing and portability. They believe strongly in this idea, although the implementation seems nebulous (really, now; software that is buggy on a powerful desktop machine, with a graphics-heavy interface, is not going to translate well to a smart phone. And the cloud storage system they already have for file management is crawling-slow, heavy, and unwieldy even on a decent work-based connection. Good luck shoving those files down the pipe you will get at a beach house!)
As far as I can tell, in fact, all the vaunted "sharing via cloud" could be achieved just as well by throwing a save file at DropBox every now and then. But I could be missing the point. Right now whatever it is, it confuses the existing user base more than it accomplishes anything else (lots and lots of forum threads wondering how they can access their own files after a quit and restore!)
In any case, I struggled for days just to lathe the simple shape above. Practically every standard method I tried ended up with unselectable edges or grayed-out "OK" buttons, with no real explanation why. At least one of them crashed the application completely. And what I finally got violates the spirit of parametric modeling completely, in that it is not easily editable. Half the useful functions require turning history off, and the hierarchy of parts doesn't actually seem to resolve into a final form whose sub-forms can be edited.
Navigation is still broken, although I'm using third-party tools to remap and create a middle mouse button, which should help. (For me, 3d apps live or die on navigation. If I can't zip around an object to view it from different angles, I can't build it properly. But the vast majority of 3d apps now start as Windows-native, and tend to take such things as three-button mice as standard. Even power Mac users don't get these easily -- nor do they play well with the rest of the Mac environment. But funny thing; an app as otherwise ghastly as Carrara can implement smooth 3d navigation and selection and even basic movement, rotation, and scaling tools with just a couple of control keys. So it isn't that F1-home key-scroll wheel is necessary to make 3d navigation work. It is just that many 3d app programmers are, well, stupid.)
Oh, yeah. And the sad thing is, the tool path tools are poor enough that I'm going to end up exporting and generating tool paths and so forth in a different app anyhow. Which means I might as well have built the thing in a poly modeler to begin with.
And I really don't understand Fusion 360.
Okay, first off, I'm learning CAD. That's an expected learning curve. CAD is a different way of doing things, and I also have been emphasizing organic and poly modeling techniques in the past: CAD leans much more towards parametric methods.
But I'm having a lot of trouble understanding how Fusion 360 is organized. And I'm not alone in this. It seems to have a collection of semi-hierarchically related structures, among them bodies, patches, components, and parts. How they relate, which can contain which, which can be converted into which, is quite unclear. Furthermore, the possible operations change greatly depending on which one you are looking at; even if it is otherwise identical looking and created in an almost identical method.
The software is being rapidly revised, including even the names of parts and functions, and there exists nothing even slightly like a manual. There are videos, but they use terms and show a workspace which is several versions superseded and in many cases no longer applies.
Above that is a bigger question of; "What they heck is this software for?" I have my own suspicions. I think that Autodesk had a bunch of maverick programmers and marketing people; beanie-wearing, espresso-drinking hipsters who were a little too out there for the flagship products. So they put them in a room and told them to go wild with a sort of mad mix of trial balloons, concepts in search of an application, some solid code, and above all an urge to look as trendy as humanly possible.
This means not just the team, but most of the active user base, are people who have way too much experience with Inventor and so forth. Making the software accessible to outsiders doesn't seem like a priority, because they are working in a self-exclusionary echo chamber.
It is fairly telling that the top threads in their own forum are about how to include videos and personalized graphics in one's forum posts.
In a refreshing alternative to most 3d software, what they seem to think of as their selling points are not the modeling functionality, and certainly not gosh-wow render tricks or a suite of included-with-the-download DAZ dollies, but instead the concept of sharing and portability. They believe strongly in this idea, although the implementation seems nebulous (really, now; software that is buggy on a powerful desktop machine, with a graphics-heavy interface, is not going to translate well to a smart phone. And the cloud storage system they have already for file management is crawling-slow, heavy and unwieldy even on a decent work-based connection. Good luck shoving those files down the pipe you will get at a beach-house!)
As far as I can tell, in fact, all the vaunted "sharing via cloud" could be achieved just as well by throwing a save file at DropBox every now and then. But I could be missing the point. Right now whatever it is, it confuses the existing user base more than it accomplishes anything else (lots and lots of forum threads wondering how they can access their own files after a quit and restore!)
In any case, I struggled for days just to lathe the simple shape above. Practically every standard method I tried ended up with unselectable edges or grayed-out "OK" buttons, with no real explanation why. At least one of them crashed the application completely. And what I finally got, violates the spirit of parametric modeling completely in that it is not easily editable. Half the useful functions require turning history off, and the hierarchy of parts doesn't actually seem to go into a final form that includes sub-forms that can be edited.
Navigation is still broken, although I'm using third-party tools to remap and create a middle mouse button, which should help. (For me, 3d apps live or die on navigation. If I can't zip around an object to view it from different angles, I can't build it properly. But the vast majority of 3d apps now start as Windows-native, and tend to take such things as three-button mice as standard. Even power Mac users don't get these easily -- nor do they play well with the rest of the Mac environment. But funny thing; an app as otherwise ghastly as Carrara can implement smooth 3d navigation, selection, and even basic movement, rotation, and scaling tools with just a couple of control keys. So it isn't that F1-home key-scroll wheel is necessary to make 3d navigation work. It is just that many 3d app programmers are, well, stupid.)
Oh, yeah. And the sad thing is, the tool path tools are poor enough that I'm going to end up exporting and generating tool paths and so forth in a different app anyhow. Which means I might as well have built the thing in a poly modeler to begin with.
Friday, November 21, 2014
Picture-Heavy Post
So that's my view for most of a show.
Well, not really. I sit pretty high, and I make it a point not to let my head get buried in the gear. I just have to glance at the computer every now and then to make sure the right sound cue is loaded. Cues are fired from the MIDI keyboard, with additional wind noises improvised from same. The script is out of frame to the left. The major entrances and exits are all programmed, and scenes are called up with the User-Assignable Buttons there at the far right edge of the LS-9. Unfortunately there aren't enough channels to have body mics, chorus mics, sound effects, effects returns, and band all on the top layer, so I spend a bit of time flipping back and forth between layers.
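The "cues fired from the MIDI keyboard" part is conceptually simple: a note-on message comes in, and a lookup table maps the note number to a cue. A minimal sketch, with cue names and note numbers entirely invented (a real rig hands the cue off to playback software rather than returning a string):

```python
from typing import Optional

NOTE_ON = 0x90  # MIDI status nibble for a note-on message

# Hypothetical mapping of keyboard keys to show cues.
CUE_MAP = {
    60: "SQ12 door slam",    # middle C
    62: "SQ13 phone ring",
    64: "SQ14 thunder roll",
}

def handle_midi(status: int, note: int, velocity: int) -> Optional[str]:
    """Return the cue to fire for a note-on, or None.
    Note-on with velocity 0 is treated as a note-off, per the MIDI spec."""
    if (status & 0xF0) == NOTE_ON and velocity > 0:
        return CUE_MAP.get(note)
    return None
```

The unmapped keys stay free for the improvised wind noises.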
Anyhow, a lot more spacious than what the poor musos see during the show:
This is from behind brass land, looking towards the center of the pit. That big piece of nasty-looking fabric is a dust cover for the second keyboard. Barely visible tucked in a corner under the stairs is an old television set connected to a chip camera taped up to the edge of the stage and looking directly at the conductor.
This is looking from about where the MIDI gear of first keyboard sits towards drum land, with the multi-reed seated in a tiny corner right beside the drums.
I'd have the kit properly mic'd, but the drummer is uncooperative; he's playing loud and inconsistently, and he took it on himself to move the kick mic because he thought it sounded bad. I only use the overhead for the show now; the rest of that is all muted.
And just for fun, here's how I'm dressing one of the body packs for the show:
Connector is sealed with heat-shrink and hot glue. Then moleskin is wrapped around the connector, the antenna, the top of the mic, around the element right behind the head, and on the cord just before the connector.
Next, first condom is sealed over the mic with wraps of waterproof tape. Then a second condom is put over that, then the whole thing goes in a mic bag.
This sort of abuse explains why, at least once a run, I have to wipe the accumulated goo off the microphones with Goo-Gone, and soak the elements and filter caps in alcohol.
Monday, November 17, 2014
The Engineer's Dilemma
It takes a lot of the fun out of quitting if there is infrastructure involved. Because, if you did good work, the new guy can coast for a while: doing less work, spending less money, and being less of a pain in the ass to the powers that be.
Until the lack of maintenance and the added cruft finally brings down the system -- whether it is a machine, a code base, or a properly set up sound system. But by that time they will have long forgotten you, and management can make up other self-serving excuses why they are forced to go back to spending money and taking what seems excess time and unnecessary concessions.
(Another frequent scenario is when the gear is old, management refuses to authorize any upgrades, and you spend way too much of your time in repairs and patches and work-arounds. Until a new guy shows up via social circles and dazzles them with "unlike your old hire, I am a professional." And the first day on the job, they are upstairs complaining; "How can a professional like me work with such outmoded equipment?" And the stuff you wanted for so many years gets bought for the new guy.)
Saturday, November 15, 2014
Men With Hats
I don't like to wear a hat when mixing because it blocks out some of the sound. Tonight, I mixed the show with the hat on -- because it blocks some of the sound.
Specifically, the way I'm configured now, reverberant sound dominates at the mixing position, but direct sound is prominent in most of the seating (I'm located behind and slightly above the audience, not within them as with a proper FOH position). And the psychoacoustics are drastic; with the reverberant sound in my ears, the mix sounds muffled and boomy. With the reverberant sound partly blocked, the mix sounds clear and possibly over-bright.
It gets even more complex since the band is in a pit under the forestage and being pushed out of stage monitors; their sound has a higher fraction of its energy sent into the reverberant sound of the hall. Which means that using the reverberant sound as a guide, the voices are low in relation to the band. Using just the direct sound as a guide, however, the voices are too loud in relation to the band.
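The direct/reverberant balance above can be put in rough numbers. Direct sound falls about 6 dB per doubling of distance (inverse-square law), while the reverberant field is roughly constant throughout the room; past the critical distance the reverberant field dominates. A sketch of the arithmetic, with the specific distances purely illustrative:

```python
import math

def direct_level_db(distance: float, ref_distance: float = 1.0) -> float:
    """Direct-field level relative to the level at ref_distance,
    falling 6 dB per doubling of distance (inverse-square law)."""
    return -20.0 * math.log10(distance / ref_distance)

def direct_to_reverb_db(distance: float, critical_distance: float) -> float:
    """Direct-to-reverberant ratio in dB. At the critical distance the
    two fields are equal (0 dB); beyond it the reverberant field wins."""
    return 20.0 * math.log10(critical_distance / distance)
```

A mix position sitting at twice the critical distance hears the direct sound about 6 dB under the reverberant wash; the seats well inside the critical distance hear the opposite. Same mix, very different pictures.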
And a hat turns out to be about the right level of partial blockage to give me confidence in the house mix but not steer it in the wrong directions.
I'm also making a conscious effort to keep my head up, looking at the stage. "Everybody look at your hands" is bad advice for an FOH mixer. Dave Rat goes so far as to turn his mixing desk sideways, to further reduce the temptation to bury his head in the knobs and meters. I've got this show programmed and trimmed to where I really don't have to look down very often to confirm I've got the right thing up. And I'm basically done with tweaking EQ, so no need to be staring at the displays during the show, either. Like a lot of recent shows, though, there are so many scenes, I still need to flip script pages occasionally to remind myself of what's coming up.
The eye is one of the stronger guides to localization of sound in an environment with conflicting or confusing sonic clues. Watching the stage helps you hear the show the way the audience does, with your attention pulled into lips and faces and the input of those additional clues to help sort out the sonic mess.
So all to the good...even if it is not, yet, safe to dance.
Thursday, November 13, 2014
Hedgehog Renewed
Right, so no more grinding at home -- I'll do that at TechShop. And no more welding with the flux-core wire welder; I'll take the SBU on the MIG at TechShop and use that instead. A good choice anyhow, as it is more convenient, and MIG allows me to weld on aluminium.
But on impulse I took the grinder to the new piece anyhow, just to see how bad it was. And I'm glad I did. Because I've got enough done now for a trial fit-up.
And that's a real confidence booster:
Feels pretty solid already, even though the receiver is still in two pieces, held together only by the fake bolt. And surprisingly heavy -- this is no Rambo one-handed wield (and I say this as a former M60 gunner, too!)
(And, no, this is not and will never be a functional firearm. For one thing, I lack the gunsmithing skills to achieve the tolerances and the quality of welds, much less little details like heat treating. Even less do I have the skills necessary to make a fully legal semi-automatic weapon -- as cool as that might be. In short, I could just barely -- if I chose -- make something dangerous to shoot and illegal to own, and I have no intention of going in that direction.)
(922(o) et al gets rather complicated. The gist is that since 1986 no "new" machine gun can be created or imported into the US, and it is the receiver that makes it a machine gun. What I have right now is still, legally, "steel scrap" and non-regulated. But the instant I complete the last weld, I have by law created a machine gun -- unless I take steps first to make sure that it cannot be, in the language of the BATF, "easily converted into" a functional weapon. Which language appears to be defined in practice as "...in about eight hours by someone with a fully-equipped machine shop.")
(Since I'm switching to MIG now, I will be able to weld the fake bolt in place. That should satisfy; someone would have to cut it up at least as much as the original demill in order to salvage it -- and at that point it would legally become scrap again.)
Rack Management
I've been looking at pictures, in the context of some rather snarky articles, about wire management in audio racks. The two racks I'm currently maintaining are far, far, far from that ideal.
The places I work, however, are live theater. We don't set up the same way from show to show. There isn't a standard, there isn't a way of marshaling it all down to a turn-key solution. My racks are untidy because they are works in progress -- even, in progress in the middle of a run.
I've even been tinkering with the house speaker aim points and the processor settings for same. This is because the sound I am attempting to reinforce keeps changing location and character (for example, when the orchestra moves into the pit instead of playing onstage or backstage).
At the other venue I support as more-or-less house engineer, we are about to do our first "straight" play after a bunch of musicals. How will we end up changing the new system to better support canned sound effects playback? Perhaps once we've discovered that, I can drag out the clips and wire ties and make something a little more sensible inside that cabinet.
At my primary theater, we're talking about moving the entire rack now. So it's a little early yet to be thinking about nailing things down!
Conversations with Solarii
Well, not really conversations. Lara hardly says anything to them, and you have no playable options to extend those few brief exchanges.
(Well, practically no. Turns out, if you intentionally prolong the fight on the Endurance's deck, or better yet, let Boris get a couple hits in, there's more dialog triggered. Lara will even yell, "Will you shut up!" at one point.)
In any case, I've been skulking around the island seeing how many extra Solarii I can trigger and, with luck, listen in on their conversations. What makes this happen is this: the game provides a lot of collectibles. You can make it to the end game without bothering to pick up a single one (the only plot-important items are handed to you in cut scenes). But for whatever reason, the designers decided to implement a system that lets you keep the same saved game but go back to previously crossed areas to search for additional relics.
Because in the standard progression your path is often broken behind you (forcing you to stay on the tracks of the railroad plot), the method the designers chose is the somewhat reality-straining Instant Transport function. From any camp, you can immediately visit another camp.
To keep things interesting, when you do this re-visit, there are fresh mooks guarding the goodies. Like all the mooks, they appear to be triggered by you crossing certain points on the map. You can very much search a room thoroughly, then walk across the entrance and spawn a couple mooks in the middle of a long game of chess right where you were searching. And this sort of thing happens more often when you are on the less linear path of trying to collect every last mushroom or coin or whatever.
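The behavior described above is consistent with plain trigger volumes: cross an invisible line, mooks spawn, regardless of whether you just finished searching the room beyond it. This is pure guesswork about the game's internals; a toy sketch with invented names:

```python
# Hypothetical sketch of trigger-volume spawning: enemies appear the
# first time the player enters a trigger region, even if the area
# beyond it was already searched. Not based on the actual game code.

class Trigger:
    def __init__(self, x: float, y: float, radius: float, mooks: list):
        self.x, self.y, self.radius = x, y, radius
        self.mooks = mooks
        self.fired = False

    def check(self, px: float, py: float) -> list:
        """Spawn this trigger's mooks on first entry; nothing after that."""
        if self.fired:
            return []
        if (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2:
            self.fired = True
            return list(self.mooks)
        return []
```

The "chess game materializing where you just searched" effect falls out naturally: the spawn point and the trigger point are different places, and only the trigger cares where you are.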
Because these encounters are generic, the conversations you overhear don't tend to refer to anything in the evolving plot: the fact that the stronghold is in flames, the mountaintop is in the middle of a snowstorm, or that Himiko is dead and the sun has come out for the first time in months (to take a few examples from play). Pity. I'd love to hear what the excellent voice actors might be talking about if they realized they were free to leave the island and that their entire cult was a pointless waste.
The highlight so far was listening to the poor Solarii looking at the elevator you broke to get up to the General's tomb, grumbling about how he has to fix everything on this junkyard island. That's an entire mini-scene -- recorded dialog, staged encounter -- you'd never see in strictly linear play. A more spectacular example: if you go back to the first camp with Roth after the rescue plane crashes, you find the plane strewn over the campsite and Solarii busy taking it apart for salvage.
Another weird little break from reality I've noticed. This is a scaling problem. What I mean is, game mechanics often have to satisfy both a single use, and multiple uses, and it is impossible to optimize for both. In the Civ games, it is fascinating to learn about the origins of crop rotation, to send your people out to find and quarry stone and build a road back to your growing town in order to build a granary...but by hour six of the game, with fifty cities to manage and a technology tree longer than your arm, it becomes painful clicking through the long build list and trying to manage dozens of idiot workers.
In Tomb Raider, one mechanic is to allow you to rifle bodies for a few bits of extra salvage. Another is to gain small amounts of ammunition in those same searches (which you unlock by purchasing the appropriate skill). Another mechanic is of course limiting maximum ammunition carried so you can't just hose everything. A last minor mechanic is one that limits the number of entities on the map by getting rid of slain entities over a certain number.
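The last of those mechanics -- culling slain entities over a cap -- is never documented in-game, but the observed behavior suggests a simple first-in, first-out rule: oldest corpse despawns first, taking any unsearched salvage with it. A guesswork sketch, with the cap and names invented:

```python
from collections import deque

# Hypothetical FIFO model of the "slain entities over a certain number
# get removed" mechanic. Pure speculation about the game's internals.

class CorpsePool:
    def __init__(self, cap: int):
        self.cap = cap
        self.corpses = deque()

    def add(self, corpse_id: str) -> list:
        """Register a new corpse; return whatever despawned to stay under cap."""
        self.corpses.append(corpse_id)
        removed = []
        while len(self.corpses) > self.cap:
            removed.append(self.corpses.popleft())
        return removed
```

Under a rule like this, dawdling before looting costs you: kill enough enemies and the earliest bodies (and their ammunition) silently vanish, which is one more pressure toward searching mid-fight.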
The result, however, is that you end up searching the bodies during the fight. Especially if you are trying to cherry-tap, using only the pistol to get through one of the big set-piece battles; the only way to reload is to take a few bullets off dead enemies on the run. Which is pretty credulity-straining as a concept.
Because searching happens over and over in the game, it is made a very brief action (having a five-minute animation played every time would be annoying). Also, the AI has lag. Some of it may be programmed in that way, and some may be a result of a mechanic that says if you are "scrambling" (aka moving around with frequent taps of the "dodge" button) you are harder to hit. So you can pretty much run over to a body and perform a search while under fire; it will take the AI that long to figure out where you went.
These same limitations in the AI also leave room for a more interesting combat style on some levels. The game really wants you to hunker down among cover and snipe. Instead, I'm having fun running right up among the attackers, getting inside their lines and hewing away with the axe. They get so confused they have trouble figuring out which way you went, and it takes them long enough to switch from ranged fire to hand-to-hand that you can usually get crippling blows in before they can react properly.
I did get killed a lot doing this, but it was a lot of fun.
Wednesday, November 12, 2014
Alas, Poor Hedgehog
The next weld did not go as well. I still can't see what I'm doing, and still ended up with a lot of voids. Worse, I got a spot of weld metal on the end cap, which worked its way into the threads, and there was nothing for it but to twist the thing off anyhow...stripping the threads in the process. Fortunately, this is just the receiver threads (the end cap is made of sterner stuff. Or at least, it is now, having not been brought to annealing temperature by the original plasma cuts and the more recent welding.)
Anyhow, freeing the cap took tools and time I didn't have at the theater, so I gave up welding for the day and took it home. With the second piece tacked on, I now have enough to properly space the last weld. And it looks like there's a very large fill involved there (the plasma torch seems to have cut, backed off, then cut again at a slightly different angle, leaving a really large gap). I'm not sure puddling flux-core wire in there is the best move. I'm thinking I really do need to switch to MIG.
The next MIG class is a week from today. Now I just have to figure out how to restart my TechShop membership and still have money left for bills....
Tuesday, November 11, 2014
You've Got to Grind, Grind Grind at That Hedgehog...
Back to the Suomi. To recap; it started as a demilitarized kit, meaning all original parts except that someone at the ATF took a plasma torch to the receiver. I ground off all the slag from the bits of receiver, and machined a jig (in the form of a fake bolt with a barrel plug attached to the front) out of aluminium.
Yesterday, I tried the first welds. One of the theaters I work at has one of those cheap wire-feed welders -- the ones that will plug into a household wall outlet. And I made a nice hedgehog.
(In my defense, besides being rusty, and more used to stick welding anyhow, I didn't have good light and could only really see the arc -- which is not quite enough to really see by.)
(This isn't the full hedgehog glory; I've already knocked off some of the slag and given it a quick wire brush.)
And it seemed to work. I was worried about filling the wide gaps, and worried about the aluminium backing. Well, the latter survived without real issue, and I was able to remove the most-stuck piece with a few twists of a wrench. As for the former, it turned out bridging was relatively easy, but I didn't get as much metal down inside the gaps as I would have liked.
Flux-core wire is very slaggy stuff, and you tend to get a lot of voids and inclusions with it. I made it a point of puddling until I could see metal glowing (about the only thing I could see!) and that seems to have floated much of the slag to the surface.
And then to grinding -- mostly Dremel so far, plus a whole bunch of hand file work. I expected this to be the worst bit; the plasma torch cut is right across where the lugs mate with the rest of it. And it turned out I'd made one mistake in alignment; I put the piece in about 1/16th of an inch too close, meaning I had negative headspace.
But it made it all the same. Barrel and shroud sit tight and feel solid. Now to do the other two welds!
(Post title is a reference to one of the songs in "Mary Poppins" -- in fact, my favorite in terms of the orchestration. There's just this lovely little period brass oompah going on under that one.)
Friday, November 7, 2014
The Wheels Come Off
Last night was the performance I feared. All the compromises I'd been forced into depended on certain things being true, and all of those things changed; leaving me in a place where the show sounded like ass and no amount of knob-twisting at FOH could save it.
I'm composing what may be a career-ending email. They think they need a Sound Operator. I think they need a Sound Designer, as well as a Sound Engineer. At the moment, they are blocking me from doing what a Designer does, meaning they are basically getting an Engineer for Operator's wages.
And, well, this gives me a chance to talk a little about roles and functions. Because that may be a little confusing to those new to the industry.
In some very small theaters, it actually does operate like it does in the excellent web comic Q to Q; the designers run their own shows. Well, except for sets -- Set Designers are rarely employed as the running crew for their own show!
For most of the industry, the jobs are broken out. The kinds of skills, experience, and maturity required of a designer are incompatible with the kind of wages, stipend, or volunteer nature required of the person who will be there at every performance.
In that construct, a Sound Designer is the person responsible for creating effects, making mic plots (often the Director and Stage Manager will take a hand in those as well), setting levels. The Sound Operator is the person who sits in the chair in the booth pressing "Go" on QLab, or out in the house at a Front-Of-House (FOH) mixing position mixing the show from night to night.
Both of these titles are show-related positions. Both are hired for the show, by the show; usually contract for the Sound Designer, contract or stipend for the Sound Operator (when they are not a volunteer or a student picking up lab hours). At a well-organized theater, there will often be a Master Electrician as well; this is a salaried employee (or a seasonal hire) who maintains the equipment.
In many cases, the ME is also responsible for lighting, including the schedule and hire of crews for hangs, focus, and strike, and doesn't spend a lot of time cleaning up the sound closet. In almost all cases, the system itself is a contractor install, and was probably designed by a professional audio contractor. The sad reality, as well, is that in many smaller theaters -- and particularly in schools, recreation departments, and churches -- the sound system is designed by people who do corporate sound, not theater sound. The systems do decently at reproducing rock and roll at moderate to high levels, and very poorly at supporting musical theater.
In any case!
As you move up to bigger ponds, the food chain becomes longer as well. Whereas for a drawing-room comedy in a box set a mere "Sound Operator" will suffice for playing back effects (and in most budget-conscious theaters these days, the left hand of the Stage Manager is forced to suffice in this role), to actively balance the vocals on 20-40 wireless microphones the Sound Operator needs a bigger title.
The preference among us working stiffs is to use the same title used by the bigger world of live music: FOH (Front-of-House mixer). And when the number of mics and the budget around them is sufficient, they start getting little fish of their own.
An "A2" is a catch-all audio assistant. (The A1 -- not that you ever hear the term -- is the FOH). The A2 lives backstage and deals with all the things the guy or gal stuck out in the middle of the audience can't. On a big music show the A2 will supervise a whole crew of mic-shifters. But back to the musical; in those, the A2's primary job is their alternate title; Mic Wrangler.
A good Mic Wrangler will hover over every quick-change, making sure the microphone survives the exchange of clothing and wig and the quick dab of fresh make-up. They will get instructions from the FOH or Stage Manager to swap out ailing microphones, tape up dangling wires, and they also have the unhappy task of pulling batteries and wiping the things down after every sweaty show. The really good ones have their ears open all the time, and will prophylactically change out suspect microphones before you even have an issue at the console.
Go a few steps more professional, and the crew increases again. Dialog Mixer (one person mixes the vocals on songs, another mixes them on spoken material). Band Mixer. Monitor Mixer. Backline techs (people moving among the musicians keeping their equipment in working order). In the live music world, mixing band monitors is so important a task it is not just a different person than the FOH, it is often in a different location altogether -- Monitor Beach, often to the side of the stage for easier access.
But that's not theater. Maybe Broadway, but not the theater most of us see. We're basically lucky to get an A2, and good luck trying to crawl into a typical crowded pit to fix a keyboard issue or plug a mic back in.
Thursday, November 6, 2014
So You Want to Learn Sound
...and it looks like a lot of stuff to pick up, and things like my posts (although neither John Huntington nor Dave "The Rat" are exactly comfortable reading in that regard, either) are scaring you.
Don't be.
Like a lot of things, sound is easier than you think to do (and is harder than you think to do well).
It is an Art, a Science, Engineering, and a Craft. Not all of these have the same weight. At the very top, at the most basic, it is about achieving a desired aesthetic effect. In the words of the immortal Duke Ellington, "If it sounds good, it is good."
And, oh, but this is where the part that resembles Science comes in. Because the final measure is your ears. Even if you've done reams of engineering analysis and applied tons of highly technical test equipment, the final tweak is made by nothing more than your ears and your own sensitivities.
But...and here's the big "but": before you can apply your ears in this way, you need to absorb a little piece of the Scientific Method. To wit, to acknowledge Feynman's dictum: "You are the easiest person to fool."
It is this simple; using tools like the double-blind test is what separates an audio professional from the snake-oil salesmen and credulous consumers of certain "audiophile" magazines. All the measurements in the world won't tell you if the system "sounds good," but the best Golden Ears in the industry won't save you if you are only hearing what you want to (or expect to) hear.
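To make the double-blind idea concrete, here is a minimal sketch of an ABX trial in Python. Everything in it is illustrative: the sample names, the trial count, and the listener functions are placeholders, not any standard test rig. The point is only that X is chosen secretly each round, so a listener who can't genuinely hear a difference scores at chance.

```python
import random

def run_abx_trials(option_a, option_b, answer_fn, trials=10):
    """Minimal ABX harness: each trial, X is secretly A or B.
    answer_fn(a, b, x) returns the listener's guess, 'A' or 'B'.
    Returns the number of correct identifications."""
    correct = 0
    for _ in range(trials):
        x_is_a = random.random() < 0.5          # hidden coin flip
        x = option_a if x_is_a else option_b
        guess = answer_fn(option_a, option_b, x)
        if guess == ("A" if x_is_a else "B"):
            correct += 1
    return correct

# A listener who truly cannot tell the difference is just guessing,
# and over many trials lands near 50% -- no matter how golden the ears.
random.seed(0)
score = run_abx_trials("cable_1", "cable_2",
                       lambda a, b, x: random.choice("AB"),
                       trials=1000)
print(score)  # chance-level: roughly half the trials
```

Run enough trials and the statistics do the arguing for you: a real audible difference pulls the score well away from 50%, and wishful hearing doesn't.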
That's why we measure, that's why we test, that's why we calculate. That's why a sound person walks around, moves their head, covers one ear. That's why you turn off the microphones every now and then to figure out what is actually in the sound system and what is actually in the room.
What separates the good sound person from every other person in the room with an opinion -- from the producer down to the guitarist's Significant Other -- is that you know not to trust your own impressions blindly. You've learned to test your assumptions.
This happens at every level, in every part of what you do with sound. Don't assume the power is cut off when you are tying in; apply a chicken stick before you apply bare flesh to a 220 bus. Don't assume the mics are where you placed them last night, or the speakers are still working, or the hiss is the air conditioning.
And don't assume your own particularized aesthetic reaction and baggage of associations hold true for every listener. For some people, the mere sound of a dentist's drill gives them the shakes. For one director I worked with, "Do You Believe in Magic" was the most romantic piece of music they knew. I personally don't listen to a lot of pop, and rarely pay attention to lyrics anyhow, but many people will choose a piece of transition music based on the words. Which of us is right? It depends on the audience, the production, the surrounding material. It has to be a conscious choice, in other words -- not the untested assumption that what you like, others like, and for the same reasons you like it.
And, yes, this is true of all art and true of all theatrical arts in particular. Because a Fine Artist can turn up their nose at the philistines who "don't get their intent" but in theater we have a duty to communicate to the audience. They don't need to hear your aesthetic interpretation of what a train really means -- they need to know the 12:04 is pulling into town with the gunmen on board.
And that's it. That's the most important part.
Listen. Test your assumptions. Test your hearing. Educate yourself. Experience as many different sounds and environments as you can, explore how sound is shaped by the environment, and discover how perception is shaped -- including your own.
Learn both what a gunshot really sounds like, and what an audience expects to hear. Personally move a microphone around and hear the changes this causes in the sound it is picking up. Stick your ears close to a musical instrument and hear the very different sounds from different distances and directions. Listen to a room, then look at what an RTA is showing you about the room. Make changes, rinse, repeat.
Understanding acoustics (and electronics, and power distribution, and rigging...) will come. That's the engineering part. If you are contracting for a $20K sound system install in a new building, you'd better be doing some math. But a lot of the gear you will encounter is designed to be operated by musicians and amateurs. Most of what you encounter will be installed, existing systems (small theaters, clubs, etc.). They may not work optimally, but they will work, and the engineering aspect of your job is largely confined to asking someone else how to turn the thing on.
In practical terms, 98% of this job is a craft. And it is a lousy craft, in that an absurd amount of what you find out you need to know can't be easily derived from first principles. So forget the underlying science, and largely ignore the engineering. It comes down to, more often than not, memorizing long lists of "SM57 works well on snare, is harsh on vocals (unless you are doing metal)" or "If it looks like a Speakon connector but is blue, it is probably a Powercon connector" or "Tip in, ring out, except on Panasonic -- like the old Ramsa board -- where ring is in and tip is out."
Which also sounds like a heck of a lot to digest. And it is, except no one knows everything and everyone continues to learn, constantly. The trick is: learn the gear you have to hand. Choose established, well-parameterized gear when you purchase (there's a reason there are so many '57s out there; people know what they do and understand their peccadilloes). Pretty soon you will collect a little box of frequently-used parts and tools.
They aren't optimized, and they won't always work, but where there's no time to be creative or to engineer you reach into the box. 57 on the snare, check (I have two standard positions I reach for all the time). 100 Hz roll-off and 1.7:1 compression with a 44 ms attack on the wireless mic (my standard starting position in many theaters). Track_04 wind from my second purchased sound effects CD for, well, pretty much everything. (Track_06 is nice for cold drafts, snowy scenes, and the like.)
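The arithmetic behind a "1.7:1 compression" starting point is simple enough to sketch. Below is the static gain computer of a hard-knee downward compressor in Python; the -18 dB threshold is my own invented example (the post names a ratio and attack time, not a threshold), and the attack/release smoothing is omitted entirely.

```python
def compressor_gain_db(level_db, threshold_db=-18.0, ratio=1.7):
    """Static gain computer for a hard-knee downward compressor.
    Below threshold the signal passes unchanged; above it, each dB
    of input produces only 1/ratio dB of output. Returns the gain
    change in dB (zero or negative)."""
    if level_db <= threshold_db:
        return 0.0
    over = level_db - threshold_db          # dB over threshold
    return (over / ratio) - over            # negative: gain reduction

# A peak 10 dB over an -18 dB threshold at 1.7:1 gets pulled
# down by about 4.1 dB -- gentle enough to tame a hot line
# without audibly squashing it.
print(round(compressor_gain_db(-8.0), 2))  # -4.12
```

That gentleness is the point of a mild ratio like 1.7:1 on theatrical vocals: it shaves peaks without the pumping a heavier ratio would add.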
Lastly, and to recap, think like a scientist (always double-checking your assumptions, doing math if you can and where you can, using test instruments if you have them, and otherwise checking to see if what you think is happening is what really is happening).
But act like a musician. You are a performer, whether you are creating canned sound effects or mixing a show from FOH. Your choices are artistic and aesthetic and individual. Mics can be turned on and off from a script, but mixing a show is a performance -- and a good FOH performance adds as much to a show as a good trumpet solo.
Wednesday, November 5, 2014
The Mysterious Doctor Haas
I know, I've said -- and I believe -- that Directors, Producers, even many Musical Directors are unreachable not because they can't understand the underlying acoustics and psychoacoustics, and not even because they haven't realized that there are such underlying principles with very real (and dramatic) effects on the sound of a show, but because they are unable to make the cognitive leap to accepting that this matters; that acoustic realities don't go away just because you are ignorant about them.
If I believed differently, I think I would start by presenting one simple question:
"Have you heard of the Haas Effect?" (More commonly known these days as the Precedence Effect.) "More importantly, have you had the Haas Effect personally demonstrated to you in a controlled listening environment?"
If for some reason the paradigm shifted and they were mentally open to noticing the cognitive gap, this question would provide the wedge.
Alas, that moment never happens. Conversations are always in the moment, about a specific issue, and framed and phrased in a top-down form. "The actors are complaining the side fill speakers aren't on. You need to turn them on."
Any attempt to explain that the issue is actually one of perception of sound sources -- that if you modify the relative volumes or delays in order to bring those speakers into perceptual range, a different set of speakers will perceptually vanish and thus become a potential target for the same complaint (as well as, of course, delivering less of what the actors actually need in an attempt to fix what they think is broken) -- can only be seen, in the frame of those discussions, as a lame attempt to escape responsibility.
No matter what you say, what is heard will be, "I can't or won't fix the speakers because reversing the polarity of the transporter buffers or something...um, actually, because I'm lazy."
(Not an actual example, and the real ones are often less clear-cut. Consider, as a for-instance, the way bass frequencies approach omnidirectionality: the rear of a full-range speaker is not "a place where no sound comes out" but rather a dangerous space full of extremely loud, muffled, time-delayed, low-end sound. So, no, I don't care if your 500-watt keyboard monitor is pointed at the rest of the band. The audience is listening to it as well, and what they hear isn't good.)
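For the curious, the arithmetic behind the Haas/precedence juggling act is short. The sketch below computes the delay you would dial into a near fill speaker so the main cluster's sound still arrives first at the listener; the distances and the ~10 ms precedence offset are hypothetical round numbers for illustration, not a formula from any particular install.

```python
SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def align_delay_ms(main_dist_m, fill_dist_m, haas_offset_ms=10.0):
    """Delay (in ms) to add to a nearer fill speaker so that the
    main cluster's sound arrives first, keeping the perceived
    source localized toward the stage. The offset sits inside the
    precedence window; the exact figure is a design choice."""
    travel_gap_ms = (main_dist_m - fill_dist_m) / SPEED_OF_SOUND * 1000.0
    return max(0.0, travel_gap_ms + haas_offset_ms)

# Listener 20 m from the mains but only 3 m from an under-balcony
# fill: the fill wants roughly (17 / 343) * 1000 + 10 ms of delay.
print(round(align_delay_ms(20.0, 3.0), 1))  # 59.6
```

Which is exactly why "just turn the side fills up" is not a free action: change the levels or delays and you move which arrival wins the precedence race, and some other set of speakers perceptually disappears.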
Musical Paradox
I think I'm one step closer to understanding why there's never enough time to work music, and why I'm broke all the time.
Theater is a strange beast. Even at the regional level, a lot of people aren't making a living at it. They have day jobs. The difference is how it breaks down.
Actors (aside from a few Equity) get paid a stipend, or not at all. The schedule of rehearsals is organized around their need to get from their 9-to-5. What exactly they do during Tech Week, I'm not sure -- other than get no sleep, or use up some sick days.
Designers are in a worse cramp because their time commitment is over a shorter period, but has longer hours. Tech Week for a designer is not 6 pm till midnight, it can be 9 AM till 2 AM. The tendency seems to be for Lighting Designers to have part-time jobs in the business (often they are a part-time Master Electrician at some other theater; another frequent option is working for a rental company.) Flex-time is almost mandatory to make it possible to invest in the morning focus sessions, the daytime production meetings, and the all-night cue-setting sessions.
Set designers need to spend more time in drawing and in consulting with the shop as they build -- set design goes over a longer period. So more often set designers (and costume designers) are theater full-time. But because no show pays a living wage by itself, they are inevitably working more than one show at a time. What saves them is set design operates a lot more like a consultancy than does, say, lighting or Music Direction; the Set Designer doesn't have to be in the room for the entirety of tech.
A similar division exists in the labor; lighting techs (and running crews) work relatively limited evening shifts which are completely compatible with jobs, school, or even other designs. Techs are interchangeable parts; you don't need the same tech focusing the show on Friday who was there last Thursday hanging it. Carpentry, on the other hand, takes place in the day and takes all day; set building tends to be done by freelancers who stick with a production over the 3-6 week build, then move on to the next show (I did this for about ten years).
Musicians, then, become the odd bird. You would think that their calls (even shorter than actors' calls) would permit day jobs. But few musicians spend their working hours away from music. Most are doing music elsewhere -- often as not, teaching classes. This gives them the flex time they need to accommodate show calls, but it also puts them in a mindset where sub'ing is customary.
Since all the income a musician makes is by "service" (aka a set fee to sit down for a fixed number of hours and play) -- teaching classes, especially one-on-one, works out similarly -- they are very much not disposed towards arbitrarily adding hours. Quite the reverse; their financial viability depends on them being able to run from gig to gig and stuff as many into the available slots as they can.
And where does that leave me? Somehow, I seem to be caught with the worst of both worlds when it comes to hours. That comes from wearing two hats; as designer I need to be free to follow any arbitrary extensions to the schedule during tech, and as a performer I need to be there every evening -- and my call starts earlier and runs later than that of the actors, meaning it overlaps the traditional 9-to-5. The only work options that seem possible are flex-time part-time -- and it is best (because of the incredible impact on my load-in schedule caused by the arbitrary changes of others) that this part-time work have little in the way of set deadlines or mandatory Friday meetings.
And this is still not the best economy in which to be looking for a job THAT understanding!
Theater is a strange beast. Even at the regional level, a lot of people aren't making a living at it. They have day jobs. The difference is how it breaks down.
Actors (aside from a few Equity) get paid a stipend, or not at all. The schedule of rehearsals is organized around their need to get from their 9-to-5. What exactly they do during Tech Week, I'm not sure -- other than get no sleep, or use up some sick days.
Designers are in a worse cramp because their time commitment is over a shorter period, but has longer hours. Tech Week for a designer is not 6 pm till midnight, it can be 9 AM till 2 AM. The tendency seems to be for Lighting Designers to have part-time jobs in the business (often they are a part-time Master Electrician at some other theater; another frequent option is working for a rental company.) Flex-time is almost mandatory to make it possible to invest in the morning focus sessions, the daytime production meetings, and the all-night cue-setting sessions.
Set designers need to spend more time in drawing and in consulting with the shop as they build -- set design goes over a longer period. So more often set designers (and costume designers) are theater full-time. But because no show pays a living wage by itself, they are inevitably working more than one show at a time. What saves them is set design operates a lot more like a consultancy than does, say, lighting or Music Direction; the Set Designer doesn't have to be in the room for the entirety of tech.
A similar division exists in the labor; lighting techs (and running crews) work relatively limited evening shifts which are completely compatible with jobs, school, or even other designs. Techs are flexible parts; you don't need the same tech focusing a show that had been there last thursday hanging the show. Carpentry, on the other hand, takes place in the day and takes all day; set building tends to be freelancers who stick with a production over the 3-6 week build then move on to the next show (I did this for about ten years).
Musicians, then, become the odd bird. You would think that their calls (even shorter than actor's calls) would permit day jobs. But few musicians spend their working hours away from music. Most are doing music elsewhere -- often as not, teaching classes. This gives them the flex time they need to accommodate show calls: but it also puts them in a mindset where sub'ing is customary.
Since all the income a musician makes is by "service" (aka a set fee to sit down for a fixed number of hours and play) -- teaching classes, especially one-on-one, works out similarly -- they are very much not disposed towards arbitrarily adding hours. Quite the reverse; their financial viability depends on them being able to run from gig to gig and stuff as many into the available slots as they can.
And where does that leave me? Somehow, I seem to be caught with the worst of both worlds when it comes to hours. That comes from wearing two hats; as designer I need to be free to follow any arbitrary extensions to the schedule during tech, and as a performer I need to be there every evening -- and my call starts earlier and runs later than that of the actors, meaning it overlaps the traditional 9-to-5. The only work options that seem possible are flex-time part-time -- and it is best (because of the incredible impact on my load-in schedule caused by the arbitrary changes of others) that this part-time work have little in the way of set deadlines or mandatory Friday meetings.
And this is still not the best economy to be looking for THAT understanding a job!
Tuesday, November 4, 2014
The Flat Field and the Single-Point
In mic'ing a stage musical, one of the most basic choices is between reinforcement and amplification. The former is the more challenging, especially as spaces get larger. "Oklahoma," for instance, is an old-school musical and aesthetically demands the illusion that you are listening to the people on stage as their voice carries out across the open plains.
To achieve this effect requires clever speaker arrangements, careful adjustment of delay lines, and of course gentle amplification.
The opposite case is a show like "The Wiz" or "Grease"; pop songs sung with vocal techniques characteristic of amplified voices (as well as musical instrumentation with a sound defined by electronics -- keyboards, electric guitars, etc. -- as opposed to the strictly acoustic strings and woodwinds of the classic Broadway pit).
At my current theater our overall aesthetic (imposed at least in part by the quality of the voices we have available, our budgets, and the number of [noisy!] young people in our audiences) is to go in a direction of amplification for most shows. "Sound of Music" is the only show in my recent memory that really strove to create the illusion of un-amplified voice and band -- an illusion that the very young voices of our von Trapp children could not sustain.
At a previous theater, our, well, terrible equipment meant the amplified voice didn't match the natural voice. This made for a sort of uncanny valley you navigated with great care; below a certain volume, the reinforcement backed up the voice, but above that volume it took over and changed the character in an obvious (and often unpleasant) way.
My systems now are a much closer match, allowing me choice of where to go on that line; from imperceptible amplification to the point at which the natural voice is completely masked.
What makes the difference between reinforcement and amplification? In my mind, two elements dominate; volume, and delay. Following more distantly is acoustic placement (whether the vocals appear to come from a sonic space related to the stage, or whether they appear to come from speakers or another clearly defined point) and processing; the more pop material is usually treated with harder compression and a lot more reverb.
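As a hypothetical illustration of what "harder compression" means in practice, here is the static input/output curve of a hard-knee compressor sketched in Python; the threshold and ratio are arbitrary example settings, not values from any actual show.

```python
# A hypothetical illustration of "harder compression": the static
# input/output curve of a hard-knee compressor. Threshold and ratio
# are arbitrary example settings, not values from an actual show.

def compressor_gain_db(input_db, threshold_db=-18.0, ratio=4.0):
    """Output level for a given input level: unity below threshold,
    reduced by the ratio above it."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A "harder" setting (higher ratio) squashes peaks more:
print(compressor_gain_db(-6.0))             # 4:1 -> -15.0 dB out
print(compressor_gain_db(-6.0, ratio=8.0))  # 8:1 -> -16.5 dB out
```

The pop treatment amounts to sliding the threshold down and the ratio up, which is why the amplified material sits so much denser than a lightly reinforced voice.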
But here's where we get into system design. In both, the dominant concern is managing the wavefront. What hits audience ears is a combination of direct sound from actors and musicians, direct sound reflected off scenery and room, amplified sound from various speakers, amplified sound from foldback monitors (aka backline leakage), and amplified sound reflected by the room.
It is of course impossible (unless you are Meyer) to control all of these sources so they line up in the desired temporal order at all seats in the audience. But it is largely by juggling the temporal order that you attempt to manage at least some aesthetic goal out of what is otherwise sonic chaos.
My "flat field" approach for amplified music is actually quite simple. The speakers are time-aligned so energy from them all reaches the audience at roughly the same moment (obviously this relationship is different for each seat, meaning a lot of variance in seats across more than one zone of coverage). There is the smallest possible overall system delay and the system is run hot; overpowering any direct acoustic contribution.
This means the audience member is basically presented with a mix coming from a speaker. Any direct vocal energy from the stage, or backline leakage, arrives later and at a lower level and is psycho-acoustically folded in to the existing room reflections. In simpler language; the actual voice from the actual stage is perceived as part of the reverb tail of the processed sound.
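The time alignment described above can be sketched numerically. Assuming a speed of sound of 343 m/s and made-up speaker distances (these are illustrative, not measurements from any of my rooms), each nearer speaker is held back so its wavefront lands at the reference seat together with the farthest one:

```python
# Sketch of "flat field" time alignment: delay each speaker so its
# wavefront arrives at a reference seat together with the farthest
# speaker's. Distances (metres) are made-up example values.

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def align_delays_ms(distances_m):
    """Return per-speaker delays (ms) that co-align all arrivals
    at the reference position with the farthest speaker."""
    farthest = max(distances_m)
    return [round((farthest - d) / SPEED_OF_SOUND * 1000.0, 2)
            for d in distances_m]

# mains at 10 m, front fill at 6 m, under-balcony delay at 15 m
print(align_delays_ms([10.0, 6.0, 15.0]))
# farthest speaker gets 0 ms; the nearer boxes are held back
```

Of course this only co-aligns the arrivals for one seat, which is exactly the variance-across-coverage-zones problem noted above.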
The obvious drawback is that this requires powering over all other sources. If the drummer is loud (which they often are for pop shows!) I have to power over them; the original sound of the drummer has to reach audience ears as a distant, muffled echo of the heavily processed, close-mic'd sounds of the kit.
This means skirting local noise ordinances, frightening matinee audiences full of little children, and fighting aural fatigue. It also means that the equipment becomes super-critical; lose a wireless mic and you lose that singer. And a musician who is careless with the position of their mic can spoil the entire mix.
By contrast, the reinforcement technique primarily relies on an overall system delay. In this case, you are aiming for the magic corridor of about 10 milliseconds. Acoustic energy that falls within this corridor is perceived as originating spatially from the first source the listener hears. As long as the wavefront from the sound system falls slightly behind the direct acoustic sound of the actor, and the level is moderate (in a perfect world, you can get up to 10 dB OVER the original sound without it being perceptible at all!), the illusion is maintained.
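The arithmetic of that corridor is simple enough to sketch. This hypothetical check (the distances are invented for the example) asks whether the system's wavefront lands behind the direct sound but still inside the roughly 10 ms window:

```python
# Rough check of the precedence corridor described above. The 10 ms
# window and 343 m/s are the only real numbers here; the distances
# and delay are invented for the example.

SPEED_OF_SOUND = 343.0  # m/s

def in_haas_window(actor_dist_m, speaker_dist_m, system_delay_ms,
                   window_ms=10.0):
    """True if the speaker wavefront lands after the direct sound
    but within the precedence window at this seat."""
    direct_ms = actor_dist_m / SPEED_OF_SOUND * 1000.0
    speaker_ms = speaker_dist_m / SPEED_OF_SOUND * 1000.0 + system_delay_ms
    lag_ms = speaker_ms - direct_ms
    return 0.0 < lag_ms <= window_ms

# Actor 12 m from the seat, speaker 9 m away, 12 ms of system delay:
print(in_haas_window(12.0, 9.0, 12.0))  # -> True
```

Note what happens with no electronic delay at all: a speaker closer to the seat than the actor arrives first, and the illusion collapses in the wrong direction.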
The more tricky element is that each source falls off in a different taper; the vocal energy of the singer is semi-directional, the sound of the pit orchestra is nearly omnidirectional and falls off roughly inverse-square, and although each single speaker is even more directional than the voice, the combination of all speakers in the tuned system can "fall off" in any arbitrary manner chosen.
So the tough task is to not just create the correct delay and an appropriate volume, but to maintain this relationship of delay and relative volume across a complex acoustic space. And this is a sensitive balance; if the drummer plays out, or the dancers demand more piano in their monitors, or a singer is tired and marking, the entire relationship can fall apart.
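The differing tapers can be sketched as dB falloff curves. The 6 dB-per-doubling slope is the inverse-square case; the 4.5 dB figure used for the semi-directional voice is purely illustrative, not a measured directivity:

```python
import math

# How the tapers diverge with distance: an omnidirectional source
# loses ~6 dB per doubling of distance (inverse-square), while a more
# directional source falls off more slowly. The 4.5 dB/doubling slope
# for the voice is a made-up illustration, not a measured value.

def spl_at(distance_m, ref_spl_db, ref_dist_m=1.0,
           slope_db_per_doubling=6.0):
    """SPL at a distance, given the level at a reference distance and
    a falloff slope (6 dB/doubling = pure inverse-square)."""
    doublings = math.log2(distance_m / ref_dist_m)
    return ref_spl_db - slope_db_per_doubling * doublings

pit = spl_at(8.0, 90.0)                               # omni pit
voice = spl_at(8.0, 90.0, slope_db_per_doubling=4.5)  # semi-directional
print(round(pit, 1), round(voice, 1))  # voice holds up better at depth
```

Two sources that balance in row A have drifted several dB apart by the back of the house, which is why the relationship has to be maintained across the whole space rather than tuned at one seat.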
And because even something as basic and simple as the Precedence Effect (a psychoacoustic principle that can cause a reinforcement source to become literally imperceptible to all but trained ears) is impossible to explain to most people, the poor sound designer finds themselves completely helpless as those elements above (drummer, piano, marking singers, etc.) are arbitrarily changed by people who can't accept that they just wrecked the sound of the entire show.
Which brings me around to a new approach. It is an approach of basically brute force and ignorance, driven by bogosity, and will only work for some material. I call it the single-point solution, and it is what I am using on "Poppins."
The system is made possible by the fact that the band is in the pit, roughly along the same proscenium line as the main house speakers and the primary set of foldback speakers. So I've started by not even trying to control the band. Then pushing even more band at the stage with the most insanely loud foldback I've ever used. However, because of the physical alignment, this is more-or-less perceived by audience as just more pit leakage.
The mains are run hot with vocals, with system delay aligned to that same proscenium edge. This means that, roughly, all the various sources congeal in time at the proscenium edge. Since most of the amplification of the band is from foldback, it falls off as the natural pit does; in almost pure inverse-square. And since the vocals are shoved into just the mains (and the down-facing front fill) and are backed by stage energy (and vocal leakage from the insane levels of the band's own monitors; yes; vocals are feeding back from the pit before they feed back from the mains) they also come close to the inverse-square -- particularly when an entire ensemble is singing.
Or, rather, the colder I run the vocals, the closer the match will be to the taper from the band and stage noise. The lightly-amplified ensemble fares best; exposed soloists, less well.
With the delay speakers turned off, the wavefront that hits the majority of the audience is pretty much characterized as a large speaker the width of the proscenium, plus reverberation from the room. And at FOH in the back of the room, the reinforced sound and the direct sound (both desired sound like singing and un-desired sounds like stage noise) are in roughly the same volume relationship as that which the majority of the audience hears.
Where this falls down is obvious; the phase relationships are completely shot, and the musical material is smeared across time. Intelligibility suffers, fast arpeggios become sheer mush. The band suffers most here, as the majority of what is heard by audience has bounced off the set walls and returned to them in a nicely time-smeared multi-path.
The energy content across frequencies is even worse; most of the sources are indirect and highs are of course absorbed more readily, meaning the sound is heavy and muddy. And I can't make it up with the usual trick of putting more highs in the mains, because that would pull them out of the unified front.
Where it goes most wrong is that the uncanny valley is back: if I push amplification too hot to try to power over, say, the tap dancing in "Step in Time," the reverberant, time-smeared, highs-attenuated material is pushed aside by direct, dry, full-frequency sound.
From close-mic'd instruments, of course; with the ensemble tap dancing directly over the pit, the only way to get the clarity in the foldback the ensemble requires is to stick the band microphones practically into the bells of the horns. Which means the amplified component of the total mix doesn't match at all what is leaking from the pit, and the band audibly changes sonic character when I have to shove the faders up to cover a scene change or blast my way over the top of tap dancers.
Probably the only thing that saves the mix at this point is that the band themselves vary wildly in tone from number to number; mezzo from the brass and brushes from the drums, then suddenly an ear-tearingly crisp side-stick and a straining forte passage from the brass. So it isn't quite as obvious that their sound also alters dramatically every time the set moves, or a dance starts....
Monday, November 3, 2014
Poppins_03: Building In-Situ
Turns out this was not the best show to do a detailed account of the sound effects process. "Poppins" is an odd beast. The impression I got from the Broadway Cast recording, and the early orchestra rehearsals, turned out to be correct. The Director came to the same impression independently and strongly agreed with the choices we had to make.
Which is, in essentials, that this is a show about silences. About exposed melodic lines, about pauses, about the air between delicate, Mozart-like arpeggios. And it is practically through-composed; there is very little space that doesn't have music in it, and the music is harmonically and rhythmically complex. I watched our Music Director beating out a problematic section for a struggling singer; 4, 2, 4, 2, 4, 2, fermata, 3, 3, 2, 2, 2, 4 rit and long fermata...
This isn't like "A Little Princess." In "Princess," the material was strong rhythms (often expressed through simple ostinato) and long, legato vocal lines. This left a ton of sonic space where continuous background sounds were not only not distracting, but added (I think!) to the total effect.
In "Poppins," the rhythms are practically implied by light, lilting runs. Filling in the spaces harms the feel of the music. And as far as sound effects go, in addition, most of the magic performed by MARY is accompanied. It is in the score. So, all in all, there really aren't a lot of effects.
In addition to this design issue, there was a process issue. The sound effects we did have fell into two categories; transition effects, and spot "sweeteners" for some of the physical effects. For the former, discovering what worked for transitions required the entire acoustic environment to be in place and in timing; the orchestra, the actual set movement, and any additional cover we added (we added BERT to several of the transitions in order to give the audience something to look at while the set was being moved.)
The orchestra is still (several days after opening) changing their make-up and the very books they are playing, in addition to altering which selections they play during transitions, if and when they will vamp, and their dynamics. The set crew is still refining; we didn't finalize which pieces are even on stage until First Preview, and the order in which pieces move is still, as of the close of opening weekend, a bit...fluid.
Fortunately, my choice for transition effects was a new sample set. I'm using the venerable, low-profile shareware VSamp for this, driven by a 2-octave Ozone keyboard I can park right under the laptop to the right of the sound board. I re-assigned the modulation wheel on the keyboard to Expression (CC #11) via MidiPipe, and set Expression sensitivity in the envelope section of VSamp so I can use it for real-time fades and crescendos. This does mean I'm basically making a Vulcan salute with one hand while mixing BERT's vocal and selecting scene presets on the LS9 mixer with the other, but it works.
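What MidiPipe is doing in that chain can be sketched in a few lines: the mod wheel transmits Controller 1, and Expression is Controller 11, so the remap just rewrites the controller number on its way through. Messages are modeled here as bare tuples purely for illustration:

```python
# A minimal stand-in for the MidiPipe remap described above: the mod
# wheel transmits Controller 1, Expression is Controller 11, so the
# pipe rewrites the controller number and passes everything else
# through. MIDI messages are modeled as (status, data1, data2) tuples
# for illustration only.

MOD_WHEEL_CC = 1
EXPRESSION_CC = 11

def remap_cc(message, from_cc=MOD_WHEEL_CC, to_cc=EXPRESSION_CC):
    """Rewrite a control-change message's controller number; leave
    all other messages untouched."""
    status, data1, data2 = message
    is_control_change = (status & 0xF0) == 0xB0
    if is_control_change and data1 == from_cc:
        return (status, to_cc, data2)
    return message

print(remap_cc((0xB0, 1, 64)))    # mod wheel becomes expression
print(remap_cc((0x90, 60, 100)))  # a note-on passes through unchanged
```

The point of routing it to Expression rather than leaving it as modulation is that Expression is what the sampler's envelope section listens to for continuous level control.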
In many places I'm simply sitting tacet and letting the band play. In others I'm forced to play out a little more with my rain and wind and thunder -- in order to cover the distracting noise of the flying system being reset for MARY's next magical entrance!
In the case of the practical effects, they were of course delivered late in the day (among other things having to wait for set building to finish so they could be installed) and the actors are still developing their timing around them. Fortunately the Director understood perfectly that I would be unable to cut these cues until all of that happened. And I'm making some small timing and volume adjustments still.
(Techie stuff; both of these were screen-captured via the QuickTime Player and edited in iMovie. To get the audio stream out of Reaper and into QuickTime, sound was passed through Soundflower. For "Poppins" these are my actual build files -- during the show, of course, rendered stereo tracks are played back from QLab. For "Princess," this is a comp of two mono busses recorded off the board during performance, combined with the effects files in Reaper in order to reconstruct a semblance of the performance experience.)
Thus, this was the abbreviated process:
For underscore/transition effects, auditioned wind and rain and birdsong over the speakers, tweaked EQ a bit and created seamless loops that could be loaded into the sampler. Tried that out during dress and set an overall level and sensitivity for the sampler before finishing the gain-staging.
For practical effects sweetening, waited until I saw the actors actually doing the effect, then constructed brief "story-telling" sequences for each using only my own memory for overall timing. They simply aren't going to match that closely. Fortunately the kitchen sequence can be broken into four individual cues (and another four for the magical restorations), meaning there's at least some vague correspondence between stage action and accompanying sound.
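The seamless loops from the first step above can be sketched as a simple crossfade of a clip's tail into its head, so the wrap-around point has no click. Plain Python lists stand in for the audio buffers here; the actual loops were of course built on the waveforms in an editor:

```python
# One way to build a "seamless loop": crossfade the tail of the clip
# into its head so the wrap-around point has no click. Plain lists of
# floats stand in for audio sample buffers; a real build would do this
# on the actual waveform in an editor or audio library.

def make_seamless_loop(samples, fade_len):
    """Blend the last fade_len samples into the first fade_len samples
    with a linear crossfade, then drop the tail."""
    body = samples[:-fade_len]     # the loop we keep
    tail = samples[-fade_len:]     # material folded into the head
    for i in range(fade_len):
        t = (i + 1) / fade_len     # head weight ramps toward 1
        body[i] = body[i] * t + tail[i] * (1.0 - t)
    return body

# A toy "clip": loud sustain that decays to silence at the end.
print(make_seamless_loop([1.0, 1.0, 1.0, 1.0, 0.0, 0.0], fade_len=2))
```

When the sampler wraps from the last sample back to the first, the head now continues smoothly from where the tail left off instead of jumping.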
Really, though, "Poppins" proved a lot less interesting as an effects show, and a lot more interesting from a reinforcement and system design perspective. Which I may get into...when the dust clears a little.
Saturday, November 1, 2014
Why is sound so hard to explain?
I think the heart of the problem may be that, like many technical tradesmen, you are talking uphill.
Which is to say; if Seiji Ozawa says he needs the violin section to bow up on the fingerboard during the opening of the Third Movement, everyone accepts that he probably knows what he is doing. But if a roadie says he needs to put a microphone in a particular place on the floor, it is a hell of a lot easier for some middle manager to "make a decision" and axe that choice.
As a theatrical sound designer, you are more often than not thought of as another part of the running crew. When a Director, Producer, or even Music Director, or even an Equity cast member talks to you, it is always in the back of their mind that they make a hell of a lot more than you do, and stand a hell of a lot closer to the head honchos of the theater when the boozing and glad-handing is going on.
And that can't help leaking into their thinking; that your "opinions" hold less weight than theirs.
This is why there is nothing on this Earth that will cause any of those managers to suspect their own Dunning-Kruger effect. And it means there is no possible way to reach across the breach to them. They may intellectually accept that there are technical aspects of sound that they don't understand, but it is far too easy for them to rationalize this into handy compartments: you as a sound designer understand "what the knobs do," but things like whether it matters where a microphone is placed, or how loud the keyboardist turns their amp, fall within the domain of things they are perfectly capable of understanding better than you.
I just spent three days of bullshit with multiple people yelling at me to "turn the band up" -- and not one of them thought of actually listening for a moment and realizing the problem was that three quarters of the band was tacet for that section and there was nothing to turn up. The Music Director finally had to explain this to them. Fat chance there will ever be an apology to me, though!
I had these same intelligent ears sitting back by FOH trying to micro-manage my mix. This is a multi-bus mix on multiple speakers, where each major element of the sound environment has a different volume profile over distance (that is, stage noise falls off nearly inverse-square, voices are more directional and fall off slower, and the orchestra is in the pit but boosted in the monitors and front fills, and turned down in the delays.)
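For a sense of why those different falloff profiles matter, here is a minimal sketch (my own illustration, not anything from the console). It assumes simple free-field behavior, where a point source loses about 6 dB per doubling of distance and a more directional reinforced source falls off more slowly, so the balance between the two sources shifts as you walk back through the house:

```python
import math

def spl_at(distance_m, ref_spl_db, ref_distance_m=1.0, falloff_db_per_doubling=6.0):
    """Estimate SPL at a distance, given a reference level and a falloff rate.
    A free-field point source loses ~6 dB per doubling of distance (inverse
    square); directional or reinforced sources can fall off more slowly."""
    doublings = math.log2(distance_m / ref_distance_m)
    return ref_spl_db - falloff_db_per_doubling * doublings

# Stage wash (near inverse-square) vs. a more directional reinforced voice,
# both starting at the same level one meter out. The gap between them grows
# with distance, which is exactly what a mix balanced for one row gets wrong
# for every other row.
for d in (2, 4, 8, 16, 32):
    wash = spl_at(d, 100.0, falloff_db_per_doubling=6.0)
    voice = spl_at(d, 100.0, falloff_db_per_doubling=4.0)
    print(f"{d:>2} m: wash {wash:5.1f} dB, voice {voice:5.1f} dB, gap {voice - wash:4.1f} dB")
```

The numbers are toy values, but the shape of the problem is real: a balance that sounds right at FOH is already wrong at the front row and wrong again at the back wall, which is what the delays and fills exist to correct.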
The only way to stop them from forcing me to ruin the sound for the majority of the audience was to electronically lock out half the Meyer sound system, leaving only the mains still in circuit. That way the front rows are getting blasted, but the difference between the sources is more-or-less smoothed out by the time you get all the way back to the rear of the house.
Oh, yeah. I've tried over the years to come up with a simple example to break through that mistaken paradigm. You wouldn't quibble over the pitch of a note with someone who had perfect pitch. But people seem perfectly comfortable with pointing at one speaker that is part of a finely tuned array and saying "That speaker over there isn't on." (Usually within the context of demanding it be "turned on" because in their untrained opinion it will make the mix sound better.) The idea that sound is a set of skills that includes perceptual skills the untrained person lacks is completely foreign territory.
Why can't they recognize this? Why can they never credit it? Sure, I've wasted time thinking of analogies -- optical illusions, say. Or the way that physics violates common sense all the time. That doesn't make physics wrong, it just makes common sense incomplete. The realms of quantum scale and relativistic velocities are quite outside any common-sense experience.
But I am convinced this is not useful. The problem is not a lack of realization that there are elements in how sound behaves in an environment that go against common expectation, or that sound is all about psycho-acoustics; about perception being fooled by physics. The problem, I am convinced, starts in a more basic place: with the unwillingness to admit that there might be domain knowledge at all.
And worse, that the realities contained within that specialist domain have real-world implications that can not simply be waved away on the basis of "My common sense says it will work, so I'll just order the poor sound guy to do what I think works."
I've had producers ask if putting two mics on the same performer will give better gain before feedback. I've had a keyboard player (a keyboard player!) ask for a cable with USB-B connectors at both ends (so they could plug two slave devices into each other, obviously, which would be perfectly happy without a host involved, right?)
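For the record, the two-mic question has a well-known answer, and it goes the wrong way for the producer: every doubling of open microphones *costs* roughly 3 dB of potential gain before feedback. That is the Number-of-Open-Mics (NOM) term in the standard PAG/NAG estimate. A quick sketch of just that term:

```python
import math

def gbf_penalty_db(num_open_mics):
    """The NOM term from the PAG/NAG formula: potential acoustic gain
    drops by 10*log10(NOM) dB as more microphones are opened, because
    each open mic is another path back into the loudspeakers."""
    return 10.0 * math.log10(num_open_mics)

for mics in (1, 2, 4):
    print(f"{mics} open mic(s): {gbf_penalty_db(mics):.1f} dB less gain before feedback")
```

So two mics on the same performer buys you about 3 dB less headroom before the system rings, not more.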
And just this last show, I had -- not a question, I had an order to double-condom a body-pack transmitter because "it is getting wet and that's why it is feeding back."
Not in the physics of this or any other universe. This is the point where you just back away slowly, because there is no point in even attempting to explain.
Welcome to my world.