Friday, June 27, 2014

Drums of the Gone South

I don't get it.

Once more, I've got a show in which the drummer is too loud. I have to crank the singers to hear them over the drums, and that is not without problems. I have to crank the rest of the band to hear it over the drums. I even have to mic the drums because even though they are too loud they don't mix with the now artificially-enhanced rest of the band without putting them in the same speaker system. And then I crank the singers even more to get them over the band.

And that only solves it for the back of the house, on the quieter numbers. If you are near the front, all you hear is drums. On the louder numbers, all you hear is drums.

And of course I'm already getting the little notes and comments, the "could you turn up the actors some more because we couldn't hear them." As if I wasn't doing everything technologically possible already.

This is what I don't get. Why are they asking me?

I would think this would be an insult to the musician. "We recognize you can't sing on pitch, so we're going to auto-tune your performance." Or how about "The tone on your violin sucks, so we're going to dub in a MIDI-triggered synthesized one instead."

How is this different? How is the drummer the one musician who can just say, "Aw, shucks, I guess I just can't play these things right," and it is everyone else's job to fix it? How can drummers not be insulted by being in this position?

And how the hell did this evolve? Take your typical violin. Evolved over hundreds of years of work to have a tone that would project over the brass. And to a sound that would blend with the rest of the symphony orchestra. No composer ever said, "Well, violas don't blend with anything, and the sound is grating, but I guess I'll just have to use them in my next symphony anyhow."

And take the violinist. Works for years learning how to control their tone. To understand and use dynamics. To sit in a symphony requires that they learn to subsume their individual expression into a sound that blends. Because the section is more important than their individual voice, and the symphony is more important than their ego.

And a lesser version of this holds for most of the instruments of the band, from the sax to the bass; they play with the other members, with an instrument and a tone that blends into a harmonious whole. Very, very few bands are made up of a full set of Highland Pipes and a handful of unhappy dulcimer players who know not a single note they perform will ever be heard above the skirling and the drones.

So how the freaking hell do drummers get a free pass? Why do we consider them untrainable? Where are the years of well-developed options that create a more controlled volume (and a better dynamic range) while maintaining good tone and control? How is it when you go into the drum section at a music store you will find a hundred options in heads, more cymbals than you can shake a stick at, and everything else to make more noise, but only in the very back are a couple of dusty packages of half-assed tape-and-felt solutions towards not deafening the rest of the band?

I refuse to accept that the state of the art is (as it is both in high-end recording studios and on Broadway) sticking the drummer in a concrete room with double-glassed windows and a two-way closed-circuit television monitor.

Because it isn't. You only have to visit a jazz club where pre-fusion small-combo jazz is played to hear that the very same equipment, and techniques drawn from that same performance history, produce a sound that blends perfectly with other un-amplified acoustic instruments like string bass and piano (which aren't soft per se, but are a heck of a lot softer than steel strings going through a half-stack of Marshalls).

My personal feeling is that a lot of what drummers say about how their tone will suffer if they try to play softly is, at best, a half-truth. Sure, they are the victims of a self-fulfilling manufacturing cycle; they play loud, so they need drums that sound good when played loud, and as a result most drums are optimized so they sound best when...surprise...they are loud.

But the simpler truth is they are kids pounding on pots and pans getting off on the sheer noise. Their joy in playing is in the thumping sensation in their chest, and the bleeding sensation in their ears. Soft playing doesn't satisfy them on this primitive emotional level.

So they make the rest of the band, and an entire audience of paying customers, suffer for their personal satisfaction. Rather than subsume their egos to the needs of the production, they are the most narcissistic, selfish creatures in the building.

And this is why they continue to be uncontrolled. Because they have too many easy rationalizations for their own self-serving behavior. They won't listen to criticism. Outsiders (like the poor FOH mixer) can't talk to them at all. But even music directors quickly realize all they will get is sulks and half-hearted attempts at control that are quickly forgotten, and they too give up on trying to control their errant musician.

(Plus, so many drummers have never attempted to play at controlled dynamics, they lack the techniques to play accurately at lesser dynamics. It is like the problem a wind player has reaching the upper octaves without putting sufficient pressure behind it, but in this case it can be done -- it is just so different from what they are practiced at that there isn't enough time in rehearsal for them to learn.)

And thus the behavior is perpetuated. It becomes pervasive. Worse -- it becomes an established fact, an unspoken elephant in the room, that everyone tiptoes around. They can't even see the elephant any more, they are so used to its presence.

So they blame the sound guy.

But Soft, What Ware Through Yonder Window...

I come today to sing praises to a few bits of useful software. At least, useful to a Mac-oriented, very much budget, sound designer and technician.

Wireless Workbench As of version 6, this has become fabulously useful software regardless of what brand of microphone you use. No longer do you have to use the nearest approximate Shure range; it has the profiles of every major line already programmed in. It has been redesigned in other ways, too; it is now vastly easier to fix one bad frequency, or generate a couple new compatible frequencies within an existing line.

If there is a downside, it is that it supports systems I can only dream about almost too well; much of the software is oriented around automated scanning, remote updating and monitoring, and so forth. And that gives parts of the software a steeper learning curve than they might otherwise have.

Oh, yeah; what is it? It is free software from Shure that lets you coordinate frequencies of all of your wireless microphones, including mapping around interference sources, and checking for potential interference between microphones (the dreaded intermods).
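For the curious, the core of the compatibility math Wireless Workbench is automating can be sketched in a few lines. This is a toy illustration only: the third-order products (2f1 − f2) are the real formula behind "intermods," but the guard band and carrier frequencies below are invented, and real coordination checks many more product orders than this.

```python
# Toy sketch of a third-order intermod check, the math Wireless
# Workbench automates. Frequencies are in MHz; the 0.3 MHz guard
# band and the carrier list are invented for illustration.
from itertools import permutations

GUARD = 0.3  # MHz; hypothetical minimum spacing from any hazard

def third_order_products(freqs):
    """2*f1 - f2 for every ordered pair of carriers."""
    return {2 * f1 - f2 for f1, f2 in permutations(freqs, 2)}

def is_compatible(candidate, freqs):
    """True if the candidate stays clear of every existing carrier
    and every third-order product of the existing carriers."""
    hazards = set(freqs) | third_order_products(freqs)
    return all(abs(candidate - h) >= GUARD for h in hazards)

mics = [518.0, 520.1]
print(is_compatible(522.2, mics))  # False: lands right on 2*520.1 - 518.0
print(is_compatible(523.0, mics))  # True: clear of carriers and products
```

The point of the exercise: a transmitter that looks "far enough" from its neighbors can still be clobbered by a ghost frequency the neighbors generate between themselves.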

Reaper With a price so low as to be practically free, and an incredibly open, transparent, and manipulable framework, this is now the preferred sequencer/DAW for creating complex sound effects. The downside to Reaper is that it is so new in how it thinks, the learning curve is -- again -- rather steep.

Like Photoshop, there are usually several different ways of doing things. It is also optimized towards being very, very fast to work in, and that can be a downside if you can't hold all the gestural commands and contextual options in your head, or lose concentration. It is rather like driving a sports car, in that it is so fast and responsive it can also get away from you if you don't pay attention.

There are still ways it has of organizing things that I quibble with, but I quibble silently; because it has been my experience that there is probably an alternative to each method already in the program and I just have to keep searching for it.

Audacity Open-source, extensible, freeware. Seeing a trend yet? You can create a whole cue in Audacity, but where it shines for me is in quickly trimming and normalizing samples, also in clean import and export in the variety of formats sometimes necessary (including but in no way limited to exporting mp3's with a fine degree of control over the encoding, and multi-track WAV files that can be used for massive multitrack effects in QLab.)

Audacity ships with so many tools it is easy to get lost in them. I've lately been using the noise reduction tool. One issue I have with Audacity, and why so much of my real sound manipulation is being done in Reaper, is that Audacity comes out of a more command-line approach. You punch in numbers and run an effect off-line. I prefer to apply an effect in real time and play with the knobs while I listen.

QLab Okay, I'm a bit on the fence here. It used to be an extremely affordable, low-profile program that did one thing very well. Now it is adding bells and whistles to justify a price that is climbing all the time. It is still the most efficient and flexible multi-track multiple-polyphony playback engine out there, capable of running a complex linear sequence as well as presenting non-linear material (and integrating in various ways with other equipment.) But for the latter, Max/MSP is vastly more flexible, and costs no more.

Monday, June 23, 2014

From Beijing Opera to Fart Noises

These past few weeks have been a trip from, well, not exactly the sublime, and not to exactly the ridiculous, but certainly one of extremes in sound design.

To me, every show is different. Sometimes that difference is best expressed as a choice of palette. Other times it seems to reach deeper into a philosophy of design.

I did a children's production of "You're a Good Man, Charlie Brown" a while back, and in keeping with the high concept of the production -- that all the action on stage was an elaborate make-believe being played out by children on a playground -- many of the key sound effects were sourced from children's voices. I had a very amusing VO session where I asked our young cast to improvise airplanes and machine guns and AA for the Snoopy vs. the Red Baron number that opens Act II.

For a recent production of "The Drowsy Chaperone" my philosophical take was that the sounds of the play-within-the-play were obviously pre-recorded effects, but everything that belonged in the world of The Man in the Chair was ultra-realistic -- as if we were actually inside his New York apartment. This was why I rang a real telephone bell, and also attempted to use the actual speakers of the vacuum-tube phonograph prop.

Some of these choices are dictated by the material. It is traditional, for instance, to use a recording of Mel Brooks himself at one place in "The Producers." Many effects -- particularly for more "stagey" or meta-theatrical works -- may be implicitly written into the musical score, with the expectation that they are to be performed by the pit orchestra.

So, Disney's "Mulan Jr." This is another of the youth-theater friendly Disney musicals. They are a mixed blessing; on the positive, you get the name recognition of the originating Disney property, and you get a script which is designed to make maximum use of the large casts typical of children's theater, providing enough interesting roles that no-one need feel left out. The musicals are very professionally put together, with detailed staging notes, lots of neat little side notes, and other material of interest to the young actor (or the inexperienced theater company).

On the flip side, they are also in at least some part labors of love by people who really, really know musical theater. So they contain theater in-jokes, and are also filled with musically challenging material. They inevitably suffer from the curse of Meredith Willson (that is, at least one song in which several different people are simultaneously singing different musical lines).

They are also expansions of source movies that generally had many fewer songs. A number that drops again with licensing issues (or perhaps just performance practicalities) for some of the pop music that accompanied the original. And, as in all musicals, the added songs are equally hit and miss.

Mulan of course already walks the difficult line of cultural appropriation. The team that put the movie together went to China and collaborated with musicians there. The resulting score is a subtle blend -- avoiding the ching-chong of "Flower Drum Song" and hinting of traditional Chinese melodies and song structure even as it is a thoroughly Western score. As originally scored, there are parts for a small number of traditional Chinese instruments to pad out the rather large orchestral needs.

Well, sort of. There actually isn't a score per se. When you license the script (from MTI) you get a rehearsal piano score and pre-recorded backing tracks. It was not expected that you would attempt live accompaniment for your production of the musical.

Well, in any case, our original production concept was to consciously use some of the staging and vocabulary of the Chinese Opera. Minimalist scenery and props, color cues from the Chinese Opera, masks, a dragon dance costume for the character MUSHU, etc.

And it seemed to me that this concept translated into seeing the sound effects as being created as they would be for the Beijing Opera stage; instrumentally. The script itself claims that effects are produced by the visible actors.  In practice, however, this was also as much miss as it was hit.

My experience was a sometimes-painful collaboration with the music director. He had the chore of getting through the wealth of material with limited pit resources. The best choice was piano, bass and drums, and he was busy enough just writing out parts for the other instruments out of the vocal rehearsal score.

We messed around with adding synthesizer keyboard, or additional (mostly improvised) percussion elements. We also flirted with the idea of pre-recording some elements of the scene (not necessarily anything that was already in the written score). The above excerpt is the only survivor of those experiments.

In the final production, the score was realized by the pit, but the Opera elements were brought in with selected sound effects, some character accents, and the introductory percussion (I stripped out all the pitched elements).

The "wind" and "avalanche" cues heard above were also used during the big action scene. When the actual "armies" appear in the play, the cast was pounding on the floor with bamboo sticks. I went and recorded those same sticks and they are the prominent percussive element in that cue.

To break down who did what took several days of crawling through script, vocal score, Disney backing tracks, director's notes, rehearsal notes and recordings, and a full orchestration we managed to find online (in Finale format, but Wine wins again; I was able to run a free Finale reader written for Windows).

In the end, it was a lot of work, the blend between the worlds was not as smooth as it could be, and we fell far short of the kind of intense concentration on that staging conceit that is used so effectively (although from a different cultural tradition) in Sondheim's "Pacific Overtures." But I think in the end the use of a few gongs here and there, and the concentration on percussive elements for sound effects instead of realistic samples, gave the show a distinct and interesting flavor.

A half-dozen shows later and I'm doing "Shrek." The high concept for this production was pop-up book. In my mind, a core idea was that of escaping the dominant narrative; that SHREK, FIONA, and the other fairyland creatures struggle against the expectations of others. "You mean the stories that say I'm a 'wicked' witch?" asks one. And against their own internalization of these narratives as well; FIONA's "This isn't the way it is supposed to be!" and SHREK's own "I guess I'd be a hero..." Which is beautifully realized in SHREK's song at a key moment; "An ogre and a princess, it's complicated. You've never read a tale like this. But fairy tales...should be updated."

But this didn't help me much for sound. I was left with more low concept than high concept. Because the set realized the "pop-up book" idea with lots of things that unfolded and turned, and changes were made in partial light, I chose to accent the scene movements. My original desire was to put paper elements in it, as if it was book pages or pop-up illustrations in motion, but the final versions were various sorts of wood and stone sounds.

The other idea I went in with is that SHREK is an ogre and as the play says, "Ogres like nasty." So I wanted plopping and squishing and other disgusting sounds in there.

And, after all, there is the fart song. It is basically a retread of "Anything You Can Do" from "Annie Get Your Gun," only with farting and belching as the contest. I had the delightful experience of putting every fart and belch from my sample libraries into one long file, and adjusting EQ and pitch and normalizing until they all sounded like they existed in the same sonic space. Then I assigned them across the keys of a MIDI keyboard using the low-profile shareware sampler VSamp Pro.
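That batch chore lends itself to a little scripting. As a sketch (the file names, measured peaks, and 0.95 target are all invented, and VSamp Pro's actual mapping format is not shown), here is the make-up gain math for bringing each clip to a common peak, plus a simple layout across consecutive MIDI notes:

```python
# Hypothetical sketch: compute the make-up gain that brings each
# clip to a common peak, then lay the clips out across consecutive
# MIDI notes. Peaks and file names are invented; a real script
# would read them out of the WAV files themselves.
import math

def normalize_gain_db(peak, target=0.95):
    """Gain in dB needed to bring a clip's measured peak (0..1) to target."""
    return 20 * math.log10(target / peak)

clips = {"belch_03.wav": 0.88, "fart_01.wav": 0.42}  # name -> measured peak

keymap = {}
for note, (name, peak) in enumerate(sorted(clips.items()), start=48):  # from C3
    keymap[note] = (name, round(normalize_gain_db(peak), 1))

print(keymap)  # {48: ('belch_03.wav', 0.7), 49: ('fart_01.wav', 7.1)}
```

Matching peaks is only half of "the same sonic space," of course; the EQ and pitch matching still gets done by ear.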

To reduce the clutter in the pit, I'm actually running the MIDI from a 24-key mini-keyboard through a stage snake back to my own computer, where the sampler player is resident. This also allows me to assign the bodily noises to a dedicated channel on the FOH mixer, so they can be dialed in to the appropriate level with whatever vocal energy the cast has that day.

The most difficult cues were the dragon cues. Partly because the realization of the dragon in scenery and props and costuming happened so very late in the technical process, meaning I didn't have anything to guide me by. But largely because it just took so much work to come up with the vocabulary for the dragon.

The above is my third attempt to create the wings effect. There would have been more than three attempts, but we had already opened.

The basic wing sound comes from a foley session I did way back on "Peter Pan." I set up a nice mic in a quiet room and whipped a sheet around in front of it. Unfortunately the room wasn't as quiet as it could be, and I had to apply excessive levels of noise reduction to the result.

(As a sideline, the abandon-ware SoundSoup has resurfaced with a new company, and I spent the money to upgrade it to compatibility with my new computer. And I haven't used it yet; the noise reduction available in the free software Audacity has suited me just fine so far.)

I combined clips of the sheet noise with some generic "whooshes" from the library. I tried adding a snip from a fluttering wing to give a little more character, but in my mind dragon wings shouldn't have feathers. I also tried resonance filters and even a ring modulator to get a little more lizard skin feel in there, but none of that quite worked and I took it out again after playing with it for several hours.

Of course there's a whole bunch of pitch-shifting and time-stretching going on, as well as a general down-sample, and in the final cue, subharmonic synthesis, false stereo, and a slathering of reverb.

By the time I was ready for this final take we were rehearsing with orchestra and in time, so I could take a reference track. The onboard microphone of the laptop was good enough for that. I assigned up-stroke and down-stroke to a sampler, and "performed" the wings into a new track in Reaper.

In previous attempts I'd added a little pitch bending for Doppler Shift -- which was a lot easier once I discovered Reaper's Pitch Envelopes. Unfortunately the pitch algo is not as flexible as the one I'd been using for this kind of effect previously, and the artifacts were simply too objectionable.
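For reference, the size of pitch bend a Doppler pass actually calls for is small and easy to estimate. The 20 m/s flyby speed below is an invented example, not a measurement from the show:

```python
# Back-of-envelope Doppler math for a flyby pitch envelope: a source
# with radial velocity v toward the listener is heard at c/(c - v)
# times its true frequency. The 20 m/s "dragon" speed is illustrative.
import math

C = 343.0  # speed of sound in air, m/s

def doppler_semitones(v_radial):
    """Pitch offset in semitones for a source whose radial velocity
    toward the listener is v_radial (negative = moving away)."""
    ratio = C / (C - v_radial)
    return 12 * math.log2(ratio)

v = 20.0
print(round(doppler_semitones(+v), 2))   # approaching: about +1.04 semitones
print(round(doppler_semitones(-v), 2))   # receding: about -0.98 semitones
```

So the whole flyby sweep is about two semitones, which is why the bend only needs to be "a little" -- and also why artifacts from a clumsy pitch algorithm are more audible than the effect itself.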

Instead of doppler (and volume envelope) I combined the wings track with a "by" of a hang glider from the BBC sound effects series. That and a little bit of pitch-shifted rattlesnake -- purchased for the show at SoundDogs.

These elements gave me most of a vocabulary to build the other dragon actions; tail lash, head appearance, and a big chomp on FARQUAAD...but there wasn't enough time left before opening night to properly employ them.

To add to the complexity of the wings sequence, the director needed them to pass into the rear surround speakers we hung for this show. I had originally baked this into the cue, but vagaries in the performance timing made it necessary to split it up into manual fade cues in QLab. So the first "Go" in QLab starts the wings, the second one pans them into the rear speakers, the third returns them to the stage. And with my free hand, I am also attempting to throw some of the DRAGON's vocals into the rear speakers to follow the wings there. Which has not so far worked as well as I'd hoped but I have seven more weeks of performance to get it right...
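Underneath, a front-to-rear move like this is a crossfade between two speaker sends, and the usual way to keep the apparent loudness steady mid-move is a constant-power pan law. The sketch below is the generic math, not QLab's internal fade implementation:

```python
# Constant-power crossfade between a front and a rear speaker send:
# gains ride a quarter circle, so front^2 + rear^2 (the total power)
# stays at 1 throughout the move.
import math

def equal_power_pan(t):
    """t = 0.0 (all front) .. 1.0 (all rear); returns (front, rear) gains."""
    theta = t * math.pi / 2
    return math.cos(theta), math.sin(theta)

for t in (0.0, 0.5, 1.0):
    f, r = equal_power_pan(t)
    print(round(f, 3), round(r, 3), round(f * f + r * r, 3))  # power stays 1.0
```

A plain linear crossfade would dip about 3 dB in the middle of the move, which is exactly when the audience is supposed to believe the wings are passing overhead.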

Sunday, June 22, 2014

Seven Minute Lull

You know the gag. Someone is at a crowded party, or a construction site, or at a concert, and trying to impart a private bit of information in an increasingly loud stage whisper. Inevitably, the surrounding volume suddenly drops -- right at the moment where it will cause the most embarrassment.

Well, sound effects fall prey to this.

The dynamics of a musical, particularly, mean that playing an effect one second later (or one second sooner) will mean it is either completely lost, or inappropriately loud. Often enough the same effect can be both...if it lasts long enough, that is.

Sound effects are so exquisitely sensitive to context anyhow. Played in rehearsal, an effect may sound brittle, over-bright, and far too loud to be realistic. Played in the first dress with full orchestra, the same sound is suddenly muffled, and way too soft to be believable.

Of course neither conductors nor directors have any grasp of this. The conductor will randomly add a new keyboard part or a drum riff right in the middle of your cue. Meanwhile, the director will insist on setting levels...and everyone who isn't a sound designer will go around thinking that it is possible to set one single level and it will just work, all the time.

A live band can flex. So can an actor. Their performance dynamics alter as the audience noise, the acoustics of the space, and so forth change around them. Pre-recorded sound effects don't flex the same way. Their volume does not depend on audience energy or room temperature but is set by electronics.

The only save a canny sound designer can make is to put a real-time fader on background sound effects. That way, they can be dialed in from moment-to-moment with the same front-of-house mixing that is applied to singers and orchestra. But even that won't save spot effects; one can't react fast enough. So from day to day, performance to performance, and within a single performance even, the same effect will be at the wrong volume as often as it is the right volume.

This is particularly an issue around certain climactic effects. Sometimes you have an effect, like a gunshot or a guillotine, that is in the middle of bombastic music or at a moment the ensemble is reacting to. Their screams (or the orchestra's kettle drums and keyboard brass) will cover up your sound unless it is played 20 or more dB higher than is otherwise called for.
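Because dB is a logarithmic scale, "20 dB higher" is more drastic than it sounds. A quick back-of-envelope conversion:

```python
# What "+20 dB" means in raw terms: dB is logarithmic, so 20 dB is a
# tenfold amplitude (hundredfold power) increase -- which is why an
# effect tuned to punch through a climax is deafening in near-silence.
def db_to_amplitude(db):
    return 10 ** (db / 20)

def db_to_power(db):
    return 10 ** (db / 10)

print(db_to_amplitude(20))  # 10.0  (ten times the amplitude)
print(db_to_power(20))      # 100.0 (a hundred times the power)
```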

And, of course, it is inevitable that the ensemble will decide to scream at a slightly different moment, or the band will hit the big drum flourish at a slightly different moment. Less than a second off is all it takes...and what was the appropriate gunshot becomes a ludicrously, painfully loud, bad-sounding effect that will make everyone turn in their seats as the poor designer shrinks in embarrassment.

Monday, June 16, 2014

Move Over PDQ

Stumbled across this file again and thought I'd throw some comments at it:

Did this about ten years ago. I forget the details, but I created it for a total conversion plug-in for someone at the old Escape Velocity forums. He specifically requested the opening chords (which are from the scene on Jabba's barge), and I think wanted to reference the original Escape Velocity loading screen music; the iconic "Mars, Bringer of War" from Holst's "The Planets" suite. The latter imposed the 7/8 meter of most of the piece.

I did it on rack-mount synths, mostly Roland "Orchestra Module" and "String Module," the latter of which had the unusual col legno technique as a patch (the strings are struck with the back of the bow). The polyphony offered by these racks allowed me to write implicitly for first violin, second violin, viola, 'cellos, and double basses, as well as trumpet, trombone, and french horn sections, with addition of solo horn, flute, clarinet, piccolo, as well as vibraphone and of course orchestral percussion. There are actually very few sustained string notes; even the upper register "pads" are usually manually-played fingered tremolo.

Oh, yes; and it steals shamelessly from everything.

The opening and closing are direct John Williams references. The vibraphone and flutter-tongue flute is a John Barry reference. As I said, the 7/8 is from Gustav Holst.

The extended col legno from the string section, particularly the skittering glissando, is a reference to Jerry Goldsmith's skittering strings in "Alien," and the percussion-heavy re-entrance of the main theme was meant to take after the same composer's Klingon battle music from "Star Trek: The Motion Picture." I think I caught the flute/piccolo tremolo on top of heavy brass chords from him, too.

I'd been reading about doing brass in tight fourths and I think some of that is in the main theme. Or maybe not; my grasp (and application) of theory is, shall we say, weak.

Of course the transition to big open chords with the re-meter to 4/4 also has those old chestnuts -- I was particularly thinking "Great Gate of Kiev" from the Ravel orchestration of Mussorgsky's "Pictures at an Exhibition," but the late introduction of a chorus as well as the apparently inevitable church bells/tubular bells goes back to at least, oh, 1812.

I can't say I write all that much better now. All I can say with confidence is that I write less often.

Friday, June 13, 2014


Over breakfast (and over the couple days of a rather vile little bug) I've been watching video reviews of some of the more well-known video game series. Fallout, for instance. And of course Mass Effect.

I admire the cleverness in working in a vaguely in-universe reason for you to be able to customize your character. (Puts me in mind of a long-forgotten Infocom game that determined your avatar's gender by which bathroom door you went in). In Mass Effect, the conceit is a battle-damaged computer and you are "helping it" rebuild the personnel files on Commander Shepard. In Fallout 3, the mechanism is played for hilarity with (among other things) the ridiculous "aptitude test" which may or may not sink points into various skill slots (unless you choose to talk to the teacher afterwards, in which case you are allowed to edit the results).

I'm familiar with the discussion. The idea that being able to customize the character makes it easier for you to identify with them. Funny thing, though, this doesn't seem to be a problem in literature (and the odd experiment aside, it is generally accepted that the reader identifies strongly with third person, and self-identifies with first person if the first-person narration is not too obvious.)

A better argument is that because the default character seems so often to be a white male, being able to customize is a way of making the game experience more inviting to those who would not choose to describe themselves in that way.

One problem is that the choice is very much cosmetic. There are some dialog changes that happen if you are female instead of male, but the universe doesn't seem to care if you have a beard or not. And, well, I can't speak to the kind of society the universe of Mass Effect may have, and perhaps they truly are color-blind. And I haven't played it enough to know if there are character generation choices beyond "surliness" that also show up in the dialog tree. If for instance being a "lone survivor" triggers different reactions among people than being a "war hero."

I'd be surprised if these changes went further than a few dialog tracks, however. Which is enough work already; the word is "tree" because each choice multiplies. If it mattered that you were a bearded spacer versus a smooth-faced female war hero, the voice team would need to record, process, and cut animation to eight different versions of every line of dialog. Having one mission offered only to Shepards who were bald, though, would mean months of dev team building levels that most players would never see. And that's not real likely.
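The multiplication is worth making concrete. These trait lists are invented for illustration (they are not Mass Effect's actual character-creation options), but the arithmetic is the point:

```python
# Why dialog "trees" explode: independent character traits multiply
# the number of variants to record. Trait lists are invented examples.
from itertools import product
from math import prod

traits = {
    "gender":     ["male", "female"],
    "background": ["spacer", "war hero"],
    "face":       ["bearded", "smooth"],
}

variants = list(product(*traits.values()))
print(len(variants))  # 2 * 2 * 2 = 8 recorded takes per line of dialog
assert len(variants) == prod(len(v) for v in traits.values())
```

Add a fourth binary trait and every line of dialog needs sixteen takes; the cost grows exponentially while each individual player only ever hears one branch.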

So I question the utility. The situations, and the reactions, that are part of the story of Half-Life happen because Freeman is a young physicist with an interest in physical fitness. That mix puts him at the Resonance Cascade in an HEV suit, and makes believable the interest in him from Black Mesa East twenty years later, and his ability to survive everything the Combine throws at him, and that shared history ties him to Eli and Alyx and even Magnusson. You would not get this (relative) richness without the specific flavorings of that character.

Well, maybe. I could say this is even more the case for, say, Bioshock -- you have to be one specific individual, because that's a key plot point. But in all honesty I can imagine how the presented dialog and NPC interactions and back-story would work if Freeman was a scrawny Jewish undergrad instead of his buff goyim post-doc self.

And that's kind of what those of us who are out of the assumed norm do anyway; create head canon for our favorite characters in which their internal life is rather more (and often different) than what is presented on the page.

Be that as it may, it doesn't seem to have turned off any gamers to find they are the (female) Chell of the Portal series. And they don't even have the excuse of the Tomb Raider series, where "being" Lara is possibly of secondary import to being behind Lara for most of the game. Chell is by contrast not seen (unless you make an effort to form portals that allow you to glimpse yourself).

Also worth noting that Chell's gender isn't important to the game. The only place where gender roles even possibly come in is when GLaDOS tries out a few insults based on stereotypical concerns; Chell's weight and (possible lack of) fashion sense. But even GLaDOS gives that up in boredom after a few funny jabs ("Look at you, soaring through the sky like an eagle...piloting a blimp.") Otherwise, her gender is a refreshing non-issue.

So perhaps what we need is not the option for a Femshep, or different beard colors, but fully-defined characters who aren't always white males. Characters who are integrated into a universe and have relationships with other characters in ways that reflect their backgrounds. Characters who offer the experience of being female in a traditionally male environment -- or female in an accepting multi-gender environment. Or a person of a unique culture (real or imaginary) which is experienced from inside during the course of the game.

The usual argument is that most gamers are white middle class guys. Which is true, but only by a few percentage points. And when this is pointed out, the next argument is that most gamers are perfectly happy with an avatar who is also either a white middle class guy, or someone said guy would feel comfortable hanging with. The usual "people keep buying the games, so why should we change anything?"

And the committee-driven, conglomerate-owned, expensive-investment world of major gaming properties is probably going to be stuck in this rut for some time yet. If we want to see more games in which not just the avatar character, but the very gaming experience, offers something other than, "You are Colonel Joe Smith of the Space Marines, now go out and kill some purple guys with spikes on their heads," it is going to come from the independent games.

Thursday, June 12, 2014

Brand Loyalty

Otherwise known as "They changed it, now it sucks."

You don't have to drill down very far to see that most of the products at the supermarket are the same thing in different-colored wrappers. There's a range here; aspirin is a regulated substance and to within error bars, every tablet is identical. You are paying for confidence in their quality control, at best. Soft drinks, although they are generically water and sugar and a dash of artificial flavoring, at least allow picking a pleasing flavor as a valid choice.

If you are like me, you gave the red bottle a try, and the green bottle, and decided that the blue bottle was "just right." So now each time you shop, unless there is an intriguing novelty being offered or some other reason to explore, you can just reach out for the same bottle each time.

Manufacturers know this. They stamp not just the label but color and pattern choices, bottle shapes, and a range of other cues that will tempt you into brand loyalty. They reinforce it, making sure that when you buy Brand Z milk you know it is the same Brand Z as your butter, which tempts you into trying the Brand Z yoghurt.

Or they should know this. But what happens in reality?

As soon as you've settled on a product that meets your needs...they change it. Perhaps they change the bottle to something that they hope will entice the wandering eye of new customers. Unfortunately, they changed it so much you can no longer recognize it, and you end up trying something different.

Or -- more often -- they change the product. Because having achieved product satisfaction and brand loyalty, what would be more natural than to get rid of everything that had led you to that choice in the first place?

Add to this the non-Moore's Law of most mass manufacturing: the pressure is to cut costs wherever possible, so thinner thread, less metal, cheaper materials, etc., etc., etc. will always appear as a win to the paper-pushers.

The game actually appears to be:

1) Entice you into trying something via advertising, novelty, or just bright flashy colors.

2) Make it just good enough you are confirmed in that choice.

3) Each fiscal cycle, make it a little crappier, counting on customer inertia and diffuse brand loyalty to keep the customers purchasing.

4) When you've reached the total crap stage, dump the product and make something else -- preferably even cheaper, since by this point the only customers you are trying to please are those who already got burned by a competitor who did the same damned thing to theirs and are desperate for anything different.

And, yes: this applies not just to processed food-like products, but to electronic components, software, clothing, cars...

(I'm not the only person I know who is hoarding a few things -- like Pilot Precise rolling-ball pens -- against the day when the manufacturer decides it makes more financial sense to entice idiots into trying something "new" that can be made even cheaper than to continue the boring course of offering a product that actually works.)

Friday, June 6, 2014

The OTHER Kind of Theater Person

I was just reading the rehearsal notes from last night and, yeah, sometimes it bugs me that I don't know all these theater references (and pop culture references). I've never seen a show on Broadway (or the West End) -- heck, I've only seen a show at the Curran once or twice. I don't follow the Oscars, or the careers of famous actors or designers. I don't have the slightest idea what is being written today, unless I happen to get hired to work it.

And sure, some of that is just not being that interested. But it is also that there are only so many career-related hours in the day. To work technical theater, I have to spend at least an hour every day studying the ever-evolving technology map, plus constantly improving my understanding of acoustics, psycho-acoustics, music theory, electronics, information theory, etc., etc.

But the divide is of course more basic. You could see it back in high school; the techies would be the small group in the back of the cast party, still wearing crew blacks and not talking to anyone else.

Actors are generally in for a longer haul but with a smaller time commitment per interval. This means they need to be networking through the rehearsal process (and can often take on multiple simultaneous commitments). Techs and designers are on for a shorter haul with longer days. Every hour spent in tech rehearsal is a second hour of work that needs to be done before opening. For me, at least, that means I don't network during a job, or do anything other than work and sleep (and not a lot of the latter, either).

Which is a good fit to what seems the archetypal character; actors are social, gregarious, popular -- and they know the mainstream culture well. Techies are loners, outcasts, and geeks -- and what they know is technology.

The Venn diagram has an intersection, but between the norms of each group there is no meeting of minds.