Friday, June 28, 2013

(Onslow) Oh, Nice! (/Onslow)

I got the prototype DuckNode working in time for the meeting.  Attached an accelerometer (and an LED on the RSSI pin) to an XBee, and stuck those on top of a AAA battery holder.  Using Rob Faludi's XBee API library for Processing, I was able to throw together a sketch that read off the transmitted values and triggered a MIDI note event when the right motion was detected.

Stitched up a simple wrist strap, and I was able to fling my hand out and trigger a sound effect that way.
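(For the curious: the detection is nothing clever.  Watch for a big jump between successive accelerometer readings, then hold off long enough that one fling doesn't trigger twice.  Mine lives in the Processing sketch on the host, but the same logic as a self-contained Arduino-style sketch -- threshold and hold-off numbers invented -- looks about like this:)

    // Fling detection, boiled down: trigger on a large jump between
    // successive accelerometer readings, then ignore further jumps for
    // a short hold-off window.  (Illustrative values, not the real ones.)
    const int ACCEL_PIN = A0;             // one axis of an analog accelerometer
    const int JERK_THRESHOLD = 120;       // ADC counts of change per sample
    const unsigned long HOLDOFF_MS = 250;

    int lastReading;
    unsigned long lastTrigger = 0;

    void setup() {
      Serial.begin(9600);
      lastReading = analogRead(ACCEL_PIN);
    }

    void loop() {
      int reading = analogRead(ACCEL_PIN);
      if (abs(reading - lastReading) > JERK_THRESHOLD
          && millis() - lastTrigger > HOLDOFF_MS) {
        lastTrigger = millis();
        Serial.println("TRIGGER");        // stand-in for the MIDI note event
      }
      lastReading = reading;
      delay(5);                           // crude, steady-ish sample rate
    }

The real work is tuning those two numbers against an actor in motion, not me flicking my wrist at a desk.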



But.  Well, first off, at this point it doesn't really do anything a sharp-eyed Stage Manager and a button can't do.  Second, it is a bit too much of a lump with the battery pack and all.  Might be acceptable with a LiPo, and might work better with the parts re-arranged a little, but it is basically more than you want on your wrist if you can help it.

The concept of the DuckNode is that it is complete; no messing around running wires under a costume, just strap on the sensor pack and do the scene.  Or stick it on the set as a remote sensor, or even stick it up (or put it inside a prop) as a remote button.

The idea being a general-purpose package: trading away the gains of optimizing for a specific need, for the ease of just pulling a standard part out of the kit.

But in its present form it is a little too large to be comfortably worn on the forearm, and a little too streamlined to be of use in other applications.  So I will probably have to make the pack a little larger, add more buttons and batteries, but run a wire for a remote sensor...even a wire down to a wrist band (or, for that matter, wires to glove contacts or a flexi-strip trigger-finger sensor).



I'm also putting my POV wand on the back burner.  When I get more time I'm going to dress up the more-or-less-current design with double the LEDs, brass tubing, and some extra display modes.  But I don't think it will be in this show.



What will be in the show is the voice-controlled LED controller.  The dimming is PWM, although you can hardly tell (I might need to massage the code some more with a lookup table).
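(If I do, it is a one-time fix: perceived brightness is roughly logarithmic in duty cycle, so you pre-distort the values before they hit the PWM register.  A sketch of the idea -- the 2.2 exponent is the usual rule of thumb, not something I've tuned:)

    // Linear PWM values bunch up at the bright end, so remap each level
    // through a gamma table before writing it out.
    byte gammaTable[256];

    void setup() {
      for (int i = 0; i < 256; i++) {
        gammaTable[i] = (byte)(pow(i / 255.0, 2.2) * 255.0 + 0.5);
      }
      pinMode(9, OUTPUT);                 // PWM out to the driver transistor
    }

    void loop() {
      byte level = analogRead(A0) >> 2;   // wherever the level comes from
      analogWrite(9, gammaTable[level]);
    }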

AFTER mucking about with all that extra circuitry, THEN realizing I couldn't just port Arduino software onto the chips I had on hand, I went back to the manual and finally got a grasp of how you handle the ADC (analog read) on the bare metal.

And.  Working within the Arduino abstraction makes for fast coding, but it is amazing what is being hidden from you sometimes.  In the AVR series -- even in the ATtinys -- the ADC already has multi-sampling, adjustable gain, and a differential input mode (aka you could feed it a balanced audio signal).
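For my own reference (because I will forget again), a basic single-ended read on the metal comes down to three registers.  Something like this on an ATtiny85 -- assuming the ADC2 input on PB4 and the 8 MHz internal clock:

    // Bare-metal analogRead() on an ATtiny85 (avr-gcc, no Arduino layer).
    #include <avr/io.h>

    void adc_init(void) {
        ADMUX = (1 << MUX1);                   // Vcc reference, ADC2 (PB4)
        ADCSRA = (1 << ADEN)                   // enable the ADC
               | (1 << ADPS2) | (1 << ADPS1);  // clock/64: 125 kHz at 8 MHz
    }

    uint16_t adc_read(void) {
        ADCSRA |= (1 << ADSC);                 // start a conversion
        while (ADCSRA & (1 << ADSC))           // spin until it completes
            ;
        return ADC;                            // the 10-bit result
    }

    int main(void) {
        adc_init();
        for (;;) {
            uint16_t level = adc_read();       // ...and do something with it
            (void)level;
        }
    }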

Plus I realized a while back that the AVR has protection diodes and pull-up resistors built into its ports.  So the only parts you really need are a couple of passives to build a basic low-pass filter (the ADC can run into aliasing problems on high-frequency inputs).

So I am very tempted to code up a better version of the circuit on the ATtiny85's that just arrived today.

Except what also came in the mail are my MAX485's for building a DMX512 receiver.  For which there is an Arduino library.  So perhaps I should toss the box and put together an Arduino shield instead (containing a DMX512 port, the heat-sinked TIP120, analog trim pots that allow real-time adjustment of internal setpoints, and a proper kill switch).

Well, I do have to test the DMX circuit.  And see if I've got Arduino-Tiny working for ATtiny85's (with care I won't brick these, too!)  I'm not sure I'll need a voice-controlled 60v PWM in the near future, so it probably makes the most sense to finish up what I've got and put it aside.

Thursday, June 27, 2013

Sometimes Hacking IS Better

Embarrassing.

It took all of ten minutes to run a wire from my computer's headphone jack to an Arduino, hook up the strand of LEDs I have wrapped around the monitor (via a TIP120), and throw a bit of software together to blink the one in reaction to the other.

But I wanted something neater.  I wanted it bulletproof.  At the moment, I've managed to make something full of holes.  Instead of being bombproof, it looks like a bomb.  From a cheap movie.



A day or two to get it all working on the breadboard; LM386 amplifier, LM741 balanced input stage, two trim pots for gain and DC offset, and another of those ubiquitous 7805 voltage regulators.

Of course when the time came to put it in a box I tried to be fancy.  And thus blew a full day on just that.  Adafruit Altoids-sized Perma-Proto board.  Not quite enough room for jacks, so I stuck Euro-strips on the outside of the mint tin.

First problem: the TO-220 style heat sink was too big to close the lid on.  I may hit five amps on this (I suspect it will be more like two amps in practice, but I'm trying to play safe, right?)  So the box is without a lid.  And the components got even more cramped.

Second problem: I bricked both my ATtiny45's, and there isn't a simple way to port Arduino code (or use the Arduino IDE) onto my one remaining AVR, a venerable ATtiny13.

I have some ATtiny85's on order (more 8-pin DIPs that will run at 8 MHz on the internal oscillator).  In the meantime I had to get the rest of the circuit running, so I strung wires to an Arduino that very much doesn't fit in the box.

(This may still be a problem; the code will need adapting even for the ATtiny85's.  I might need to program in C anyhow.  But I lost my old AVR toolkit in a computer crash, and the new one seems to make it much more roundabout -- in the interest of "better and more efficient," of course! -- to access the registers.  Plus, I never did figure out how to work the ADC, which turns out to be rather more complex than doing PWM and other timer interrupts.)

Oh, and although it worked fine on the breadboard, the balanced input stage refused to work in the box.  I rebuilt a duplicate on the breadboard and patched it in, and it still doesn't work.  So I gave up and jumpered over it.

Which means at this stage, I could have just popped a shield on an Arduino and been done.  Not like I'm really going to need this specific circuit again, right?



It also struck me, as I was setting trim pots and juggling op-amps, that I could have just done the whole thing analog.  I did it in digital because that gave me the ability to do signal processing in code: control the dwell time and so forth.  Except I'm learning that the audio signal varies so widely, software adjustment isn't sufficient.  I needed analog real-time adjustments (aka trim pots -- even if one of them is fed into an ADC port in order to set an offset number).
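In sketch form, the hybrid I'm converging on -- digital dwell, analog set-point -- is about this simple (pins and numbers invented):

    // The light holds for a moment after the level drops, and the
    // threshold is trimmed with a screwdriver instead of a re-upload.
    const int AUDIO_PIN = A0;
    const int TRIM_PIN  = A1;               // trim pot wiper into the ADC
    const int PWM_PIN   = 9;
    const unsigned long DWELL_MS = 120;

    unsigned long lastAbove = 0;

    void setup() {
      pinMode(PWM_PIN, OUTPUT);
    }

    void loop() {
      int threshold = analogRead(TRIM_PIN);   // live-adjustable set-point
      if (analogRead(AUDIO_PIN) > threshold) {
        lastAbove = millis();
      }
      analogWrite(PWM_PIN, (millis() - lastAbove < DWELL_MS) ? 255 : 0);
    }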

So no pictures for this entry.  I'm too embarrassed by the thing.  Pics after I've patched it up into something nicer.



So my other show-off project before tomorrow's meeting is to fake up the first DuckNode.  One problem: the only analog accelerometer I have is on my POV stick, and I don't want to take that apart just yet.  So I may have to wire up the I2C accelerometer, adapt the code to a friendly I2C library...and THEN wire the analog one onto an XBee.

The big question is whether I can remember what I'm doing fast enough to adapt my work-in-progress Processing framework to detect the remote accelerometer.  I could use another weekend to work on this!

Sunday, June 23, 2013

How Not to Use a Breadboard


So here's the proof-of-concept on the "Dalek Eyestalk" effect.  It is okay to do a proof this way, because all you are trying to find out is if a circuit can work.  Not if it does work -- that will have to wait until you've built it in more robust fashion.

1/8" audio jack from my computer, plugged directly into Arduino analog input.  A simple program averaging samples over a short stretch of time and setting a PWM value.  PWM output driving a TIP120 Power Darlington, which is switching a 12v power supply into a string of red LEDs.

On the output end, this will do.  I could put in an opto-isolator, or something else to protect the micro from accidents.  There is no back-EMF from LEDs (unlike switching a relay or similar), and a single TIP120 will handle five amps -- plenty to deal with the strips I mean to use.  About the only major change for the final circuit is that it would be good to bolt on a heat sink.  With 2 feet of LED strip, it didn't even get warm to the touch.

On the input end, though: audio is AC, so I'm exposing the input to negative voltage swings.  And I have practically no isolation, especially to keep hum from coming back into the rest of the audio system.  And I'm using a headphone output for the proof-of-concept, whereas the final version has to be driven off line level.

So I need to create a circuit that will, at the very least, buffer the input.  Better yet: isolate it, amplify it, and allow me to use a balanced signal line.  An op-amp should be perfect for the task.

But I didn't really need to get into audio electronics on top of everything else I have to solve to get this show up (including working out how to speak DMX512).



Oh.  The current form of the effects box takes audio output from the sound board (meaning I can use the sound board to compress/expand, set a noise threshold, etc.), and turns it into PWM for about 8 meters of LED strip that will be attached to the set.

But it would be useful if we could also turn on the lights from the light board.

The simplest solution would be to stick an opto-isolator on an existing dimmer.  But that ties up a dimmer, and those are always in short supply.

Also fairly simple is to use the MIDI functions of the ETC "Express" series consoles.  Except.  It turns out they don't generate MIDI from a channel.  Only from cues and from the bump buttons on the submasters.  Not quite as useful.  Worse, experience at various theaters is that ETC boards tend to bork when you ask them to send MIDI.  Receiving MIDI, they can deal with.  Sending it, though...bad.

From a systems perspective or an operator's perspective, the simplest thing is to add the box to the DMX universe, with its own start address.  Then it just gets controlled like any other light.  (You could even drive it like a smart fixture, with one or more DMX channels setting behaviors for the circuit.)

But DMX is non-trivial on the Arduino (unlike MIDI).  It is extremely timing-sensitive, and most DMX libraries involve assembler, hacking the IDE, or similar ugly stuff.  Plus, the interrupt-driven nature of the critical timing means a lot of the expected Arduino functionality (delay loops, for instance -- although those are works of the Devil anyhow) goes away.

It can be and has been done.  Even DMX reception and PWM output to lights.  It just is more involved than I would like it to be.
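For the record, the receive side with one of those libraries looks deceptively tame.  A sketch using the DMXSerial library -- assuming I have its API right, and with the usual RS-485 transceiver sitting between the DMX line and the hardware serial pins:

    // One DMX channel in, one PWM dimmer out.  The library owns the UART;
    // the transceiver chip does the electrical part.
    #include <DMXSerial.h>

    const int PWM_PIN = 9;          // to the TIP120
    const int DMX_ADDRESS = 100;    // our slot in the universe

    void setup() {
      DMXSerial.init(DMXReceiver);
      pinMode(PWM_PIN, OUTPUT);
    }

    void loop() {
      // Track our channel while the console is talking; fail to black
      // if the line goes quiet for a few seconds.
      if (DMXSerial.noDataSince() < 5000) {
        analogWrite(PWM_PIN, DMXSerial.read(DMX_ADDRESS));
      } else {
        analogWrite(PWM_PIN, 0);
      }
    }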

Oh.  And it really requires an interface chip.  I'll order a few now, but I'm not expecting this is going to happen today.



I may or may not include MIDI functionality.  Once again, I had an existing MIDI circuit, but I'd done a less thorough job of hanging on to the source code.  But once again, I eventually found the back-up.  It isn't the greatest MIDI read framework ever, but it is clean enough to use again.





Thursday, June 20, 2013

Wizardry

So now it is official.

I have several projects I've promised to work on, with a deadline already looming.

1.  Put LEDs on a hat.

2.  Put a strip light on voice control.

3.  Create a gestural interface.

4.  Keep working on my POV wand if I get time.


The first is a no-brainer.  My "Cree" 3W RGB's arrived in the mail, as did several meters of 6mm light pipe.  I can do this with some 1-watt resistors (I've learned THAT lesson!) but I'm holding out for the constant-current drivers I also put on mail order.  I don't have any strips, or really many discrete LEDs, though.  I was pricing some nice 50mA greens in Piranha-style cases at Digikey.  But I think there's enough time to wait and see what the hat looks like before I have to make another shopping trip.

This might just be hard-wired.  The most complicated part might be the power button.  But...I am fully prepared to go RGB with it, and do some chases or something.

At the moment I am lacking most of the "DuckNode" system I've been trying to design over the past few years.  Which is to say; I don't currently have a plug-and-play solution to send wireless control out to the stage and into a costume for an effect like this.

(The local Orchard Supply Hardware just got some vintage style oil lamps in stock.  I'm pretty strongly tempted to stick a Cree, an ATtiny implementation of my custom "blink" code, and an XBee for hand-rolled remote control into one of them.  Except the oil tank is small enough I'd probably want to use a LiPo for power, with a USB jack and a charging circuit.  And that makes this an $80 lamp....)



2.  I picked up an 8-channel EQ display chip, which should do just fine for voice control.  Except it spits out its analog data serially, which means you pretty much want a micro to control it, and I'm starting to run out of free AVRs (Arduino or not).  I may just have to solder up another protoboard with an ATtiny on it, and do another pretend-it-is-an-Arduino clone, because I just don't have time to think my way through straight C and the AVR toolchain.
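The read loop for this class of chip is at least short.  A sketch assuming an MSGEQ7-style interface -- a reset pin, plus a strobe pin that steps the single multiplexed analog output through the bands -- with made-up pin choices:

    // Pulse RESET, then each STROBE pulse advances the analog output
    // to the next frequency band.
    const int RESET_PIN  = 7;
    const int STROBE_PIN = 8;
    const int OUT_PIN    = A0;
    int bands[7];

    void setup() {
      pinMode(RESET_PIN, OUTPUT);
      pinMode(STROBE_PIN, OUTPUT);
      digitalWrite(RESET_PIN, LOW);
      digitalWrite(STROBE_PIN, HIGH);
      Serial.begin(9600);
    }

    void loop() {
      digitalWrite(RESET_PIN, HIGH);      // restart at band 0
      digitalWrite(RESET_PIN, LOW);
      for (int i = 0; i < 7; i++) {
        digitalWrite(STROBE_PIN, LOW);    // advance to the next band
        delayMicroseconds(40);            // let the output settle
        bands[i] = analogRead(OUT_PIN);
        digitalWrite(STROBE_PIN, HIGH);
        delayMicroseconds(40);
      }
      Serial.println(bands[2]);           // e.g., a mid band as the "voice" level
    }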

The tough part is controlling anything more than a strip of LEDs.  I've been reading up on triac dimming, and I'm really not wanting to do hasty work around raw AC.  So the best options at this point for controlling lights at line voltage are:

     a.  PowerSwitch Tail.  This is a plug-and-play relay box rated for 15 amps.  At a 10ms cycle time it isn't going to do PWM, but I'm willing to bet that timing a pure off-and-on is going to be close enough for a Dalek Eyestalk effect.

     b.  Cheap Dimmers.  American DJ-type baby dimmer packs are as cheap as ninety bucks on Amazon.  Only four channels at a whopping 5 amps each, but it is a solid metal case and more-or-less UL listed.  Also, the one I'm looking at will take a 0-10v analog control signal, which is easy enough to fake up with an external power supply and some TIP120's.

     c.  Learn to talk DMX.  It has been done on the Arduino, and there are even libraries.  It isn't trivial -- in several ways it is messier than MIDI.  Of course, some packs you can talk to with MIDI as well, but spitting a LOT of MSC (MIDI Show Control) at an ETC light board is a good way to crash the board and stop the show.  So, yeah...I'll pass on the odd MSC for a single effect, but trying to do a PWM-type effect that way would be even more stupid than wiring up my own triac-based danger shield.


3.  And this one gets interesting.

I've already shown I can detect a punching motion (which is a possible "Tim the Enchanter" magic gesture.)  Not sure exactly how best to detect a snap of the fingers, if that's how the actor wants to go. 

Hold on.

Ouch ouch ouch.

Yes...I can detect the jarring motion of a snap of the fingers with an accelerometer mounted in a wrist band.  Or attached to a wrist with masking tape.  Ouch ouch ouch.

The tougher part would be discriminating -- better, specifying -- which of several DIFFERENT effects is to be triggered via pointing at them whilst gesturing.

The simplest solution is a compass.  Or, rather, a tilt-compensated triple-axis magnetometer.  Those aren't that expensive, but they only talk I2C.  Which means I can't talk to them with a naked XBee, either.  I'd need an AVR to read off the magnetometer chip, and then tell the XBee to transmit the signal to the interpreter.
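The AVR's share of that job is small, at least: the whole transaction is a register read over the Wire library.  A sketch with placeholder addresses -- the real ones come from the datasheet of whatever part I end up with:

    // Read one axis from an I2C magnetometer.  MAG_ADDR and REG_X_MSB
    // are placeholders, and MSB-first byte order is an assumption.
    #include <Wire.h>

    const byte MAG_ADDR  = 0x1E;    // placeholder I2C address
    const byte REG_X_MSB = 0x03;    // placeholder data register

    int readAxis(byte reg) {
      Wire.beginTransmission(MAG_ADDR);
      Wire.write(reg);
      Wire.endTransmission();
      Wire.requestFrom(MAG_ADDR, (byte)2);
      int msb = Wire.read();
      int lsb = Wire.read();
      return (msb << 8) | lsb;
    }

    void setup() {
      Wire.begin();
      Serial.begin(9600);
    }

    void loop() {
      Serial.println(readAxis(REG_X_MSB));  // next stop: hand it to the XBee
      delay(100);
    }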

Which in any case brings me squarely back to the DuckNode, and the Processing host software I have only begun to write.  (Current status?  I can select available ports on the fly, and recognize individual pin status).

Fortunately, #3 is purely a stretch goal.  First I need to get that voice-activated light working on the breadboard.  Maybe I'll even dare some 120v relays -- just for a proof of concept, mind you.




Oh, and I realized something about the POV circuit.  The persistence-of-vision effect only lasts about 250 milliseconds.  But if a lighting effect is much brighter than the ambient light, you will also get afterimage.  Which can linger quite a bit longer.

In daylight, my POV circuit isn't particularly good.  In a darkened room, I "see" clearly a swath that crosses most of my body. 

So it is still plausible.  For the next iteration I'm moving the LEDs closer together, finding some way to diffuse them a little (they are rather brighter on-axis than off right now), and increasing their number.  I'm also finding that arbitrary patterns read better than words.  But at some point I really need to write a Processing routine that will turn a bitmapped image into a binary string, because I'm getting pretty tired of counting rows by hand as I manually type in 1's and 0's.
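When I do write it, the job itself is trivial.  Something like this -- a throwaway command-line C++ program instead of the Processing version I actually want, assuming the image is saved as a plain-text PBM (and that nobody has put # comments in it):

    // Turn a 1-bit image into paste-able strings of 1's and 0's, one
    // column per output line (the wand sweeps horizontally, so each
    // column is one time-slice).
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc < 2) { std::cerr << "usage: pov2bits image.pbm\n"; return 1; }
        std::ifstream in(argv[1]);
        std::string magic;
        int w, h;
        in >> magic >> w >> h;
        if (magic != "P1") { std::cerr << "plain PBM (P1) only\n"; return 1; }

        std::vector<std::string> rows(h, std::string(w, '0'));
        char c;
        int x = 0, y = 0;
        while (y < h && in.get(c)) {
            if (c != '0' && c != '1') continue;   // skip whitespace
            rows[y][x] = c;
            if (++x == w) { x = 0; y++; }
        }

        for (int col = 0; col < w; col++) {
            for (int row = 0; row < h; row++) std::cout << rows[row][col];
            std::cout << "\n";
        }
        return 0;
    }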

Tuesday, June 18, 2013

De-Bouncing

So you have this effect.

In a recent effects meeting, we were talking about a lighting effect attached to a costume.  If, we said, you put electrical contacts in the costume gloves, you could light the light by holding your fingers together.

That's clever.  It works well enough.  But make it two lights, and it starts to get complicated.  That's two gloves, two sets of contacts, and if the actor forgets themselves and presses their fingertips together...you've cross-circuited.  Which may be the all-purpose problem solver on Star Trek: The Next Generation, but works out badly in the real world.

This is yet another case of being unable to uncouple, in your mind, the physical trigger (call it a button, or call it a sensor) and the resulting effect.  If all you are using is basic circuits -- battery, light, switch -- then this is indeed how it works.  But an awful lot of theatrical effects -- including lighting effects built into costumes -- are already complex effects.

Which is to say: they already include intelligence.  And if your costume or prop already has an embedded microprocessor chasing a string of LEDs or counting down on a 7-segment display, it is just plain silly not to break out another pin for control input.

And, actually, circuits are cheap.  A switch needs to be mounted on something solid, and needs to be soldered in.  Solder in a stand-alone capacitance-sensor breakout board instead, and you have a latching circuit for your light.  The same number of wires to solder, no more expensive than a good switch, AND it doesn't need a sturdy surface to be mounted on.  And it has no moving parts to break or snag.
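And since this post is titled "De-Bouncing": the latch is two lines of code plus the usual de-bounce guard.  A sketch, assuming a breakout with a simple momentary digital output (pins and timing illustrative):

    // Momentary touch pad in, latched light out.  The timestamp check is
    // the de-bounce: edges that arrive in a flurry only count once.
    const int TOUCH_PIN = 2;     // digital out from the capacitance board
    const int LIGHT_PIN = 9;
    const unsigned long DEBOUNCE_MS = 50;

    bool lightOn = false;
    int lastState = LOW;
    unsigned long lastChange = 0;

    void setup() {
      pinMode(TOUCH_PIN, INPUT);
      pinMode(LIGHT_PIN, OUTPUT);
    }

    void loop() {
      int state = digitalRead(TOUCH_PIN);
      if (state != lastState) {
        if (millis() - lastChange > DEBOUNCE_MS && state == HIGH) {
          lightOn = !lightOn;                    // the latch
          digitalWrite(LIGHT_PIN, lightOn ? HIGH : LOW);
        }
        lastChange = millis();
        lastState = state;
      }
    }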

More discussion, and more rant, below the fold.  No circuit diagrams this time around, though.



Sunday, June 16, 2013

Equivocation


There's science, and there's engineering.  My latest project has been illuminating as to the difference.

I've been trying to make a POV device.  Specifically, the idea was a wand or staff that would be used in some capacity in our next production, "The Wiz."  What I chose to display would depend on the quality and detail of the display.  The first step was to see if it would actually work.

My latest setback is running foul of the Weak Equivalence Principle.  But this is, oddly, the first problem I've had with the science of the thing.  Everything up to this point has been, well, errors in execution.

Here's the current test platform:




Friday, June 7, 2013

...and fricken lasers!

I keep hoping to get started on that machine gun.  Or, for that matter, the drum magazines.  But the hot projects on the table now are lighting effects for the next production.  So my work table is covered in light pipe, LED's, Arduino clones, and laser diodes.

Like these:





Wednesday, June 5, 2013

"Geordi, reconfigure the Main Deflector Array into a Phase-Variant Hotdog Grill"

More and more, it seems everything I'm working with is defined as much by the software layer as by the hardware.  I'm just soldering up some color-changing lightpipe for a possible effect.  The actual hardware -- the Cree 3W RGB and the flexible lightpipe from SparkFun --  may take longer to assemble, but the important behavior is all in the software.

When I get my prototype DuckNode working, it will be even more weighted towards the software layer.  An hour or two to wire up the XBee and accelerometer.  Days or weeks to develop the software in Processing.

And there's something about this kind of flexibility -- this idea that essential behavior is determined by small and easily edited lines of code -- that seems very hard to communicate sometimes.




Or another way of putting it; The switch can be whatever you want.

I was at a meeting today, and we were discussing some effects options.  And I could tell that part of the consideration was how the actors would be able to reach tiny switches and click through options -- because that is how most consumer "novelty lighting effects" are built.

Even when I bring out a demonstration, whether proof-of-concept or breadboard or even a packaged-up, "This is how we used it in the last show" form, it seems that the interaction with the device is seen as fixed.  However you turn on the demo, that would be what the actor had to do.

Which is exactly opposite from the perspective of a hacker/builder.  For us, the control is whatever we slapped on once the rest was working, and the particulars are unimportant because it is easy to change.



Re-visiting the idea of Theater As a Hackerspace, that's the essential difference between a store-bought effect and one that is built from components.  You understand the component one.  You build in what you want.  And you can change it just as easily.

And this doesn't even get close to understanding the potential of linking effects.  Because when your perspective is purpose-built, store-bought items, any control strategy you come up with to link effects is going to be cobbled-up.  And I've seen some pretty sophisticated cobbling (one of my favorites was a video system that used three DVD players stacked in front of a single remote control taped to a desk, so they would all play and pause together).

It seems extremely difficult to get the non-geeks to break out of that paradigm and think the way we do; that it is all software.



I've seen these conversations in a hundred places.  The paradigm is so deeply seated there isn't even a pause as the idea of modification is discarded.  Instead the conversation leaps to how to, say, adjust the actor's movement to make up for the fact that the effect turns on instantly.  It is as if the conversation starts in the middle; "So the actor will turn it in his hand to hide the way it comes on, and..."

So you've moved past the place where you can say, "I programmed it to come on instantly because that's what I thought we wanted.  It is one line of software.  Five minutes to upload new code, and it will come on slowly instead."

"No, that's too complicated.  We already figured out how to work around it."

Well, okay.  Often the work-around is fine.  But that means we are also skipping past more subtle possibilities that would give us an enhanced artistic control.  Maybe there's a potential gag where the actor presses the button, no light, then it flickers.  He shakes it.  Flickers again.  Hits it, and now it comes on. 

If you didn't realize all of that behavior is, potentially, five minutes of uploading fresh code, you don't realize you could have this kind of moment.

And, yes, we modify props.  We modify costumes.  We modify the score, the programming on the lights.  Those don't seem to be "too complicated" to ask for adjustment on. 




But maybe there's a subtler effect in play here.   Software is peculiarly ephemeral.  To the masters of the software (as if!) this flexibility is an asset.  Since we think in terms of the core behavior, changing the desktop theme is unimportant.  We adjust easily to thinking of the red button as on and the green button as off instead.  Or whatever else we changed, created, or just plain got wrong and are too busy to correct just yet.

For others, the territory is dark and the map is the only key.  Thus the map becomes in itself a kind of territory.  Turning the light "On" becomes inseparable from moving the switch UP.  Instead of dealing with the concrete of the actual light, they work instead with the abstraction of the instruction.

I worked at a place where this mindset was peculiarly endemic.  Everything was done there by incantation.  At one point a horrific confusion arose over how to turn on the backstage work lights.  Or, to be specific, which direction the switch needed to be flipped in order to turn the lights on.

From a systems perspective, the answer is generated ad hoc.  The necessary position for the switch is the one it isn't currently in.  (Unless the lights are already on...then you just leave the switch alone!)

For someone working by incantation, however, this is unacceptable.  There has to be a specific switch position that becomes, "The Lights Were Turned On" (regardless of whether any lights, out here in the non-solipsistic world, actually turn on.) 

Should I mention there were two switches, at either end of a hallway, wired as a reversing switch?  (Aka the lights could be toggled in state from either end of the hallway.)  The answer in that building, sadly, was to put tape over one of the two switches.  That was the only way to ensure that the answer to "Turn the Lights On" was always and would always be "Turn the Switch to Up."

And, yes, even with a much milder form of that disease, the idea that the control layout and other operator-interface behavior can change or be changed at a whim is, perhaps, not exactly a selling point.  Particularly when some of us are so lax with our documentation!





In any case, the trick now is getting the rest of the production team to realize that all those wonderful silly things in stores -- things that light up or react to a voice or turn their heads -- can be done at a component level and, thus, be just as flexible as hand-built costumes or constructed scenery.

But then, sound fought this battle a decade ago.  Now directors are entirely too used to the idea that they can ask for a sound effect to be faster, slower, placed differently, in a different key, or whatever.  And that this change can take place in time for rehearsal!




Perhaps it is just as well that the flexibility of the software layer is as yet mostly untapped...

Monday, June 3, 2013

Diegesis Lifts the Lamp

I got interested in sound design because I wanted to create sounds.  Complex sounds.  Realistic sounds.  Soundscapes of imaginary environments and small stories in sound.  This was a time of a mild renaissance in radio theater: the radio Hitchhiker's Guide to the Galaxy, the Star Wars radio play, and of course new airplay for the old standards -- The Shadow, The Whistler, Flash Gordon, and so on.

Those are still the kinds of sounds that are the most fun to make.  Effects that describe an imaginary object or scene in detail, like the "street sounds in Amsterdam" cues from Anne Frank or the "two boats collide during cargo transhipment" from Mister Roberts.  But those are also the effects that feature less and less in my work.

When I was doing mostly plays, I composed original music.  And I worked hard to achieve realism in the effects, worrying about placement and acoustic cues.  As one for-instance, if there was a radio or record player or intercom on stage, I'd put a speaker in it or as close to it as possible.  For period telephones we went even a step further; instead of playing a recorded sound, we'd generate a Bell-spec ring voltage and send that to an actual phone, on stage!

I of course hunted up lots of effects, and did quite a bit of recording of my own.  For a friend's design of Of Mice and Men I dragged the cast out onto the lawn and staged and recorded an impromptu game of horseshoes (using stage weights and a bit of pipe!)

But then came musicals.

Musicals of course have much less space for the sound designer to create elaborate underscores and transition music.  They also force effects cues to take notice of the musical environment.  I am often asking the music director for the key of the music before I create and use an effects cue that has a defined pitch.

As I've mentioned before, musicals also force most effects to be shorter and bolder.  You don't have the sonic space, or the time on stage, for a car to be heard in the distance on a gravel driveway, pull up, brake, engine off, doors open, etc. (the kind of story I'd tell in a straight play).  Instead you have two seconds, in the middle of a bunch of music, for the effect to say, "CAR'S HERE."

Subtlety is wasted.  Complexity must be eschewed.  And although you may still tell little stories in sound, or present environments, they have to be simplified.  A lot.  Instead of six layers of rain, wind, water on windows, and water in gutters, there are one or two.  Instead of hearing someone pick up a phone, get a dial tone, dial, and a ring or two, you reduce it to one rotation of the dial and then go to the voice.  It is Expressionism in sound.




The next step away from realism -- and diegesis -- comes with the company I do most of my work with now.  This is a company that doesn't hide the reality of actors on stage.  They don't try -- usually -- for flawless magic and high-tech effects.  Instead they use dance and gesture and fabric and simpler tools to tell the story.  And by doing so, they retain focus on the actors and the experience of watching a live play.

Not to say I'm not still, sometimes, designing realistic cues.  For "The Sound of Music" all the cues were diegetic; they were the sounds heard by the characters in the play.  But for most shows, I have to describe the cues I create as Presentational.

Our Willie Wonka was strongly 70's.  I chose to use sound effects the way they would have been used in period television drama and music albums.  Most of the effects were distinctly electronic in nature.  The "realistic" effects -- aka sampled sounds instead of synthesizer sounds -- were short snippets, dry and compressed, in the way short audio samples were used in recordings of the period.

For Alice I am taking it a step further.  The framing story of that rather peculiar Disney script has ALLISON texting and Facebooking and Tweeting -- and losing her sense of self in all the media drama and high-pressure salesmanship of the online world.  This carries through, in that it is easy to think of everything in Wonderland as a dreamlike mish-mash of her online experiences.

Heck, for that matter, Alice's travails to find the right combination of Shrinking Potion and Enlarging Cookie that will get her through a tiny door with a large key on a tall shelf have always felt way, way too much like the kind of action in those horrible platform games.

So I chose to think of the sounds as the sound effects for a video game.  Even though some of them are linked to events, or even to sound producers on stage, they are artificial: loud, played without any attempt at placement in space, compressed and unrealistic.  When the WHITE RABBIT jumps in the hole, there is a loud "Boing!" straight out of Q*bert or something.  When ALICE bites into a cookie, there is the kind of short (and oft-repeated) eating sound you'd get in an old-school RPG when your character picked up a handy apple for a couple of quick health points.

And, yes, I am extending the palette far earlier than the up-to-the-minute iPad universe of the framing story.  I am using in two places, for instance, the distinctive sounds played out of ROM when an early Macintosh fails to boot (the Centris "bing bobbly bump" and the 6100-series "tires screech and crash" sounds).

The earliest part of the show is the most diegetic, in that ALLISON is actually on the phone and computer.  I assigned various Keyclick, New Message, Joining Chat, and other sounds to my little Korg Nanokey, and am improvising those along with the action.

You would think that the sounds of texting on an iPad would be easy to find.  Not so much; I don't own one.  It took a fair bit of searching, mostly through forums where tech geeks were hacking a Newton or similar to get a similar GUI experience, before I found the samples I wanted.  Because, of course, this had to be right: half the cast, most of the production staff, and at least half the audience own and use Apple devices, and for the design to work I needed these sounds to be instantly recognizable from their experience.  Even if the experience presented on stage is abstracted somewhat.

(Yes, the simpler solution technically would be to borrow the iPad from another production member, and hook up to the headphone jack; the loss from going analog is completely acceptable for this application.  Except tech was incredibly hectic and stressful, all the production staff were running about madly -- and using the gear they had brought almost constantly -- and even though I was stuck at the sound board for hours at a time I could still spare a moment here and there for a web search.  So pulling the sounds from online sources was actually the most efficient solution, oddly enough!)



This seems to be a continuing arc.  The next big design up is The Wiz.  For that, I really want to work closely with the band.  I want some big showcase effects, but I also feel strongly that for musicals -- especially musicals where the stage effects are treated non-realistically (as in, fabric and acting for Willie Wonka's Chocolate River and Pink Candy Boat, and child actors for the Nut-Sorting Squirrels) -- the sound effects should be a part of the music.

Which is to say, either performed by the pit, or executed in synchronization with the pit.  Or musical in nature (and, again, coordinated with the pit).

What I haven't worked out is how to do this.

The way production and tech fall out at my current design venue, the Music Director is under a lot of stress already.  They have to somehow adjust to the hundreds of changes that keep flying past, with, often as not, requests for entirely new music (often to fill an unexpected scene change, but also to follow an extended dance break that got added at some point in development).  By the time I get in there ready to fold in effects, the pit is over-tasked and pretty thoroughly confused, and wants nothing more than to somehow edit down their thousands of scribbled notes into something they can actually play.

The flip side of this -- as I discovered in Willie Wonka -- is that after we've opened and the band gets comfortable with the material they are dealing with, they start expanding into the formerly bare sonic spaces.  Often as not over-writing the material I spent tens of hours creating.

And I don't complain, because, as I said, it sounds better when it is from the pit.

But it makes the entire process even more frustrating for the Sound Designer.