Wednesday, June 5, 2013

"Geordi, reconfigure the Main Deflector Array into a Phase-Variant Hotdog Grill"

More and more, it seems everything I'm working with is defined as much by the software layer as by the hardware.  I'm just soldering up some color-changing lightpipe for a possible effect.  The actual hardware -- the Cree 3W RGB and the flexible lightpipe from SparkFun --  may take longer to assemble, but the important behavior is all in the software.

When I get my prototype DuckNode working, it will be even more weighted towards the software layer.  An hour or two to wire up the XBee and accelerometer.  Days or weeks to develop the software in Processing.

And there's something about this kind of flexibility -- this idea that essential behavior is determined by small and easily edited lines of code -- that seems very hard to communicate sometimes.

Or, another way of putting it: the switch can be whatever you want.

I was at a meeting today, and we were discussing some effects options.  And I could tell that part of the consideration was how the actors would be able to reach tiny switches and click through options -- because that is how most consumer "novelty lighting effects" are built.

Even when I bring out a demonstration, whether proof-of-concept or breadboard or even a packaged-up, "This is how we used it in the last show" form, it seems that the interaction with the device is seen as fixed.  However you turn on the demo, that would be what the actor had to do.

Which is exactly opposite from the perspective of a hacker/builder.  For us, the control is whatever we slapped on once the rest was working, and the particulars are unimportant because it is easy to change.

Re-visiting the idea of Theater As a Hackerspace, that's the essential difference between a store-bought effect and one that is built from components.  You understand the component one.  You build in what you want.  And you can change it just as easily.

And this doesn't even get close to understanding the potential of linking effects.  Because when your perspective is purpose-built, store-bought items, any control strategy you come up with to link effects is going to be cobbled-up.  And I've seen some pretty sophisticated cobbling (one of my favorites was a video system that used three DVD players stacked in front of a single remote control taped to a desk, so they would all play and pause together).

It seems extremely difficult to get the non-geeks to break out of that paradigm and think the way we do: that it is all software.

I've seen these conversations in a hundred places.  The paradigm is so deeply seated there isn't even a pause as the idea of modification is discarded.  Instead the conversation leaps to how to, say, adjust the actor's movement to make up for the fact that the effect turns on instantly.  It is as if the conversation starts in the middle: "So the actor will turn it in his hand to hide the way it comes on, and..."

So you've moved past the place where you can say, "I programmed it to come on instantly because that's what I thought we wanted.  It is one line of software.  Five minutes to upload new code, and it will come on slowly instead."
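That "one line" claim can be made concrete.  A hypothetical Arduino-style sketch (the function names and the two-second ramp are my own inventions, not the actual prop firmware) -- the entire difference between instant-on and a slow fade is the body of one small function:

```cpp
#include <algorithm>

// Hypothetical brightness curve for a prop LED, 0-255, as a function of
// milliseconds since the actor pressed the button.

// Version one: full brightness the instant the button is pressed.
int brightnessInstant(long msSincePress) {
    return msSincePress >= 0 ? 255 : 0;
}

// Version two: ramp up to full over two seconds.  Swapping this in for
// the line above is the whole edit -- five minutes including the upload.
int brightnessFade(long msSincePress) {
    if (msSincePress < 0) return 0;
    return (int)std::min(255L, msSincePress * 255 / 2000);
}
```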

"No, that's too complicated.  We already figured out how to work around it."

Well, okay.  Often the work-around is fine.  But that means we are also skipping past more subtle possibilities that would give us an enhanced artistic control.  Maybe there's a potential gag where the actor presses the button, no light, then it flickers.  He shakes it.  Flickers again.  Hits it, and now it comes on. 

If you didn't realize all of that behavior is, potentially, five minutes of uploading fresh code, you don't realize you could have this kind of moment.
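To sketch just how cheap that moment is to script (hypothetical code; the struct and the trigger names are invented for illustration), the whole gag is little more than a short table of cues, and the table is as easy to rewrite as any other data:

```cpp
#include <string>
#include <vector>

// One step of the "balky flashlight" gag: what the actor does,
// and what the prop does in response.
struct CueStep {
    std::string trigger;   // the actor's action
    std::string response;  // the prop's behavior
};

// The entire gag, expressed as data.  Reordering, cutting, or
// extending it is an edit-and-upload, not a rebuild.
std::vector<CueStep> balkyFlashlightGag() {
    return {
        {"press", "nothing"},
        {"wait",  "flicker"},
        {"shake", "flicker"},
        {"hit",   "full on"},
    };
}
```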

And, yes, we modify props.  We modify costumes.  We modify the score, the programming on the lights.  Those don't seem to be "too complicated" to ask for adjustment on. 

But maybe there's a subtler effect in play here.   Software is peculiarly ephemeral.  To the masters of the software (as if!) this flexibility is an asset.  Since we think in terms of the core behavior, changing the desktop theme is unimportant.  We adjust easily to thinking of the red button as on and the green button as off instead.  Or whatever else we changed, created, or just plain got wrong and are too busy to correct just yet.

For others, the territory is dark and the map is the only key.  Thus the map becomes in itself a kind of territory.  Turning the light "On" becomes inseparable from moving the switch UP.  Instead of dealing with the concrete reality of the actual light, they work instead with the abstraction of the instruction.

I worked at a place where this mindset was peculiarly endemic.  Everything was done there by incantation.  At one point a horrific confusion arose over how to turn on the backstage work lights.  Or, to be specific, which direction the switch needed to be flipped in order to turn the lights on.

From a systems perspective, the answer is generated ad hoc.  The necessary position for the switch is the one it isn't currently in.  (Unless the lights are already on...then you just leave the switch alone!)

For someone working by incantation, however, this is unacceptable.  There has to be a specific switch position that becomes, "The Lights Were Turned On" (regardless of whether any lights, out here in the non-solipsistic world, actually turn on).

Should I mention there were two switches, at either end of a hallway, wired as a reversing switch?  (That is, the lights could be toggled from either end of the hallway.)  The answer in that building, sadly, was to put tape over one of the two switches.  That was the only way to ensure that the answer to "Turn the Lights On" was always and would always be "Turn the Switch to Up."
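For the systems-minded, that hallway reduces to a one-line truth table (a sketch; depending on how the travelers are actually wired, it could just as easily be the negation -- which is exactly the point):

```cpp
// A two-switch "reversing" (three-way) circuit: the light is on
// whenever the two switch positions differ.  Flipping EITHER switch
// toggles the state -- which is why no single switch position can
// ever mean "the lights are on."
bool lightsOn(bool switchA, bool switchB) {
    return switchA != switchB;  // XOR of the two switch positions
}
```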

And, yes, even with a much milder form of that disease, the idea that the control layout and other operator-interface behavior can change or be changed at a whim is, perhaps, not exactly a selling point.  Particularly when some of us are so lax with our documentation!

In any case, the trick now is getting the rest of the production team to realize that all those wonderful silly store-bought things that light up or react to a voice or turn their heads can be done at a component level and, thus, be just as flexible as hand-built costumes or constructed scenery.

But then, this is where sound was a decade ago.  Now directors are entirely too used to the idea that they can ask for a sound effect to be faster, slower, placed differently, in a different key, or whatever.  And that this change can take place in time for rehearsal!

Perhaps it is just as well that the flexibility of the software layer is as yet mostly untapped...
