Sunday, June 16, 2013

Equivocation


There's science, and there's engineering.  My latest project has been illuminating as to the difference.

I've been trying to make a POV device.  Specifically, the idea was a wand or staff that would be used in some capacity in our next production, "The Wiz."  What I chose to display would depend on the quality and detail of the display.  The first step was to see if it would actually work.

My latest setback is running afoul of the Weak Equivalence Principle.  But this is, oddly, the first problem I've had with the science of the thing.  Everything up to this point has been, well, errors in execution.

Here's the current test platform:







So.  Persistence of Vision devices work on the principle that the human eye/brain "sums up" imagery reaching it over a span of time.  Perceptually, visual stimuli arriving within a certain narrow slice of time are constructed as a single image.  Movies and video are built on this principle.  It is also the physical basis of the "motion blur" effect as seen in comic books and re-constructed in digitally rendered images (film automatically smears moving objects in a somewhat similar way).

As it is used today, POV also refers to a class of device which leverages this principle to produce the illusion of a bitmapped image from a single row of LEDs in fast physical motion.  The desktop and similar display models rotate a stick of LEDs at high speed.  The "SpokePOV" from Adafruit uses the rotation of a bicycle wheel for similar effect.

I did a quick bit of math.  According to Wikipedia the POV effect covers stimuli within about a 1/25th of a second window.  A typical baseball bat reaches 60 MPH at the moment of impact.  Assume 1 meter of travel from one side of the body to the other...convert a whole bunch of units...and theory says I should be able to wave a stick fast enough to get 20-60 cm of "image trail" behind it.
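
To put rough numbers on it (mine, rounded hard): 60 MPH is about 27 m/s, so a bat tip covers roughly a meter inside one 1/25-second window.  A hand-waved stick is slower -- call it 5 to 15 m/s through the middle of the swing -- and 0.04 seconds at those speeds works out to 20 to 60 cm of trail.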

Which is enough to try a proof of concept model.




I had a 4' chunk of PVC pipe lying around, so I taped eight super-bright LEDs to it and soldered up some ribbon cable.  My original thought was to keep the circuits at the other end of the cable while I was experimenting, but waving the stick was way too vigorous for that.  So I had to hot-glue the circuit board and a battery pack to the prototype before I could wield it properly.

The first program was a hardware check; light all the LEDs in sequence.

Next program was to chase them at an arbitrarily fast speed.  Waving it around, I saw a clear zig-zag line in after-image.  So the principle was good.  It was worth moving up to the next step.
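
The chase was just a few lines -- something like this (a reconstruction, not the exact sketch; the pins and timing are guesses, and the LEDs sink through the pins, so LOW is "on"):

// Chase a single lit LED up the row and back down, as fast as looks good.
// Eight LEDs wired to sink through digital pins 2-9, so LOW = lit.
const int firstPin = 2;
const int numLeds = 8;

void setup() {
  for (int i = 0; i < numLeds; i++) {
    pinMode(firstPin + i, OUTPUT);
    digitalWrite(firstPin + i, HIGH);   // everything off
  }
}

void blip(int i) {
  digitalWrite(firstPin + i, LOW);      // on
  delayMicroseconds(800);
  digitalWrite(firstPin + i, HIGH);     // off
}

void loop() {
  for (int i = 0; i < numLeds; i++) blip(i);          // up...
  for (int i = numLeds - 2; i > 0; i--) blip(i);      // ...and back down
}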




Unfortunately, this uncovered the first major flaws in the plan.  First, most dot-matrix display alphabets are on a 5 x 7 grid, not a 6 x 8.

Second, I felt I was going to need the power of direct port access (besides, it made for neater code).  The Arduino "digitalWrite" command compiles to something like 30 or 40 opcodes per pin.  "PORTD = B00001111" compiles to far fewer, and it sets all eight pins simultaneously -- no slanted letters.
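
The difference, side by side (this assumes the LED row sits on digital pins 0 through 7 -- which is what forced the re-soldering described below):

// One column the portable way: eight separate calls, the pins changing
// one after another, each call doing its own pin-to-port bookkeeping.
byte pattern = B00001111;
for (int i = 0; i < 8; i++) {
  digitalWrite(i, bitRead(pattern, i));
}

// The same column by the data register: one store, all eight pins at once.
PORTD = pattern;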

But when you go by the data register, the first eight I/O pins have to start with pin 0 -- PORTD maps to digital pins 0 through 7.  I'd started with pin 2 so as to leave the serial I/O uncovered.

The latter was a (relatively) quick re-solder job.  The former meant I ended up drawing up my own character set from scratch.

The eventual software will parse an ASCII string and use the 2d matrix as a lookup table.  I'm using that matrix now, but I'm merely stepping through it.
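
Stepping through it looks something like this (a sketch; the bit patterns shown are placeholders, not my real font, and this is still the "1 = lit" version that bites me in the next paragraph):

// Five column bytes per glyph, bit 0 = one end of the row, 1 = lit.
// (Placeholder patterns -- illustration only.)
const byte font[][5] = {
  { B01111110, B00010001, B00010001, B00010001, B01111110 },  // 'A'
  { B01111111, B01001001, B01001001, B01001001, B00110110 },  // 'B'
  // ...and so on through the character set...
};

const unsigned int COLUMN_MICROS = 900;   // time per column; tune by eye

// Step through one glyph, one column at a time, with a blank gap after.
void showGlyph(const byte *glyph) {
  for (int col = 0; col < 5; col++) {
    PORTD = glyph[col];
    delayMicroseconds(COLUMN_MICROS);
  }
  PORTD = B00000000;
  delayMicroseconds(COLUMN_MICROS);
}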

I also wrote the matrix with "0" as LED off and "1" as on, having forgotten I was sinking the LEDs instead of sourcing them.  So I needed to look up bitwise math again and convert each byte from the lookup table into a bit-inverted version.
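
The fix itself is tiny -- a bitwise NOT on the way out the door:

// Sinking the LEDs means a 0 bit is "lit", so invert each table byte
// as it goes to the port (and the blank gap becomes B11111111):
PORTD = ~glyph[col];     // e.g. B00010001 goes out as B11101110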




And here's where engineering turns into those practical nuances that really drive production.  Because I was designing this to be visible from stage distance, with LEDs spaced at 3 cm and a batter's zone of a couple of meters, there was no good way for me to see how it actually looked while sitting at my desk in a small apartment.

So I dragged the circuit out to the theater, set up a mirror from the dressing room on stage, backed off twenty feet, and played Star Wars Kid until I was thoroughly winded.

It was enough to show that I could get a visible pattern, and that the trail was long enough to form at least part of a word.  But it also showed there were enough errors in the current software that I needed to take it home and rewrite it.  And that there was a missing element in the hardware.




I had predicted and projected that to get a readable word (as opposed to an arbitrarily repeating pattern -- hence the design of the display matrix around adjacent 8 x 8 grid elements) I might need to install some sort of trigger.

The first sanity check here was to code up a downbeat.  The LEDs would flash three times in preparation, then fire off a single word.  This was good enough to confirm that at least part of a word was appearing legibly.
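
In outline, something like this (a sketch of the idea, not the production code; playWord() is just a stand-in for the table-stepping above):

// Downbeat: three warning flashes, a beat of dark, then the word, once.
void loop() {
  for (int i = 0; i < 3; i++) {
    PORTD = B00000000;     // all eight LEDs on (sinking: 0 = lit)
    delay(150);
    PORTD = B11111111;     // all off
    delay(350);
  }
  delay(500);              // the "...and..." before the swing
  playWord("WIZARD");      // stand-in: step the lookup table at the set rate
  delay(3000);             // rest before the next downbeat
}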

I didn't think it would work, but the next obvious thing to try was a trigger button.  Well, it was just possible to hit the button as you waved the staff, but it was so tough to coordinate that I immediately wrote it off for stage use.  It needed to be more automated than that!

Which I'd already anticipated.




I'd picked up a cheap accelerometer from Modern Devices at the previous Maker Faire.  The labeling was a little unclear, but after looking at the schematic I had enough confidence to string the wires.  I commented out the code I was working on and wrote a simple sketch that displayed more or fewer LEDs depending on the output along my selected axis.

For those new to that technology, the modern on-chip accelerometer is a bit of micro-scale machinery; inside the chip are some tiny weights suspended on flexible arms.  As acceleration is imposed on the device, the arms flex, and the chip generates an output -- for the cheaper models, an analog voltage for each sense axis.

Since I knew the accelerometer was a 3.3 V device, and I was running my breadboard Arduino off a 4.5 V battery pack, and the Arduino's analog inputs are 10-bit ADCs, a quick bit of math told me how much to divide the resulting value by to spread the theoretical range across my 8 LEDs.  Well, actually, I just assumed my swing would center at half the voltage, and then divided by 8.  But it came out close enough; when I fired up the circuit, four LEDs sprang to life.  And when I moved the wand, the number of lit LEDs changed.
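
That throwaway sketch, more or less (the pin, the rest point, and the scale factor are all guesses here, to be tuned against the real readings):

// Bar graph: light more or fewer of the eight LEDs according to
// acceleration along the chosen axis.  Sinking LEDs on PORTD.
const int ACCEL_PIN = A0;      // the selected axis of the accelerometer
const int REST_READING = 512;  // guess at the at-rest ADC value; tune it
const int COUNTS_PER_LED = 64; // scale factor; also a guess

void setup() {
  DDRD = B11111111;            // all of PORTD as outputs
}

void loop() {
  int offset = analogRead(ACCEL_PIN) - REST_READING;
  int lit = constrain(4 + offset / COUNTS_PER_LED, 0, 8);  // 4 lit at rest

  byte pattern = 0;
  for (int i = 0; i < lit; i++) pattern |= (1 << i);
  PORTD = ~pattern;            // invert: 0 = lit
  delay(20);
}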

Except.  Except.

I'd forgotten the Weak Equivalence Principle.

"Experiments performed in a uniformly accelerating reference frame...are indistinguishable from the same experiments...in a gravitational field..."

The moment I realized what I'd left off, I slowly rotated my test platform around my selected axis of sensitivity.  And, yes; the LEDs very neatly indicated the presence of a uniform acceleration of 9.8 m/s/s.

So now I have to figure out the right software/hardware tricks to distinguish between the relatively weak acceleration imposed by human muscles, and the acceleration -- albeit appearing at the inputs at an arbitrary and always-changing axis -- imposed by the 10^24 kilograms of dirt and rock under my feet.




 So.

The proof-of-concept proofed.

Masking the planet turned out to be as simple as keeping the stick mostly vertical, at which point the imposed acceleration swamps the residual.  A further refinement is to subtract out the Y-axis reading: detect how much gravitational acceleration falls off the Y axis as the wand tilts, and shift the zero point on the X axis proportionally.
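
In code that correction is only a few lines (a sketch; the calibration constants are made-up numbers that really come from reading the sensor held still, vertical and then flat):

// Crude gravity compensation, per the description above: as the wand
// tilts, gravity leaks off the Y (long) axis, so shift the X zero
// point by the amount Y has dropped from its straight-up reading.
const int X_REST = 370;       // X reading, wand vertical and still (made up)
const int Y_VERTICAL = 470;   // Y reading, wand vertical and still (made up)

int readSwing() {
  int xRaw = analogRead(A0);                // swing axis
  int yRaw = analogRead(A1);                // long axis of the wand
  int yDrop = Y_VERTICAL - yRaw;            // gravity falling off the Y axis
  return (xRaw - X_REST) - yDrop;           // X with its zero point shifted
}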

In the current proof, the stick triggers at about 0.6 g, waits forty milliseconds, plays the animation at a set rate, then is disabled for 120 milliseconds in order to mask the stopping transient.  This means it can't really be whipped back and forth, not for the word animation.
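
The trigger logic itself boils down to a handful of lines (again a sketch; the threshold in ADC counts depends on the calibration guesses above, and playWord() is the same stand-in as in the downbeat sketch):

// Fire once per swing: wait for roughly 0.6 g of swing acceleration,
// pause so the wand is up to speed, play the word, then sit out the
// stopping transient.
const int TRIGGER_COUNTS = 45;    // about 0.6 g in ADC counts (a guess)

void loop() {
  int swing = readSwing();        // gravity-corrected X, from above
  if (abs(swing) > TRIGGER_COUNTS) {
    delay(40);                    // let the swing develop
    playWord("WIZARD");           // stand-in: the animation, at its set rate
    delay(120);                   // ignore the stopping transient
  }
}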

With space to swing and a vigorous full-arm motion, the POV trail gets most of the test word, "WIZARD," into visibility.  With a more comfortable, less vigorous motion, you get about two letters at a time.  At the current spacing of 3 cm, the word pattern doesn't become clear enough to read until you are at least fifteen feet away.  And there is still a visible raster line at thirty.

Presuming I continue with the experiment, the next iteration gets tighter spacing, and probably more LEDs.  The ATmega chips can drive 14 rows natively through direct port manipulation; after that you'd need some form of expansion or multiplexing.  At over double the pixel density, the refresh rate would also double, which should sharpen the graphics as well.

Large graphics may be more effective than words.  

Of course, the most effective display would be one that appeared static in reference to the world.  This would create the illusion of the stick "wiping" or "revealing" the imagery, and I believe would make it much easier to understand longer chunks of text (or more complex symbols).  It would also allow continuous back-and-forth motion.

But that basically requires writing an inertial navigation system in software.  It doesn't have to be that accurate; slippage is perfectly fine, as long as the value of the slippage is not too high.  Since the assumed writing axis is horizontal, I could correct for tilt by consulting the other axes of the accelerometer.
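
The core of that is just double integration -- something like the outline below, which leaves out every hard part (drift, re-zeroing, the tilt correction).  The function name is a placeholder.

// Outline of the "static image" idea: integrate acceleration twice to
// estimate where the wand is along its arc, then draw whichever column
// of the image belongs at that position.  Keeping the drift acceptable
// is the actual problem.
float velocity = 0.0;             // m/s along the swing
float position = 0.0;             // m along the swing
unsigned long lastMicros = 0;

void updatePosition() {
  unsigned long now = micros();
  float dt = (now - lastMicros) * 1.0e-6;   // seconds since last update
  lastMicros = now;

  float accel = swingAcceleration();        // m/s^2, tilt-corrected (placeholder)
  velocity += accel * dt;
  position += velocity * dt;
}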

But am I up for doing that?  We had the meeting, but I didn't get a chance to show off the test platform to the director.  So I don't know if there is any place for this particular gadget in this particular show.
