I am tasked with building a couple of hand props. They don't have to stand up to much abuse (because they are basically demonstration items for a pencil-and-paper gaming group) and they don't have to have functional electronics (but they will, because that part is fun, and I'm resume-building, and "Will it light up?" is the second-most frequent question asked on The Replica Props Forum).
I know they need to be delivered by October, and the cost needs to be under $200 each. That is enough to begin planning the planning.
I've decided to use a convergent method to plan these. Since they are basically boxes with some knobs and lights on them, the first question was what kind of planning was necessary for these controls.
There are three basic strategies for controls and similar technical doo-dads an actor has to interact with:
1) Hide the function by using unlabeled buttons, Okudagrams, and similar.
2) Figure out what the controls actually do.
3) Ignore the problem completely.
A hybrid form is when the controls are simply assembled randomly but either fan canon or the actors themselves figure out the specifics of what some of them do. See "Galaxy Quest" for one of the great variations on this.
These being demonstration props for a bunch of technically-minded people, the second option was clearly the only viable one.
So the next question was how much I needed to research before I could make the controls realistic. Fortunately, these are rather peculiar props. Essentially they are techno-magic, and what it boils down to is that they are driven by game mechanics. I didn't need to figure out how a real-world device might detect a variety of radiological threats and display them; I needed to figure out the most efficient yet interesting button layout for an "I cast a 'Detect Radiation' spell on the room."
In case you hadn't figured it out, the game in question is The Morrow Project. The devices are the CBR kit and the Med Kit that are carried by individual Recon Team members.
Which means the practical functionality is well supported by the "I'm just a game mechanic" nature as described in the rules. They have much larger and more sophisticated devices to give them detailed information about CBW threats or to deal with medical problems. The sole significant function of these particular devices is to flash a light when there is a threat, and to administer a one-dose-fits-all cure when appropriate.
In the real world, a portable detector would need all sorts of ways to adjust and tune it, and there would be long lists of exceptions and false readings. In this case, although the boxes are vintage-1970s military olive-drab, what is inside comes From the Future. So we aren't saddled with the realities of test sources and calibration knobs and filters that need replacing. It just "works."
And that removed the need to actually learn anything about real-world radiological threats and the detectors thereof. All I needed to know was that a big light would light up when there was "bad stuff." The details were inconsequential, and the actual prop did not include any of them.
What largely remains is developing the construction method. I still haven't completely planned the plan. I've sketched possible control layouts, and researched possible components. What I believe is that at some point I will be able to create a scaled drawing. What I don't know yet is how far I will need to plan before I can order components; much depends on how accurate the dimensions need to be, and if I can get those accurate dimensions without the actual parts.
I am looking at using VFDs for the displays. The expanded description I am working from posits 7-segment displays, and in the period in which the external case is made, displays are moving away from LEDs (although the brighter green LEDs are used in a couple of places as a replacement for the old red ones), and old-style LCDs are becoming more common. VFDs are the cutting edge for "cool," and found mostly in applications where plenty of power is available (which is to say, rarely on hand-held devices).
But there are external constraints. Old-style LEDs are hard to find. They are also dim, and although this is a more accepting audience than the usual, there is always a degree to which you want to skew reality to better fit modern expectations. A simple red seven-segment, or LCD, doesn't have the "cool" factor of a blue-green glowing VFD.
On the other hand, the VFD requires more work to supply the right power and to interface with.
In ranking of simplicity, the ways of accomplishing the display are something like:
1) No display at all (aka just a blank faceplate).
2) Static display; an image with a light source behind it.
3) Modern LCD or OLED display. The latter in particular has potential for basically pretending to be some class of older display, although the pixels are liable to give it away. The prime difficulty with the former is that most modern LCDs are supertwist, designed to be used with integral backlighting. They are also almost entirely two-row types, plus the standard form factor is simply too long to fit on the box!
4) Color OLED or LCD, displaying image of an older display.
5) 7-segment LED constructed from modern modules.
6) VFD.
Since I don't really know enough to plan at this stage, I may have to break off getting a dimensioned faceplate design together and instead work on lighting a prototype display.
And that's about as far ahead as I can work. Any more planning is liable to be in error. So the plan of the plan is to stop planning at this point and start experimenting!
Tricks of the trade, discussion of design principles, and musings and rants about theater from a working theater technician/designer.
Monday, July 30, 2012
Run, Robot
Consider this post a re-visit of my "Theater Should not be a Hackerspace" rant.
I'm in the middle of the run of a very complicated show that was thrown together too fast on too small a budget, by a company that has fallen in to a pattern of doing that. The props and effects, some of the costumes and set dressing, and other basic technical elements are held together with gaff tape and hot glue.
My robot is one of these elements. Although I spent the money and did the research for a robust chassis and control linkage, what we had time for is a $40 R/C car chassis carrying the lightest extruded-foam body I could build on top of it. Even a fiberglass shell would be too heavy for the undersized motor -- and at that, I had to tighten the suspension with a few strategically-placed zip ties.
It has managed to survive running into scenery a few times and being kicked by actors more than once. It only failed to make an entrance once in the run -- due to the antenna on the transmitter being snapped off thus limiting control range to about two feet. That required a last-minute frantic repair with brass tube, epoxy, and a quickly-soldered wire to bridge the gap.
The lights also failed once -- in the same performance the projector douser failed -- and this is an exemplar of what I was talking about. There is a real difference between a "hacked" solution that will last for a quick shoot or the first day of a convention, and an engineered solution that will work reliably through a five-week run.
The best thing I can say about the douser is that it was already built. I had to rip off the flag and epoxy up a new one to fit the particular projector, and I had to modify the software to use an external switch instead of an external button, but that took under an hour. The rest of the servo housing and power supply and software and so forth was already built.
If nothing else, this proofed the concept of having ready-to-go solutions. My Duck Remote is the same; the XBee solution was already built and programmed and all it took to get to technical rehearsal was cramming the electronics into a different housing. (On the other hand, I did spend several days making a different software solution so as to simplify the hardware end -- and that only partially worked). But as robust as my devices are, they still fall short of the reliability I desire.
I don't at this point know if the douser fouled, overheated, or if the servo needs to be lubricated, but it failed once and the servo is visibly and audibly sticking at one point in its travel. Probably, if I simply swap out the servo for a new one it will make it to the end of the run without further failures, but that underlines the point. I simply cannot trust it to keep working for an indeterminate amount of time. And I lack knowledge of how it failed, so I can neither change it so it doesn't fail nor even predict the mean time to the next failure.
In the case of the robot, I do not know why the lights failed. In establishing communication with the onboard Arduino to interrogate it over the serial monitor, I reloaded the software. And it started working again and hasn't failed since. So I don't know what happened. The only suspicion is that somehow the software got corrupted. But it is also possible that it doesn't always boot up correctly, or that there is a loose connection that fixed itself while I was jiggling the unit to get access to the USB port.
Again, I don't know, so I don't have any way of either fixing it or predicting when it will break again.
On the gripping hand, of course, commercial devices fail all the time. We have a gobo rotator in the show which has failed in most performances. One of the bubble machines already quit. So for all the jury-rigged aspects of my robot, it has shown itself to be doing a more complex job in a more reliable way.
And of course all these devices are leveraging existing commercial modules. XBee modules and the Zigbee protocol and the underlying 802.15.4 standard. Servo motors, with gear trains and position sensors already integrated. Arduino, of course; that artist-friendly open-hardware, free software embedded computing solution. And even the lowly chain-store R/C car, building on decades of experience building playable remote-control toys.
As I implied earlier, the costumes are coming apart, set decorations are peeling off, props are breaking...my electronics are nowhere near the least reliable elements in the production. But in a larger perspective, all of these are also hacks. Making costume pieces with unusual and often untried materials, making props with re-purposed materials, and so forth. Theater is always about making things work outside of their usual context, and replacing what can't be afforded (or carried, or whatever) with something else.
Every kingly palace with marble floors and gilded throne is of course cunningly painted plywood. The jewels are resin-cast, the swords pot-metal, the armor vacuum-formed plastic reinforced with medical cast material (aka celastic). The rocks are made of expanded polystyrene foam.
The key lesson is that it isn't about the technology; it is about the process. Too much in a production like the one I'm currently mixing depends on guesswork about poorly-understood materials and processes, and risk-taking that these elements will function from night to night.
Theater does not need fewer hackers. What it needs is more engineers.
(It also needs to schedule the time to apply an engineering process. As an area set designer once put it, if whatever improvisation you've done makes it through dress rehearsal, the show will close with it. There is never time or energy or money to come back to something that works -- even if poorly -- and rebuild it. But engineering a solution requires a cycle of development and testing. The only way this really happens in theater is over the length of a season, or a career; what worked once is refined and improved for the next show down the road. What is lacking, though, is still the analytical approach to finding out the parameters of why it worked when it did and describing the envelope of where it won't work. To do that you need more than simple empiricism.)
Friday, July 20, 2012
Working Incrementally
I'm putting the "kill switch" into the robot's head so we can turn off its lights during the blackout. And that build is making me aware of process again.
The way to get in trouble is to try to build an entire project at one go. Then when you have problems, you don't know which component of the system they are in, or if they are synergistic between components. And you also risk having a lot of work invested in soldering up fifteen identical circuits the wrong way, or putting the case together only to have magic smoke come out when you first put on the power.
You will always have failures. You will try ideas that don't work. You will run into both errors and unexpected issues that require re-thinking and re-building. The trick, though, is to have as many of these failures as possible happen in such a place and such a way as to waste the least time and the fewest expensive components along the way.
The two basic techniques are modularization, and frequent testing.
A very important concept is proof-of-concept. The idea is that you want to, whenever possible and throughout the project, jury-rig a way to see if the thing will actually work. When you combine this with modularization, instead of trying to test the entire project, you find some element that can be cleanly separated, and make a test for that.
But this is getting way too vague and theoretical here. Let me be concrete with a recent example.
After a lot of waffling over the best way to rig this kill switch, I settled on some cheap 4-channel RF key fobs I have lying around, and a ULN2803 Darlington Array to do the heavy lifting. I already had an Arduino proto-shield wired up for the 4-channel receiver, so even though it offends me to stick an entire Arduino inside the robot just to switch some headlights on and off, doing it this way leverages a pre-built component that is already tested and working.
I'd never used the ULN2803 before. So what I did is put one together on protoboard and write a quick sketch to PWM an Arduino pin connected to it.
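For the record, the logic of a quick fade test like that boils down to a triangle-wave duty cycle. This is only a sketch of that logic, not the sketch I actually loaded; the function name and step scheme are my own, and on the hardware the returned value would be fed to analogWrite() on the pin driving one ULN2803 input:

```cpp
#include <cassert>

// Triangle-wave duty cycle for an LED fade test: ramps 0..255 and back.
// On the real hardware this value would go to analogWrite() on the PWM
// pin feeding one input of the ULN2803 Darlington array.
int fadeDuty(int tick) {
    int phase = tick % 512;  // 0..511 covers one full up-and-down cycle
    return (phase < 256) ? phase : 511 - phase;
}
```

Verifying the ramp as a pure function like this is itself the incremental approach in miniature: the math is checked before it ever touches the Darlington.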
This is important...I'm not testing the final version of the software, I'm not testing multiple channels, and it is nowhere near the robot. This is just a proof of concept for the ULN2803 LED driver.
That test passed. I drove over to the theater and picked up the robot (there's some other repairs I need to do before the weekend performance anyhow). Clipped the leads to the headlamp and using a bunch of alligator clips stretching from the work bench to the computer table, confirmed that the actual robot's LEDs also light correctly. This is testing that they are wired the way I thought I wired them.
The next incremental test, still working outside the robot by splicing into the head wiring where it leaves the neck, was to tap the internal power supply and confirm that the Darlington would switch that correctly. At this point I was feeling confident enough to solder up the first part of the Arduino proto-shield and try powering it from the batteries as well. That didn't work -- until I remembered this was a Diecimila and I had to switch the jumper manually from USB power to external power. And this is why we do it that way; the only changes in the circuit had been soldering on the power lugs and setting up the basic power busses on the proto-shield. So there were many fewer places to look at, and measure with the VOM, before I found my mistake.
The next step was checking the radio circuit. Not much point in doing all this wiring if I can't issue the radio commands! I re-purposed some software from my old RF-to-MIDI device, yanked out most of it, deleted the PWM test routine I'd been using to test LEDs, and stuck in a serial monitor. Instead of trying the entire circuit at one go, I was just using the built-in terminal mode of the Arduino IDE to check if the receiver was responding correctly.
It was. I modified the software to make it a toggle framework, and checked that via the serial print to see that the right values were being thrown up. Then I took that center value and built a basic switch/case routine to interpret it.
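In outline, the toggle framework plus switch/case looks something like this. The struct, channel names, and code values here are hypothetical stand-ins for whatever the receiver actually reports, not the real program:

```cpp
#include <cassert>

// Toggle framework sketch: each received key-fob code flips its channel,
// the way a latching switch would, so one button press turns the lights
// on and the next press turns them off.
struct Channels {
    bool state[4] = {false, false, false, false};

    void onCode(int code) {
        switch (code) {
            case 1: state[0] = !state[0]; break;  // e.g. headlights
            case 2: state[1] = !state[1]; break;  // e.g. eyes
            case 3: state[2] = !state[2]; break;  // spare
            case 4: state[3] = !state[3]; break;  // spare
            default: break;                       // unknown code: ignore
        }
    }
};
```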
Notice this isn't even touching the robot at this point. It is just checking to see if the RF receiver is detecting a signal. The mistake would be in soldering down the ULN2803, splicing it into the robot's power supply, writing the eye blink routines, then trying to diagnose the RF link within that environment. The problem might be RF, or it might be a cold solder joint, or an error in the PWM routines.
With the radio working, I strung some long test cables again. Note I'm still splicing into the head directly at this point; I haven't touched the ribbon cable that leads into the robot's trunk. The LEDs all worked, as did my program. I was running out of time, though, otherwise this would have been the time (with the Arduino outside of the robot and easy to access) to write more complex eye routines. After all, I'm using three PWM outputs, one for each color, so all sorts of animations are possible.
I chose, however, to splice both eyes together at the neck. I could always break them out later, but I didn't have time to run them both down the ribbon cable -- even though I had up to eight switching channels available on the ULN2803.
I tested each eye splice before heat-shrinking the connection. Pulled the ribbon out of the base of the robot and soldered it in, and tested with the Arduino still outside. The pink circuit was loose, and I pulled the shield off and re-touched the solder joints. Then, and only then, did I stick the Arduino inside the torso.
Divide and conquer. It's the way engineers work. There's plenty of time to discover all those wonderful synergistic problems when you've got the bugs hammered out of all the individual parts.
I did waste the time this week building "square candies that look round." The final form factor is small candies each sitting on top of an individual micro-servo. It was important to me to have them each behave as individuals, thus there are a lot of arrays in the program tracking individual behavior.
I had some problems in the software. I had to comment out the parts that incremented the servo position over time because that wasn't working. So the prototype version -- which was robust enough to take to the theater and put in the lobby for a test -- had the servos running at full speed. (And it only had one candy -- the rest was just duplication, and I saw no need to have the other candies for this test.)
Turns out the program cycles slowly enough that there was no need to increment the servo position fractionally -- which was the way the BlinkM software I'd re-purposed was doing it. So I wrote a new increment/decrement subroutine from scratch that adds an integer (generally between 6 and 12) to the servo position array before updating the PWM with a myServo.write. It is an annoying bit of kludge code and I'm sure there's a better way to do it, but it seems to work now.
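Stripped of the arrays, the idea is something like this single-servo version. The function name and the snap-to-target clamp are my simplification, not the actual code -- the clamp is just one way to keep a whole-number step from overshooting and oscillating around the target:

```cpp
#include <cassert>
#include <cstdlib>

// Move a servo position toward its target by a whole-number step per
// loop pass (the real routine uses steps of roughly 6 to 12). If the
// remaining distance is within one step, snap to the target so the
// servo doesn't overshoot and hunt back and forth.
int stepToward(int position, int target, int step) {
    if (abs(target - position) <= step) return target;
    return position + (target > position ? step : -step);
}
```

The returned value would then go to a myServo.write() call each time through the loop.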
Each candy has one of four states (tracked with a status flag) as well as two incremented counters called "Bored" and "Excited" (well, since these are array variables, actually "cubeBored[select]", but anyhow). The states are rest, searching, and two traverse states.
All of the behavior is triggered by a single Sharp IR proximity detector which was in my parts box. When something gets close enough to the Sharp, the candies go from "rest" to "move to face" -- with a slight randomness to how fast they move so, with luck, when I have all four candies installed they will appear to be moving as individuals.
The call to the Sharp is actually a subroutine, because there is a little noise in those detectors and I am leaving room to write a simple "debounce" snippet there.
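The snippet I'm leaving room for would be something like a small running average -- a single noise spike then can't trigger the candies on its own. This is just a sketch of one possible approach, not what's installed:

```cpp
#include <cassert>

// Smooth the noisy Sharp IR readings by averaging the last N raw values.
// One spurious spike gets diluted across the window instead of tripping
// the proximity threshold by itself. Window size is an assumption.
const int N = 4;
int history[N] = {0, 0, 0, 0};
int idx = 0;

int smoothedReading(int raw) {
    history[idx] = raw;        // overwrite the oldest sample
    idx = (idx + 1) % N;
    int sum = 0;
    for (int i = 0; i < N; i++) sum += history[i];
    return sum / N;
}
```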
When the candies arrive at full face they begin a random panning movement, which lasts until a random number is smaller than the incremented "bored now" counter. Then they enter a "move back to rest" state and we reset.
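Boiled down to one candy, with each traverse collapsed to a single step, the state machine is roughly this shape. Names and thresholds are a rough sketch, not the actual array-based code:

```cpp
#include <cassert>

// One candy's four states: rest, traverse toward the viewer, random
// panning ("searching"), and traverse back to rest.
enum State { REST, MOVE_TO_FACE, SEARCH, MOVE_TO_REST };

struct Candy {
    State state = REST;
    int bored = 0;

    // proximity: true when the Sharp IR sees something close enough.
    // roll: a random number; searching ends once it drops below the
    // accumulated "bored now" counter.
    void update(bool proximity, int roll) {
        switch (state) {
            case REST:
                if (proximity) { state = MOVE_TO_FACE; bored = 0; }
                break;
            case MOVE_TO_FACE:
                state = SEARCH;       // traverse collapsed to one step
                break;
            case SEARCH:
                bored++;              // panning; boredom accumulates
                if (roll < bored) state = MOVE_TO_REST;
                break;
            case MOVE_TO_REST:
                state = REST;         // traverse collapsed to one step
                break;
        }
    }
};
```

The longer a candy has been searching, the larger "bored" grows and the more likely any given random roll is to fall under it -- which is what makes each candy give up at its own moment.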
What I haven't done is build or buy a nice candy box to display them on. But it turns out there is very little room in the lobby, and unless I can really impress someone with them they aren't going to get displayed. Ah, well. It was worth it to get more experience in servo subroutine programming.
The way to get in trouble is to try to build an entire project at one go. Then when you have problems, you don't know which component of the system they are in, or if they are synergistic between components. And you also risk having a lot of work invested in soldering up fifteen identical circuits the wrong way, or putting the case together only to have magic smoke come out when you first put on the power.
You will always have failures. You will try ideas that don't work. You will run into both errors and unexpected issues that require re-think and re-building. The trick, though, is to have as many of these failures possible happen in such a place and such a way as to waste the least time and the fewest expensive components on the way.
The two basic techniques are modularization, and frequent testing.
A very important concept is proof-of-concept. The idea is that you want to, whenever possible and throughout the project, jury-rig a way to see if the thing will actually work. When you combine this with modularization, instead of trying to test the entire project, you find some element that can be cleanly separated, and make a test for that.
But this is getting way too vague and theoretical here. Let me be concrete with a recent example.
After a lot of waffling over the best way to rig this kill switch, I settled on some cheap 4-channel RF key fobs I have lying around, and a ULN2803 Darlington Array to do the heavy lifting. I already had an Arduino proto-shield wired up for the 4-channel receiver, so even though it offends me to stick an entire Arduino inside the robot just to switch some headlights on and off, doing it this way leverages a pre-built component that is already tested and working.
I'd never used the ULN2803 before. So what I did, is put one together on protoboard and write a quickly sketch to PWM an Arduino pin connected to it.
This is important...I'm not testing the final version software, I'm not testing multiple channels, and it is no-where near the robot. This is just a proof of concept for the ULN2803 LED driver.
That test passed. I drove over to the theater and picked up the robot (there's some other repairs I need to do before the weekend performance anyhow). Clipped the leads to the headlamp and using a bunch of alligator clips stretching from the work bench to the computer table, confirmed that the actual robot's LEDs also light correctly. This is testing that they are wired the way I thought I wired them.
The next incremental test is, still working outside the robot by splicing into the head wiring where it leaves the neck, was to tap the internal power supply and confirm that the Darlington will switch that correctly. At this point I was feeling confident enough to solder up the first part of the Arduino proto-shield and try powering it from the batteries as well. That didn't work -- until I remembered this was a Deicimilla and I had to switch the jumper manually from USB power to external power. And this is why we do it that way; the only changes in the circuit had been soldering on the power lugs and setting up the basic power busses on the proto-shield. So there were many fewer places to look at, and measure with the VOM, before I found my mistake.
Next step was checking the radio circuit Not much point in doing all this wiring if I can't issue the radio commands! I re-purposed some software from my old RF-to-MIDI device, yanked out most of it, deleted the PWM test routine I'd been using to test LEDs, and stuck in a serial monitor. Instead of trying the entire circuit at one go, I was just using the built-in terminal mode of the Aduino IDE to check if the receiver was responding correctly.
It was. I modified the software to make it a toggle framework, and checked that via the serial print to see that the right values were being thrown up. Then I took that center value and built a basic switch/case routine to interpret it.
Notice this isn't even touching the robot at this point. It is just checking to see if the RF receiver is detecting a signal. The mistake would be in soldering down the ULN2803, splicing it into the robot's power supply, writing the eye blink routines, then trying to diagnose the RF link within that environment. The problem might be RF, or it might be a cold solder joint, or an error in the PWM routines.
With the radio working, I strung some long test cables again. Note I'm still splicing into the head directly at this point; I haven't touched the ribbon cable that leads into the robot's trunk. The LEDs all worked, as did my program. I was running out of time, though, otherwise this would have been the time (with the Arduino outside of the robot and easy to access) to write more complex eye routines. After all, I'm using three PWM outputs, one for each color, so all sorts of animations are possible.
I chose, however, to splice both eyes together at the neck. I could always break them out later, but I didn't have time to run them both down the ribbon cable -- even though I had up to eight switching channels available on the ULN2803.
I tested each eye splice before heat-shrinking the connection. Pulled the ribbon out of the base of the robot and soldered it in, and tested with the Arduino still outside. The pink circuit was loose, and I pulled the shield off and re-touched the solder joints. Then, and only then, did I stick the Arduino inside the torso.
Divide and conquer. It's the way engineers work. There's plenty of time to discover all those wonderful synergistic problems when you've got the bugs hammered out of all the individual parts.
I did waste the time this week building "square candies that look round." The final form factor is small candies each sitting on top of an individual micro-servo. It was important to me to have them each behave as individuals, thus there are a lot of arrays in the program tracking individual behavior.
I had some problems in the software. I had to comment out the parts that incremented the servo position over time because that wasn't working. So the prototype version -- the one robust enough to take to the theater and put in the lobby for a test -- had the servos running at full speed. (And it only had one candy; the rest would be mere duplication, and I saw no need for the other candies in this test.)
Turns out the program cycles slowly enough that there was no need to increment servo position fractionally -- which was the way the BlinkM software I'd re-purposed was doing it. So I wrote a new increment/decrement subroutine from scratch that adds an integer (generally between 6 and 12) to the servo position array before updating the PWM with myServo.write(). It is an annoying bit of kludge code and I'm sure there's a better way to do it, but it seems to work now.
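The core of an increment/decrement routine like that can be sketched as a step-toward-target function with clamping, so the servo never overshoots its destination. Names and step sizes here are illustrative:

```cpp
#include <cassert>

// Step an integer amount toward a target position, clamping so we
// never overshoot. The caller then does myServo.write(current).
int stepToward(int current, int target, int step) {
    if (current < target) {
        current += step;
        if (current > target) current = target;  // clamp overshoot
    } else if (current > target) {
        current -= step;
        if (current < target) current = target;
    }
    return current;
}
```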
Each candy has one of four states (tracked with a status flag) as well as two incremented counters called "Bored" and "Excited" (well, since these are array variables, actually "cubeBored[select]" -- but anyhow!). The states are rest, searching, and two traverse states.
All of the behavior is triggered by a single Sharp IR proximity detector which was in my parts box. When something gets close enough to the Sharp, the candies go from "rest" to "move to face" -- with a slight randomness to how fast they move so, with luck, when I have all four candies installed they will appear to be moving as individuals.
The call to the Sharp is actually a subroutine, because there is a little noise in those detectors and I am leaving room to write a simple "debounce" snippet there.
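One simple debounce for a noisy Sharp reading -- a sketch, not the code in my subroutine -- is to only report a detection after several consecutive over-threshold samples. The threshold and count here are made-up values:

```cpp
#include <cassert>

const int THRESHOLD = 300;   // raw ADC value meaning "something close"
const int NEEDED    = 3;     // consecutive hits required

// Returns true only once NEEDED consecutive samples exceed THRESHOLD.
bool sharpDetect(int raw, int &hits) {
    if (raw > THRESHOLD) {
        if (++hits >= NEEDED) { hits = NEEDED; return true; }
    } else {
        hits = 0;            // a single low sample resets the count
    }
    return false;
}

// Helper: feed the same sample N times, return the final verdict.
bool detectAfter(int raw, int times) {
    int hits = 0; bool out = false;
    while (times--) out = sharpDetect(raw, hits);
    return out;
}
```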
When the candies arrive at full face they begin a random panning movement, which lasts until a random number is smaller than the incremented "bored now" counter. Then they enter a "move back to rest" state and we reset.
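The per-candy behavior above amounts to a little state machine: rest, traverse to face, searching (random pan), traverse back to rest, with the "bored now" counter making a give-up draw more likely on every pass. This is a hedged sketch of that logic with illustrative thresholds, not my actual arrays:

```cpp
#include <cassert>
#include <cstdlib>

enum State { REST, TO_FACE, SEARCHING, TO_REST };

// One update of a candy's state. "bored" increments every searching
// pass; the candy leaves when a random draw falls below the counter.
State nextState(State s, bool proximity, int &bored) {
    switch (s) {
        case REST:
            bored = 0;
            return proximity ? TO_FACE : REST;
        case TO_FACE:
            return SEARCHING;           // assume the traverse completed
        case SEARCHING:
            ++bored;                    // more bored every pass
            if ((rand() % 100) < bored) return TO_REST;
            return SEARCHING;
        case TO_REST:
            return REST;                // assume the traverse completed
    }
    return REST;
}

// Helper: run SEARCHING until it gives up; returns passes taken.
int passesUntilBored(unsigned seed) {
    srand(seed);
    State s = SEARCHING; int bored = 0, passes = 0;
    while (s == SEARCHING && passes < 1000) { s = nextState(s, true, bored); ++passes; }
    return passes;
}
```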
What I haven't done is built or bought a nice candy box to display them on. But it turns out there is very little room in the lobby and unless I can really impress someone with them they aren't going to get displayed. Ah, well. It was worth it to get more experience in servo subroutine programming.
Wednesday, July 18, 2012
The Man Who Mistook His Mic for a Hat Stand
I'm using a mic for a hatstand. An SM58 in a straight round-base stand. Works pretty well. The PA just came back from a gig and there's no room in the closet to store all of it.
Doesn't help that the robot worktable is still set up and I have carving foam, paint, plastic bits, and of course electronics strewn around. I really need to move some projects from "in progress" to "done and can be put on a shelf out of the way."
Lacking the time and space for that, I'm putting more things in boxes today. So at least the half-done project has all the essential parts collected in one place. One such (large) box is filled with a vacuum-formed Lewis Gun, which spills over into another box that also contains grenade spoons and other parts for some prop smoke grenades.
On the table is the camera head we didn't use for the robot. I'm half-tempted to keep it. And instead of pulling out the mini servo, program an AVR to do random moves.
Actually, if I was doing random robotics that had any connection to the show I just opened, I'd be putting proximity/movement sensors in a basic servo to build a couple of what the BEAM community calls Head-type Squirmers. And then I'd put a big foam-core cube on the servo and paint it up like a candy. Which is to say: I'd be building some "Square candies that look 'round."
It is awfully tempting. I have all the hardware here. If I skipped active tracking and just had pre-programmed behavior linked to a simple IR proximity sensor.....
Argh! My goal for today was actually to log hours in element repair. I'm getting paid for that, at least. And I need some spares for the weekend. The other goal before the next weekend of the show begins is upgrades to the robot. And with my Vex apparently dead I was going to leverage the two new XBee nodes I just picked up.
The second test was going to be -- hopefully still will be -- trying out direct mode for control of a servo. I've seen it done as a demonstration at Maker Faire. Apparently the PWM output of the analog pins is close enough to drive an un-modified servo. And all you need for the transmitter side is a potentiometer. Of course, adding a couple trim resistors/pots would be smart.
I still prefer -- especially for something as fragile as my robot -- to set the servo limits in software. And this also means that if you are transmitting power on/power off signals to the lights and wireless camera (which draws power from the 7.2 V NiCads in the chassis), you can transmit a single "toggle" command instead of having to depend on continuous transmission of either the "on" command or a "kill" function. Plus, the eyes were wired with six high-power LEDs each, in three colors, and with a micro somewhere in the signal chain you could command these to color mix or to chase.
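The toggle idea in miniature: the transmitter sends one short command per button press, and the micro in the robot flips the state itself, so the link never has to stream a continuous "on." The command bytes here are made up for illustration:

```cpp
#include <cassert>

const char TOGGLE_CAMERA = 'c';  // hypothetical command bytes
const char TOGGLE_LIGHTS = 'l';

struct RobotState { bool camera; bool lights; };

void handleCommand(char cmd, RobotState &st) {
    if (cmd == TOGGLE_CAMERA) st.camera = !st.camera;  // flip, don't set
    if (cmd == TOGGLE_LIGHTS) st.lights = !st.lights;
    // anything else: ignore -- a lost packet is just one missed press
}
```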
All I really hope to get done by this weekend's shows is adding a camera and light kill switch, though. Which I can probably do with no more than my existing XBee nodes and the "Really" (aka "Relay") board I purchased from one of the Kowloon-based electronics suppliers via eBay.
The coolest way to do it is, of course, with full serial. Although I/O line passing on the XBee nodes is near-transparent and simple to set up, establishing a serial link gives a more robust link and a practically unlimited command set. Plus, of course, having client-side intelligence means you can program the thing to operate autonomously between commands. On the server side, set up an Arduino/AVR and wire that to buttons and switches. Or write an application in Processing and put in virtual buttons and switches/monitor keyboard and mouse.
This is more of the sort of thing I taught myself about "naked" AVRs for, so I didn't have to waste a whole $28 Arduino on an embedded application. But I'm not yet fluent and practiced enough with them to quickly write some serial data routines for the built-in UART and upload them to a 45-cent chip. It might only be basic C, but straight C is worlds away from Wiring (aka Processing/Arduino) wrapped inside that handy IDE. When you are programming for micros, you lose much of that layer of abstraction between you and the metal -- "Serial.print("hello, world");" begins to look a lot more like "PORTB |= (1 << 2);"
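For anyone who hasn't gone down to that level: on a naked AVR the port really is just a byte, and turning one pin on or off is plain bit arithmetic. Here PORTB is an ordinary variable standing in for the hardware register, so this compiles anywhere:

```cpp
#include <cassert>
#include <cstdint>

uint8_t PORTB = 0;  // stand-in for the AVR I/O register

void pinHigh(int bit) { PORTB |=  (1u << bit); }  // set one bit
void pinLow(int bit)  { PORTB &= ~(1u << bit); }  // clear one bit
```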
Well. Maybe if I can get stuff boxed and the table cleared and XBees show they have the right firmware and the servo responds...I might hook up the IR proximity detector I have and see how fast I could make a "square candy that looks 'round."
Tuesday, July 17, 2012
Vexing
Ah, the joy of embedded computing -- when you are debugging you can never be sure if it is really a software error or if it is a hardware problem instead.
The two worlds blend into each other, too. Often the simplest "test circuit" is dashing off a quick program that will interrogate the inputs or run through a preset motion on a servo. So you find yourself working at both ends at once; with breadboard and alligator clips and double-stick tape on the desktop, and a couple feet away, similar commented-out and patched together boilerplate code running on a laptop.
I have three gadgets in performance right now. My XBee, in a new housing and with a new software interpreter, is in a children's show. Amusingly enough, the Easy Button it used to be in is in another show, sans electronics. My MIDI-controlled projector douser is in its first show, and is a visible lighting effect. The MIDI circuit is being bypassed, though, making it basically the world's most complicated solenoid. Good thing I included a quarter-inch jack for direct control (although I had to tweak the software to use it for this show). And the R/C robot with wireless camera is also in a show.
And it needs work. The transmitter is barely strong enough. The wheels and motors are so weak I'm scared it is going to stall out in the middle of a performance. And I could really use remote switches for the lights and camera. The latter, at least, I could hack up pretty fast with one of the new XBee nodes I just ordered. Pretty much a button and a Darlington transistor and that old friendly pin echo mode would do it.
Which I may need to. I spent today trying to get the Vex transmitter I purchased to communicate with an Arduino. At this point about all I am sure of is that the receiver is getting power, and seems to put out at least one pulse distinct enough to trigger the Arduino's interrupt. Other than that, no joy.
For the first time in the two years since I got rid of it, I could use that old oscilloscope. I can't confirm that the receiver is actually working correctly, or seeing the transmitter.
The alternative (outside of ordering a second one and seeing if that one works better) is to delve a lot deeper into the Arduino's pulse-reading capabilities. Basically, write a series of sketches to confirm that the receiver is actually sending something like a valid protocol. And then either go back through the code I've been trying to use, or write new code from scratch.
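A first sanity-check sketch along those lines: hobby R/C channel pulses should land roughly in the standard 1000-2000 microsecond servo range, so capture a buffer of pulse widths and count how many look plausible. If nearly all do, the receiver is probably talking; if it's noise, very few will. The bounds below are the standard range with a little slack, not anything Vex-specific:

```cpp
#include <cassert>

// Is one captured pulse width in the plausible servo-pulse range?
bool looksLikeServoPulse(long micros) {
    return micros >= 800 && micros <= 2200;
}

// Count plausible pulses in a capture buffer.
int validPulses(const long *widths, int n) {
    int good = 0;
    for (int i = 0; i < n; ++i)
        if (looksLikeServoPulse(widths[i])) ++good;
    return good;
}
```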
And I still don't know if the other end -- interfacing with the Futaba ESC and the steering servo, or even interfacing with the LED controller darlingtons -- will be as easy as hoped or will bring yet another software/hardware debug.
There isn't THAT much time this week. I have to re-do some sound cues and repair a whole bunch of microphone elements, plus I should really make some new mic belts. And I have a multi-track mixdown to do. And, after all that is done, it would be very, very nice to do some work I'll actually get PAID for!
Looks like it was a hardware error after all -- and not on my side. I hooked up a speaker and I don't seem to be getting regular tone bursts. I hooked it up to the Vex CPU and the Receiver Status light doesn't light. Still not as good as hooking up an oscilloscope, but I'm willing at this point to drop the $40 on a replacement receiver. Except that I am flat broke again (the current design is paying in installments) so I can't do that until the end of the month. By which point the show will be almost over.
Sunday, July 15, 2012
Me, Robot
So the robot is built, and made it through opening night. Now if it can just make it through opening weekend I'll be able to take it home, reinforce and improve it.
Everyone loves the design, and it is very much design by circumstance.
Original concept was a camera bot. Given the wheelbase available in our price range, I was pretty much stuck with a robot not much more than a foot long, which meant it would have a small camera stuck on top of a long neck. I built a mock-up turret camera with a servo for the turret and a fairly poor pan-tilt mechanism with another pair of servos.
Then there was a request for a second robot, a "girl" robot, which would chase the camera bot on stage and flirt with it. Obviously someone had been watching "Wall-E" too much.
We were getting too close to tech for comfort and I wasn't getting assurances that the costs were going to be reimbursed. So I picked up a forty dollar remote control car from a mass retailer. And since it seemed a simpler proof-of-concept, I started building the "girl" robot on top of it -- mostly to see if a $40 set of plastic wheels would support the weight of a prop large enough to "read" on stage, or whether I had to gamble with a more expensive rolling chassis.
Framed out a dome-shaped body with foam-core and that ran. Began adding extruded polystyrene chunks and carved up a smooth dome from that. Still ran, although it was sagging a little on the wheels at this point. The foam was delicate enough that I altered the design to include a bumper.
So I obviously needed the performance of the Tamiya rolling chassis I had my eyes on. Ordered that, and a Vex controller, and -- when I realized the chassis shipped without one -- a Futaba ESC, and a new medium servo, and a few more parts besides. The draft for the "guy" robot was now a boxy body (foam-core) with an industrial look, and the turret camera on a smaller version of those hydraulic pillars used on ENG vehicles. I bought some bumper material, sheet styrene, and a vent cover as part of the intended dressing.
The request came that one of the robots really needed a binocular head (more "Wall-E"). So that became folded into the girl robot; the guy would have the turret camera, the girl would have a smoother head that didn't look so camera like.
Made a progress report at a meeting and they cut the guy robot.
So now the girl was the primary camera carry, but the binocular head was half-built by this point. And we no longer needed to distinguish her so much, so she lost the pink bow and eyelashes and stayed with the smooth "EVA" white paint job.
And the only thing that lingers from the expensive exploration is that her choreography keeps getting expanded. She has a long "flirt" scene with an actor, and her own bow at curtain call. And the cheap plastic wheels and remote control are barely making it. So over this next week I intend to upgrade to the Tamiya, and maybe stick some servos in her as well as radio control of her light-up eyes.
Made it to opening night with the sound effects as well. A lot of things are not right and some are still not ready, and I'm going to be doing some serious reflection about how I need to change my design process. This is not the first show in which sound has had to play pick-up, giving me too little time left to do the work.
My paradigm has been that sound fits into the existing environment. That as a mixer, and as an effects designer, I work with (or around) the blend of voices and band, and the timing and, well, mood of action, choreography, and set movement. If I have a cue about a drawbridge lowering, I will wait on the final version until we find out how long the on-set drawbridge takes. If I have a piece of underscore music, I wait until the actor has found the way he wishes to do that scene and has settled on his timing.
This means, unfortunately, that when choreography is late, when set is late, when cuts and changes are made late, I need to re-think the sounds. In some cases, when the show isn't blocked yet and the actors aren't even off book, I can't even start building certain cues.
James Horner talks about having a similar experience on scoring "Aliens." He arrived in England six weeks before opening expecting to find a locked picture. He had the shape, the ideas, motifs in his head, basic arrangements, etc. But he couldn't start composing the actual running score until he had an actual film to time it to. And he needed not just enough time for him to write, but for the copyists to do the breakouts and get the music on the stands, for the LSO to rehearse and be recorded, and of course for the poor dubbing mixers (the editing team always suffers from this effect) to get it cut into the picture.
What I need, unfortunately, is a way to have cues that aren't right, but will do -- and have them built early on. I'm not sure what to do about new cues that get added at the last minute. In the particular show I just opened, there were two DANCE pieces that I didn't hear about until we were two weeks out. I spent the bulk of my available build time on those, as they were prioritized WELL over phone rings and toilet flushes.
I think that part of the problem is better addressed by making sure the clients better understand how long these things take. That making two minutes of dance music is not something you toss off in a single evening. But because of these prioritized (and time-consuming) cues, and another show that ran over their allotment considerably, I had even less time than the two brief weeks I was given. And scenery changes and even basic scene blocking was still happening two days before opening -- when I had no more schedule-able time to build sound cues.
Another thing that will help is breaking out the effects design/reinforcement design again. To go back into partnership so someone else is worrying about effects at the time in which band monitors and wireless mics become the necessary priority.
But even this, even all this, doesn't fix the basic problem. And that is how to pre-load kinds of things that shouldn't be pre-loaded.
I have a lot more thinking to do about this show and what I can learn from it to do differently, but one other thing stands out from the experience, and it is exactly the wrong lesson. And that is that the artistic shape -- the mood of the show, the palette, the kinds of sounds, the very approach -- wasn't clear to me until a week out. If for some reason I had sat down with the script two months out and created all the cues then, they would have been wrong and I would have had to cut most of them.
The remaining question there is if this is still a net gain for the show. Having even one cue that is right is one less cue to be working on during the crunch. But that is based on an assumption of, basically, designing the entire show "on spec." And on-spec work is the last thing you want to do if you are actually hoping to make a living as a designer. Facing not just the chance but the probability that most of the hours you put in will be wasted is NOT a good way to have rent in hand at the end of the month!
Thursday, July 5, 2012
Duck!
My third "Duck" show in a row. So I brought in my Wireless Easy Button, and they wanted it in the show. It was fairly easy to cram the XBee into the prop remote (soldering to the board to pick up switch traces was less easy, and the 1-watt Luxeon I put in the front is not nearly as bright as I hoped -- that's the flaw in running it on barely more volts than its voltage drop!)
However. Instead of using one of my computers, they are using the Stage Manager's computer for sound cue playback. And that means they are using the free version of QLab -- without the money to pay for the MIDI license.
Fortunately, although the Java classes for MIDI are basically screwed on Mac OS and will be for the foreseeable future (heck...it isn't as if Java itself has a future on the Mac, with all the effort Apple has been making to have everyone dump cross-platform development platforms and write exclusively in Cocoa)...anyhow, although MIDI is a pain, basic sound file playback isn't bad. So over the Fourth of July, I sat home and wrote and compiled a stand-alone, double-clickable app that looks for a serial-over-USB port, and when it detects activity at that port it plays the sound sample out the default Core Audio device.
It took me a chunk of today to figure out the other trick: when transporting it to a different OS/platform, bring the entire sketchbook folder, open Processing, and re-compile from inside that folder. Between Processing and the Mac OS, I was getting a lot of cryptic error messages instead of a simple "library not found" or "wrong library," and this solved the issue because saving out the entire project this way bundles a copy of ALL of the libraries you import.
And here it is on my not-quite-as-ancient G5:
The rubber duck is providing a housing for an XBee Explorer USB, which links the receiving XBee node to the laptop's USB port. The app window displays the selected port and flashes green when the XBee receives a signal. The hacked remote is under the duck.
The app is total kludge code, of course. Among other things, I'm not even trying to parse the serial message from the XBee (my mistake on a previous implementation). Instead I'm simply detecting if ANY serial message is present, and then as soon as I've confirmed that, I wipe the serial buffer!
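The logic is simple enough to sketch. This is not the actual app (which uses Processing's Serial library and Core Audio playback); it's a minimal stand-in with the port and the cue-player stubbed out, just to show the detect-anything-then-wipe pattern:

```java
import java.util.ArrayDeque;

// Minimal stand-in for the real app: any serial traffic at all fires the
// cue once, then the buffer is wiped so stray bytes can't re-trigger it.
public class SerialTrigger {
    private final ArrayDeque<Byte> buffer = new ArrayDeque<>(); // fake serial buffer
    private int cuesFired = 0;

    // Called whenever bytes arrive from the (stubbed) serial port.
    public void onSerialData(byte[] incoming) {
        for (byte b : incoming) buffer.add(b);
        if (!buffer.isEmpty()) {
            playCue();      // in the real app: play the sample on the default device
            buffer.clear(); // wipe everything -- we never parse the message
        }
    }

    private void playCue() { cuesFired++; } // stub for audio playback

    public int cuesFired() { return cuesFired; }

    public static void main(String[] args) {
        SerialTrigger t = new SerialTrigger();
        t.onSerialData(new byte[] {0x42, 0x0D}); // a burst from the XBee: one cue
        t.onSerialData(new byte[] {});           // nothing arrives: no cue
        System.out.println("cues fired: " + t.cuesFired()); // prints "cues fired: 1"
    }
}
```

The appeal of the trick is exactly that it doesn't care what the XBee actually sent: garbled frames, partial packets, and echo all trigger the cue identically, and clearing the buffer afterward keeps leftovers from firing it twice.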
(Should have thought of that trick long ago.)
And that's all I have time for. I have some six minutes of music to compose before the weekend, and I need to learn how to run my new shareware software synthesizer to do it!
Labels: Java, Processing, programming, sound, theater, wireless, XBee