
Thursday, January 18, 2018

Instrumentality

Way back when, I was writing music using MIDI: what we now call "virtual instruments" or "software synthesis," but which at the time lived largely in physical devices, often rack-mount ROMplers (a samPLER using Read-Only Memory chips to store the waveform data).

And my thought was that I should really learn to play a few of the instruments I was simulating: to get an understanding of the language of each instrument, how it was normally played, what was idiomatic and what was difficult, and thus produce more realistic MIDI compositions.

Some years later I was mixing bands and pit orchestras and that thought came around again in slightly different clothes: I should learn to play an instrument (other than keyboard) so I could understand and gain sympathy for what the musicians I was working with were going through, and learn how to better support them.

Forward several more years, and what had changed was that I now had funds. A steady day job -- which also meant my free time came in a form that made daily practice sessions possible with the instruments I could now afford. I wasn't mixing shows any more, and my own (MIDI-ish) music had moved in different directions, so what was left was mostly that largely inchoate desire to actually try a brass instrument, or even a violin, and find out if I could actually play it.

In the back of my mind was a new thought: that I could continue to compose with virtual instruments, but "pad out" the tracks with live recordings (the same way I was already using found sounds, samples, and other sonic raw material).

After about eighteen months of learning on what has turned into a growing collection of musical instruments I started on the first composition that would be designed from the start around recorded parts. And what happened? On closer examination that particular piece had the peculiarity that I could -- technically -- perform every part on it. There would be no MIDI, no synthesis, no found sounds. All would be actual musical performance.

The most surprising part is that it doesn't sound that bad.




(The other hilarious thing is that almost none of my new collection of instruments figured into it. Mostly, it became possible because back in my budget days the instruments I could afford were recorders.)


Sunday, December 17, 2017

A Little Night (Vision) Music

The Khajiit piece was a really poor choice for my first experiment in recording.

My vision has been to largely create with virtual instruments (aka MIDI) with recordings of real instruments folded in where possible. That was one of my goals in learning violin and, yes, penny whistle. The best thing I can say about the Khajiit piece is, due to the peculiarities of the arrangement, I can perform all the parts on physical instruments.


This is also one of the downsides. It depends on my performance on every instrument above. There's no place I can use a keyboard or a drum track to give more of a gloss.

I would have done better with a jazz piece. See, this is almost a tone poem, using instruments coloristically. And that makes the parts very hard to perform. The thing that a more standard setting gives you is a well-defined rhythm and well-defined harmonic structure. Having that bass and drums behind you pulls you along in the right meter and on to the right pitches. The Khajiit parts, instead, sort of float out there -- as witnessed by the fact that I found it as easy, if not easier, to record some of them in isolation to nothing but a metronome track.


Most of this has been using the extremely basic Behringer U-control USB interface that came with my MIDI keyboard, with an AudioBuddy pre-amp and phantom power unit in front of it. I tried recording a couple of the violin tracks at work; I brought a Zoom recorder with me to the shop, and dialed up a metronome application on the iPhone.

Everything gets assembled in Reaper:


That's barely a quarter of the tracks there. Another disadvantage to this piece; it is all about changing tone colors, meaning I'm basically playing in little more than a couple bars at a time. There's not really a chance to get into the flow of the piece and the performance I'm trying to contribute to the mix. Just do my best to stay within the tempo and hit the right notes.

At the moment I'm at ukulele (my $40 Rogue) standing in for lute; plucked, slow-strummed, and fast-strummed with ras accents. There may be more. Violin (my student-model Pfetchner) in sustained lines and in some improvised harmonics for a spooky effect. Bodhran (my Pakistani-made 18" tune-able) with tipper, rolls, fingertip, scratching and brushing, and stick accents. The wooden soprano recorder I've had since childhood and a Yamaha ABS alto. Crumhorn (a sadly out of pitch Susato in brown ABS), and shawm (a bombarde from Lark in the Morning at Fisherman's Wharf, sporting a badly fit oboe reed these days).

So, back to working method. I imported the original version of the song into Reaper as track one. Then went through my libraries to find virtual instruments (instruments I could play from the MIDI keyboard) with similar sounds to those physical instruments I would be recording.

The opening riff was just a matter of using my ear and matching what was on the original recording. The "theme" was more difficult; as the original was spoken-sung, I had to go into notation mode, type in the lyrics, and line those up in time with the original recording. Then I could work out a melody track that sort-of echoed the speech patterns of the original, and stayed somewhere in range of the chord structure.

Since there are sections of multi-part harmony I worked that out here, too. Actually, I disliked the violin harmony and when I was at the shop recording those parts I took out some music paper and worked out new voice leading there. A not-small advantage of this technique is I can play the mock-up of the part back and learn the line by ear from there (I'm still not much of a sight-reader).



So call it a noble experiment. I'm still nowhere near ready to embark on the big Tomb Raider piece I have in mind. So I'm hunting around now for something jazzy. Something with a more straight-forward rhythm and harmonic progression I can really feel like I'm jamming to when I record in the parts. And, yeah...maybe something with a trumpet part or two.




Sunday, May 22, 2016

The Internet of (Theater) Things

I was at Maker Faire this weekend, and like so many other parts of the inter-tubes it was all IOT, IOT, IOT. And I'm wondering if it is time to rethink the usual objection to using Wifi in a performance context.

The old assumptions were that wifi wasn't reliable enough for performance. But then, we used to assume computers weren't reliable enough. I have seen the pleasant glow of the Blue Screen of Death from a sound booth or two, but I ran into many, many more people over the years who insisted on using tape, cassettes, or (eventually) CD's for effects playback because they didn't trust that a computer wouldn't break on them.

Well, I think most people have moved past that. Computers have become the default for sound playback as well as video playback. Crashes still occur but they tend to get ironed out in Tech; the show-stopper failures I have personally observed have been due to the batteries running dry on a production laptop.

I've seen a slow movement towards acceptance of iPad links to sound boards. I remember when the Yamaha board was iffy at best, but now that Wifi link is considered reliable enough for use in the time-critical environment of Tech and Sound Check on a professional-level show.

(On the other side of the technology-adoption bell curve, I've personally run numerous productions with laptop and software tools due to not having the budget for rack-mount equipment. Sub-mixed drums for one musical on the laptop, and that worked well enough that on a later production I flew all of the sound reinforcement through it, using freeware plug-ins within Reaper to achieve a graphic equalizer for the house speakers. My last show, I was running sound effects, projections, and even running lights from the one laptop. And I've seen a lot of this sort of exercise in similar improvised micro-budget shows.)

After all, some of the oldest documented theater technology consists of methods borrowed and adapted from Elizabethan-era sailors: rope rigging, counterweights, whistle codes. It's a natural path from there to modern techs using cell phones to communicate with backstage (instead of trying to come up with the money for the old-school hardwired headset systems). And a lot of people are using DAWs for sound manipulation or MIDI hosts for live keyboards, or (again on the other side of the technological bell curve) personal music players or similar software like iTunes for backing track and effects playback.

In fact, at some levels of theater it is considered ordinary and natural to plug an MP3 player into the sound board and hit one of those little fiddly buttons at the exact instant called for in the script. (Putting the sounds on the hard disk of a computer with sound playback software that was specifically written for performance use is the more reliable alternative now!)



Which brings us to a segue. Effects -- or more broadly, all the possibilities of both the established ideas of theatrical lighting properties and scenery, and the less widely accepted ideas of interactive technology -- are not exactly called for in the script. One even suspects that in the golden age of musicals and old chestnut standards, the Annie and the You Can't Take it With You, the writer brought the same awareness of what could be practically done with multiple settings in scenery and quick-changes in costumes to what could be achieved in the way of on-stage telephone calls and so forth.

Which is to say, most shows don't require really clever technology to get a sound or lighting effect to happen in the right spot. Between the way the script makes the timing of the effect non-critical, and the way the presentational aspect of the box set with the missing fourth wall et al makes playing a telephone sound out of a speaker appropriate and sufficient (or at least sufficiently appropriate), there isn't a need for something more elaborate.

At least, not there. When you get to more modern works, and better yet, to those experimental works that straddle worlds of dance, improvisation, performance art, et al, there are plenty of spaces to explore more complex interactions than "play back a sound effect at a specific moment in the dialogue."

Of course, these also tend to get worked out in development. Sometimes I have had a performer or a puppeteer or a musician or whatever come up to me and ask, "I'd like to have this happen when I do this; is it possible?" But mostly, a choreographer or a props person or someone sees something interesting that's already out there in the world, and the performance is designed around how that existing thing functions; designed to accommodate the existing advantages and the existing flaws.

(Specific case in point; in a production of Wizard of Oz we used light-up globes extensively. Each time these were brought out, there was specific choreography to allow each performer to turn their back to the audience and page through all the available colors in these off-the-shelf devices until they got to the desired effect for that moment in the show.)

So it seems to me that the process of using the kind of effect modern electronics makes possible starts from the Designer. Instead of problem-solving something that the rest of the design team would like to happen, you become the one to suggest an effect. Which means you are effectively working out of what already exists or is known to be possible, rather than working from something that wants to happen on stage and developing a solution to it.



For that reason among others, I'm not that interested in the experimental end -- in the kind of process that puts accelerometer-controlled LED strings on a dancer, or whatever. Because as I pointed out above, this is more a process of adoption than design. You more or less start with available consumer products, and you develop a way to use them over rehearsal.

My interest, from pretty much when I started using electronics in theater (where the height of my technological output was sticking leaf switches around a motor-driven cam to create a hardware string-light chaser), has been, well, what I'd call "sweetening."

Here's a conceptual framework from another industry. A movie is more-or-less filmed MOS. In the older days this was a technical necessity, these days it is an artistic choice. Dialog may be taken from the shoot, but the totality of the sound environment in the finished product is a created thing. This is for focus and nuance; the noise is stripped away, and the only sounds still there are those that tell the story -- and they are pushed, too, for artistic nuance and emotional effect.

And this process has already begun in theater. We can't pan, zoom, or cut; we don't have that control over what the play-goer watches. But we do control lighting and we build and even paint scenery to place the eyes and the attention where we want it and to make essential story-telling and emotional points.

And artificial sound has entered as well. Even in smaller spaces, even in opera, subtle reinforcement and other acoustic shaping is already taking place. In larger houses, and in musicals especially, dialog is already passed through processing to make it larger than life, in the same way Hollywood takes the best the boom mics and hidden lapel mics can carry back from the set, marries it with ADR done in the studio, and presents the final honed and processed mix to the audience.

At the very simplest level, I think we can now have the sound issue naturally from an on-stage walkie-talkie or phonograph or bugle (or appear to), and we can have the light from a television or cell phone (or the lights of other items of technology) doing what seems natural for them to do. The history of technical theater is full of examples of sticking colored lights in empty TV cabinets and sticking speakers under chairs and otherwise producing these illusions. Well, we can do them better now -- even if that means just pushing actual video out through a length of VGA.

But at the more complex level, I think we can have a gunshot sound "right." I think we can have a sword fight with the exciting (and thoroughly fake) sword sounds of a movie. And more subtly, I think we can treat voices and footsteps so the actors sound like they are walking a marble hall instead of wooden platforming.




But many of these require wireless control. Props move. Actors move even more. Cheap wireless is one of the necessary parts for bringing about this sort of realized, naturalistic (or hyper-naturalistic) stage environment. And it may be that Wifi and the IOT have reached a point of maturity where they can be trusted in a production context. And not just on experimental theater pieces with fourteen people in the audience, but in staid professional theaters where your equipment breakdown is seen not mostly by your own circle of friends, but by hundreds of people paying thirty-five bucks a seat.






Sunday, October 25, 2015

I'm not even sure I like programming

Of course, what I'm doing is somewhere between "scripting" and "magic incantations" -- I'm using beginner-friendly tools and I'm copy-pasting lots of code blocks I barely (or don't at all) understand. What little I do understand, I am applying with equal measures of kludge and brute force and ignorance. A Bubble Sort would be an improvement on most of my algorithms!

Anyhow.

A friend just called wanting to see if I had a suggestion to spit out a MIDI event from a detected physical event on stage (the pedal of an otherwise gutted prop piano being depressed). Yes, I do. I've built the circuit in the past. I keep meaning to put it in a nice box (and even make it available in kit form for others) but I keep getting distracted by Feeping Creaturism, Creeping Elegance, and the uncomfortable realization that 99 times out of 100 a technological solution (even one that is available out of the box and has already been well-tested in practice) will be passed over in preference for a Stage Manager squinting at the stage and calling out "Sound Cue Go!" on headset to someone else who presses the play button.

Such is theater. We're more conservative than aerospace. If it works, we don't like to mess with it.

So anyhow, I have the guts of one of my old button-press-to-MIDI boxes on the desk and all it would take is about an hour to figure out which pins I chose to wire up or otherwise reconstruct the original code for it. It seems sturdy enough and it is well-tested -- been through at least three stage productions that I remember, being used to fire off intercom buzzer sounds for "Moonlight and Magnolias" and a baby's cry for "Into the Woods" and... some show I've already forgotten.


It isn't of course elegant. 

So I've spent the morning going back again through the perennial problem of getting no-driver MIDI to happen over USB. And, yes, the least elegant method is the most robust; buy a $40 MIDI adaptor from Guitar Center, hack it up, wire it to the Arduino's serial port and stick it in a project box with the Arduino. Once the lid is closed, no-one will know the difference.
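For what it's worth, the sketch side of such a box is only a dozen lines or so. Here's a minimal sketch of the idea -- not the code from my original box; the pin number and note choice are placeholders:

    // Minimal button-to-MIDI sketch -- not the original box's code.
    // Assumes: button closing to ground on pin 2, hacked MIDI adaptor's
    // data line on the hardware serial TX pin. Note number is arbitrary.
    const int BUTTON_PIN = 2;
    bool wasPressed = false;

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      Serial.begin(31250);                    // MIDI's native baud rate
    }

    void loop() {
      bool pressed = (digitalRead(BUTTON_PIN) == LOW);
      if (pressed != wasPressed) {
        delay(10);                            // crude debounce
        Serial.write(pressed ? 0x90 : 0x80);  // NoteOn / NoteOff, channel 1
        Serial.write(64);                     // note number
        Serial.write(pressed ? 127 : 0);      // velocity
        wasPressed = pressed;
      }
    }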

The newest solution on the block is that the Arduino Leonardo (and some others that come after it) use a chip that speaks USB natively and it is now trivial to turn one into MIDI or HID (aka emulate a keyboard or mouse.) This is completely class-compliant and requires no special drivers or client-side software; just plug it in and most modern computers will recognize it.
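On a Leonardo-class board the sketch side is about as simple, assuming the MIDIUSB library is installed (again, the pin and note number are placeholders of mine):

    #include <MIDIUSB.h>   // assumes the MIDIUSB library for native-USB boards

    const int BUTTON_PIN = 2;                 // placeholder trigger button, to ground

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == LOW) {
        // header 0x09 = NoteOn; then status byte, note 64, velocity 127
        midiEventPacket_t noteOn = {0x09, 0x90, 64, 127};
        MidiUSB.sendMIDI(noteOn);
        MidiUSB.flush();                      // push the packet out right away
        delay(250);                           // crude hold-off so one press isn't many notes
      }
    }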

I don't happen to have a Leo. I don't think RadioShack carries one and this is a project I want done this week -- Tuesday if possible.

So the most plausible option seems to be using an ATmega32u4 breakout board I just happen to have, and trying to get either the LUFA framework or the more complete-and-friendly Teensy core working on it. And although this might be approached at least at first through the Arduino IDE, it looks probable that once the USB port thinks it is a class-compliant MIDI device I'll have to resort to going through my AVR programmer -- and quite possibly having to delve down into make and avr-gcc and other things I've nearly forgotten.


And of course I'm right in the middle of learning Raspberry Pi now. So C or the Arduino macros or, worse, the avrdude toolchain aren't on the tip of my programming "tongue" right now. Instead I've got bits of Python and lots of Linux commands filling up my brain.

I'm getting a little more comfortable on the Linux command line. That's one of the major things I hope to accomplish playing with the Pi, anyhow. I mentioned before that the GUI (I'm mostly using X, when I use it) is like a coat of paint. A thin coat of paint. It isn't uncommon within X to find yourself presented with a command line in the middle of a drop-down menu or otherwise innocuous selection box, and it is often transparent what command line, err, commands you are issuing even when you make a radio-button selection in that GUI.

So I've got "sudo apt-get" and "cd /dev/pi" and "pkill x" and "nano /dev/config.txt" and so forth all clogging up my brain.

(Among the other amusements, command-key sequences become alt-key sequences on the current Raspi keyboard, except for within the nano editor, where I have to remember to use the "window" key instead).

I'm making progress. Tried out OSMC and OpenElec (different wrappers for Kodi, basically) but neither was friendly towards further hacking; no terminal, difficult to exit X, difficult to even SSH into (I managed to confuse my MacBook into thinking someone was attempting a man-in-the-middle attack and for a while it refused to even connect with the Pi anymore. Say, who is all this security for? You trying to protect me, the owner of the computer, from taking risks I think are justified? Did I or did I not pay for the thing?)

So back again (at least twice now by my count) to the Debian clone, Raspbian. This time I hand-installed the Adafruit patches and nano'd the additional lines and config files necessary (err...I said I wasn't going to try to explain Raspberry Pi processes in detail. nano is the default text editor, a utility far humbler and far less venerated than "ed.")

So now it doesn't force X to go to the PiTFT (Adafruit 2.8" full-color backlit LCD touch-screen). I can chose to throw the GUI, images, or movies to the device of my choice. Can play mkv, mp4, mp3, aiff...but not all with the same application. Each requires different phrasing to set up the options correctly, plus more options to get sound to go to the correct port to accompany a movie. So lots of typing out "sudo SDL_VIDEODRIVER=fbcon SDL_FBDEV=/dev/fb1 mplayer -vo sdl -ao alsa *.mp4" for the present.

I have a theory now why so many people asked at so many forums how to get X back to the HDMI port, and were all given the same answer that does not work by the gurus at Adafruit and elsewhere. Here's the theory; the gurus actually have HDMI monitors. The people with the problem -- like me -- are using an HDMI to DVI adaptor to drive a DVI monitor. So my bet is that when the Pi doesn't "see" the full HDMI it expects at the other end of the cable, it throws up an error and reverts to the last-used display port. Thus going back to the PiTFT instead of switching to the HDMI as requested.

And the gurus never realize this because their gear is too good; they never experience this problem. Whereas the poor noobies are left with no working answers and get frustrated and after a few more posts, leave.

Anyhow.

I have control now. And I figured out how to resize enough of the icons and otherwise reorganize X so I can sort of work with it on the tiny 2.8" display. So I have a couple of options for transportable, battery-powered control of the Pi. My aim is to make a Headless Pi, but the reality is I'll probably have to go in and tinker with stuff anyhow.

I also have some options for my eventual goal of playing mp3s off a USB stick. Unfortunately, the plug-and-play options -- aka various flavors of Kodi -- aren't easily compatible with the 2.8" display. So I'm looking now at either a lot of tinkering to get various third-party projects adapted to my particular hardware, or rolling my own. Either option is going to require a lot more time staring at the command line. And going deeper into programming for the Pi, too.


Saturday, May 17, 2014

History in Gear

I was always a tinkerer with music. Took a couple composition classes in college but I'm basically a self-taught piano-player who can't even sight-read. Like a lot of people of my generation and general interests I was attracted towards movie soundtracks, and the tonal colors and multitude of instrumental lines that are practically impossible for a solo performer to emulate. But I had neither the focus, the patience, nor the aptitude to study music seriously, or for that matter, to get into a band.

Got hired by a theater as Master Electrician and decided I needed to learn more about sound engineering. Mixing, in particular. And for no good reason, I started composing things again just so I'd have some multi-track material to learn mixing on.

It didn't actually help with the mixing. I learned that elsewhere. But it got me started on including original music in my theater sound designs.

More below the fold because bandwidth.


Tuesday, October 8, 2013

How to Ring a Prop Phone from QLab

Opening Remarks


There are new plays being written every day, but so many of the plays in our repertoire are older (if not actual old chestnuts.)  Between the aging subscriber base and the desire for familiar pleasures, you can be sure that in most theaters you work at, you will be doing "Charley's Aunt" at some point.

Which means that although we've moved on in our own lives to men without hats, jackets without vests, carriages without horses, lighting without gas, freezers without ice delivery, in fully half the repertoire older ways and older technology are part of the action.  There are few plays yet in which appear an iPad or a tweet -- but many in which a telephone has to ring.  And I mean ring -- not a ringtone, but the good old electric clapper that was part of our lives for almost eighty years.

Theater sound design is changing as well.  I am tempted to say it might be less realistic, less diegetic, but fuller and more complex.  But that might just be the companies I've tended to work for.  Result being, you are more likely today to play a sound effect off of digital media and through speakers, and less likely to make use of the storehouse of theater tradition with its crash boxes, starter pistols, thunder runs and, yes, phone ringers.

In any case, making a phone ring is an instructive problem.  One term used around the Arduino community is "Physical Computing."  Or, as Tom Igoe puts it, Making Things Talk.  And that is the problem of getting software in the virtual world to do something out here in meatspace.

(How bizarre is it that Chrome's built-in spellcheck flags "diegetic" but not "meatspace?")

And thus, here is how I got an actual period piece of telecommunications to go through its paces once again under software control.


Physical Layer


I got lucky.  I happen to own a 1U rackmount module that puts out Bell standard voltage and ring signal (90 VAC at 20 Hz, super-imposed on a 48v DC offset).  This has been the standard almost since Alexander Graham Bell spilled acid on his trousers (prompting him to call out, famously, "Watson, come here.")  The theater also owns a 90V 30 Hz machine (British standard).

There are some cheesy ways to do this.  The craziest and most dangerous is to half-rectify wall voltage.  You then get a sort of pulse DC at approximately 60 volts and 40 Hz.  The next step up in kludge is to use a step-down transformer to bring wall voltage close to 48 volts, then switch it on and off again through relays driven by an oscillator at 20 Hz.  This works better, although it lacks the DC offset.

Better yet are step-up schemes, because these can operate from the safety of batteries or the isolation of a wall-wart power supply.  But this is not the moment to go into those (perhaps later I'll build one of my own from scratch, and document that.)



Since I had the module, all I needed was a way to switch it.  Since it is an AC signal, I am running it through a relay.  Some reading suggests that the ring signal is probably under half a watt, and this puts it within the range of a PC-mount relay.  I was lucky enough to find one at the Rat Shack with a coil voltage of only 5V (12V is a lot more common for relays.)

Since even that coil is a bit too much heavy lifting for an Arduino, a power darlington -- the old standby, the TIP-120 -- is running it.  A resistor goes between the Arduino output and the darlington just for extra protection.  Also: when a relay or solenoid is switched off, the collapsing magnetic field produces a transient voltage of inverse polarity to what was applied.  A diode is soldered backwards across the coil of the relay for just this reason; the transient bleeds off through the diode instead of attacking the transistor.

This is quick-and-dirty electronics as well as temporary, so Arduino is fine, with an Arduino proto shield to hang the wires on.  (This is a bare board with Arduino matching headers on it; they have them at SparkFun, Adafruit, and the other usual suspects.  I particularly like the one from Modern Devices myself.)


(The button you see taped to the desk is a back-up, wired in parallel.)



Software Layer


The chain of software starts in QLab, with a combination of MIDI cues, GOTO and RESET cues to set the cadence of the ring.  (New picture, showing some of the MidiPipe window as well as the Processing ap's window.)



To detail a little; the Phone On and Phone Off cues send a MIDI note event.  On is a MIDI "NoteOn" event, and Off is, well, a NoteOff.  These are MIDI cues, which you need to unlock with the MIDI license for QLab (which has gotten quite a bit costlier since the Version 1 pictured here, sorry!)

Both cues are in a group cue so they fire together automatically.  The pre-delay set for the Phone Off cue means it waits for 1.5 seconds before it actually fires.  After an even longer pre-delay, the GOTO cue sends us back to the top of the sound group again.  The actual standard US cadence is 2 seconds on, 4 seconds off.  I picked a faster cadence -- and it works perfectly with the action.

The entire group cue was dropped on the RESET cue.  Which is inside a second group.  This group has a noteOff event in case the loop was in the middle of a ring when the RESET cue was hit, and a sound cue.  So it kills the GOTO, stops all the MIDI cues, fires off a second noteOff to make sure the phone stops ringing, and then plays the answering machine message.


The next step in the software chain is Processing, which receives the MIDI event sent from QLab and sends a serial message to the Arduino:





Above is the window for the Processing ap (compiled as a stand-alone).  As you can see, it gives no options for selecting the correct ports; those identities are hard-coded.  The display text exists only to confirm everything is working correctly.

(There is also MIDIPipe working here because Processing wouldn't recognize QLab as a viable MIDI source).

The key functions here are at the bottom; the NoteOn and NoteOff functions are from the themidibus library; they are called automatically if the appropriate event shows up at the designated MIDI port.  When each function is called, a single ASCII character is output to the selected serial device (the associated Arduino).

The rest of this is boiler-plate; list the available ports, pick a port, instantiate a MIDI object from the themidibus class.



The last stage to the software chain is the code loaded on the Arduino itself:

Even simpler code here (and a lot of it is leftover cruft from a different project and doesn't actually do anything here).

We're using the hardware serial port and the Arduino serial library.  It simply checks on every program loop if there is a character waiting on the serial port.  If I had saved to a string, I'd need to flush that string on detect, but in this case it just has whatever character is present (usually "null.")

When the right character shows, the relay and a blinkenlight are activated.  Since the outputs toggle, they remain in the state set until the loop is presented with the appropriate serial signal to turn them off again.
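For anyone following along, a minimal sketch along the lines just described looks about like this.  This is not the actual show code, just the shape of it: the pin numbers and baud rate are placeholders, and "p" and "o" are the characters the Processing side sends.

    // The shape of the Arduino end: watch the serial port, toggle the relay.
    // Not the actual show code -- pins and baud rate are placeholders.
    const int RELAY_PIN = 8;
    const int LED_PIN   = 13;            // the blinkenlight

    void setup() {
      pinMode(RELAY_PIN, OUTPUT);
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(9600);                // over the USB virtual serial port
    }

    void loop() {
      if (Serial.available() > 0) {
        char c = Serial.read();
        if (c == 'p') {                  // ring on
          digitalWrite(RELAY_PIN, HIGH);
          digitalWrite(LED_PIN, HIGH);
        } else if (c == 'o') {           // ring off
          digitalWrite(RELAY_PIN, LOW);
          digitalWrite(LED_PIN, LOW);
        }
      }
    }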

I added a button to perform on-board testing and over-ride the Processing end, but never got around to coding it.





Sunday, September 29, 2013

Snow Crash

Well, the crazy system I am running "Drowsy" on finally let me down.

It has been a show full of fun.  Opening night the main rag slipped the lower pulley and barely made it in and out, and the work lights were left on, spoiling the opening blackout.  One night the follow spot got locked out of all cues and they had to turn it on manually.  Which was good, because the next night the "robo-light" (some moving light, I don't know the make or model) didn't light, and they had to cover for it with a manual follow-spot.

I'm running all the sound through my laptop.  Well...everything but backstage and conductor's monitor.  Us old-school techs are scared of depending on a computer.  I've seen a BSOD in a booth.  I've had to restart a Mac a couple of times, too.

Here's the set-up; wired microphones along the proscenium line, four wireless belt-packs on actors.  All plugged into a Mackie 1602 mixing board.  Then the group outputs of the Mackie are run into a MOTU firewire interface and into the computer.

In the computer, Reaper takes the different buses (proscenium mics, wireless mics, off-stage chorus mic) and processes them with compression and graphic equalizer.  (Plus there's a little corrective EQ done in the MOTU itself with its on-board DSP.)  Then Reaper exports to the primary firewire outputs, which are plugged directly into the house mains.

The other outputs of the MOTU are being sent to effects speakers.  QLab speaks to those.  And as I mentioned in a previous post, QLab is also generating MIDI events which are translated into a serial signal via a Processing sketch, and sent to an Arduino that switches the practical ringing phone on and off.



Saturday I had no signal on the vocal bus.  Same night we lost the moving light.  And, of course, it was the night I had friends in the audience...!

There's no intermission.  There really isn't a spot in the show where it was safe to reset the systems or otherwise do anything more than the most conservative problem-solving.  So I routed all the mics to the one working bus and worked with that.  Which didn't sound anywhere near as good, but got me through the show.

Part of the problem was, I couldn't send any useful diagnostics to headset, and the average vocal material was too low to tickle the meters.  As it turned out, I still had the chorus mic bus, and that would have helped me zero in on the problem.  But the only times I had a hot enough signal to trace via metering, were times I didn't dare do anything that might kill the signal.

Well, following the show I could.  And it turned out...Reaper was fine.  The computer didn't crash.  Even the MOTU was fine.  The problem was on the group faders on the Mackie.  For some obscure Mackie reason, if the button to assign a group back to the main bus gets some corrosion in it, the signal out of the unique group output goes dead as well.

All it took to restore the sound was pressing the button a couple of times.  Today I sprayed the button, and the show went flawlessly.

And, yes; the contrast between not having the computer properly in the loop, and having the corrective EQ and dialed-in compression I'd set for the vocal mics....well, it was a huge difference in how transparent the reinforcement was.  So I am prepared to say this was a good way to do it.


Wednesday, September 18, 2013

ET Phone Home

In The Drowsy Chaperone the MAN IN THE CHAIR goes out of his way to explain to the audience, "Yes, records" before he puts one on the phonograph.  Oddly, though, he doesn't mention that he is using a dial telephone, and an answering machine with those little tape cassettes.

I am living in future shock once again.  When I started in theater sound phones were common and easy to find.  Not any more!   

Anyhow.

Today I got the QLab-controlled phone working.  But I haven't decided if I really want to trust that particular Heath Robinson for the run of a show.  The original reason for the contraption was that the Stage Manager was going to be running sound cues, and I didn't want her hammering at the button of a Tele-Q in the middle of a cue sequence.  Thus a system that will start the phone ringing as a loop with one push of the GO button, and stop the loop with a second push (which also cues up the answering machine sound).

I was also hoping to save having to run a wire for the 90 VAC ring signal; going with electronic control would mean I could send data backstage instead, through the existing snake.

But things change in tech.  The Stage Manager is no longer running sound cues, and whilst cleaning up the old wiring I threw a chunk of recycled garbage cable up anyhow.  So I already have a button and I've been using it in rehearsal and, really, it works fine as is.  The only reason to continue with the electronics is to prove I could do it.



Start in QLab.  With the MIDI license, QLab will create and send a MIDI event.  I picked, arbitrarily, note #64, with noteOn for a ring and noteOff for silence.  A GOTO cue (another one of the special cues unlocked with a QLab license) sends the pointer back to the first cue again, and pre-delay on the various cues sets the ring cadence.

(The Bell standard is 2 seconds on, 4 seconds off, but I'm using a faster and more theatrical 1.5/3).

It took a bit of experimenting, but the way to end the loop turned out to be putting them all in a group cue, ticking "go to next cue" in the group cue settings, checking the "continue" box in each of the MIDI cues and the GOTO cue.  Then, I dropped the entire group cue on a RESET cue, which turns off the GOTO and exits the loop.  Which auto-follows (the "continue" check-box ticked) to a second noteOff MIDI cue; this way, no matter where I am in the loop, hitting the "Phone STOP" cue will turn off the bell.

Had things gone according to plan, the Stage Manager would have been using this pair of cues to control a sound effect while I finished wiring the actual phone up.



Anyhow.  QLab does something non-standard in MIDI and it doesn't always show up as a MIDI source to other applications.  I'd had this problem before.  So instead it is patched into the handy freeware MIDIPipe.  This creates a new virtual MIDI source called "MIDIPipe Output 1."



The next application in the string is a clickable (Java virtual machine) application written in Processing.  Using the MIDIBus library, it detects the MIDI note events sent from QLab.  Then it writes a "p" or an "o" to a selected serial port, depending on whether it saw a noteOn or noteOff.

It is pretty much a garbage ap at the moment.  I hacked it up from the Wiz software, stripping out the XBee signaling and pasting in basic MIDI functionality from a MIDI Bus example program.  It has at present no way to select ports other than re-compiling.  Heck, it doesn't even NAME the ports it is using.



The older generation of Arduinos use an FTDI chip to show up as a virtual serial port over USB.  This is what I'm using here.  There is an Arduino on the other end of a USB cable, and when it sees a "p" or "o" at the serial port it sets the output level of two pins.

One pin holds an LED and is there as a blinkenlight.  The other is running a 5v relay via the venerable TIP120 power darlington.  The only other components are a resistor on the input of the TIP120, and a diode backwards across the coil of the relay (to protect the rest of the circuitry from the field-collapse-induced surge.)

And, finally, the relay closes the connection to the steady 90 VAC at 20 Hz from an industrial-strength piece of rack mount gear I happen to have lying around.  If I had to generate the entire ring signal from scratch this would all look rather different.  As it is, all I have to do is switch an existing voltage.

And at that point the actual dial phone on the stage rings, just as if it was forty years ago and still in ordinary use.



An annoyance I'd expected but hoped to avoid; QLab breaks the "Reset" cue every time the program is closed and re-opened.  Next performance, all the "Reset"s were still listed and looked normal, but none of them worked.  I had to delete the old ones and write new ones.  For every single phone cue.  And it looks like I have to do this each and every performance.  That is going to get old fast.

Tuesday, August 6, 2013

Non-Linear Playback; The Evolving Story

Seemingly all the sound effect designs I do these days have some non-linear elements.  But at the same time, the majority of the show is still effect by effect; often called and/or executed by the Stage Manager, and presented in one long list.



"The Wiz" (which is running now) is one of these standards.  My Sound Assistant is on headset, taking each cue from the Stage Manager.  I've built in dips (fade cues and restore cues) to take down the sound effects during songs, and these are also being called as lettered cues by the Stage Manager.

The one exception on this show is also the first time I've had someone other than myself executing an improvised effect.  To wit; the wind effect for the twister is meant to be artificial, to be more like a performance on a synthesizer patch.  Which it is; my Sound Assistant has a tiny Ozone keyboard by him, and he performs the wind effect each night.



This last weekend I opened and closed "Starmites."  Four-performance run.  In that case, once again, the majority of the cues were presented in linear order; they were programmed into QLab, run off a laptop, and the Stage Manager herself was pressing the "Go" button.

I had a second laptop at the FOH mixing position.  There, I had copies of several of the sounds, several background loops, a foley-type effect and an electric guitar patch.

Taking these in order; the tech was abbreviated and the cast sometimes uncertain of their actions.  So there were a couple of sound effects I had duplicated so if the kids jumped a scene, or we'd messed up and forgotten to put in a sound, I could fire it from my keyboard instead.  The nature of these sounds (mostly magical attacks) is such that it wouldn't be a big problem if we accidentally both played the sound at the same time.

Which in fact did happen -- but as always I had thrown master faders for the sound effects on to the top layer of the mixing desk, so it was an easy matter to fade out the duplicate manually.

The background ambiance cues were on my computer because of the harried tech.  Even though I had the sounds built, it was simply too much to add them to the Stage Manager's book during what were already difficult technical moments.  These were low-level looping background effects anyhow so it was fine to just add them in to taste from the keyboard with one hand whilst I mixed the show with the other.

The guitar was there because the MacGuffin of the show, "The Cruelty," is basically an evil electric guitar.  We were hoping the band would do some guitar stuff as the prop was revealed, but that never quite happened...so I did some random fumbling live on a nice crunchy patch with a lot of echo (and a ton of bending).

The foley....first time I remember doing something like this was for "Honk!" where there was a whole bit about a man with squeaky rubber boots.  So I threw boot squeaks on to two keys of my sampler and followed the actor as he walked around.  

This was a similar gag.



So far, however, the only actor-triggered effects I have had were the Duck's "Universal Remote Control" for "Click Clack Moo," and a pistol used in a production of "Tis a Pity..."   The latter used a 424 kHz radio link to trigger a QLab sound cue.  The former was using a quick Processing sketch to interpret an XBee signal and play back a sound.

Oh, and an intercom buzzer for "Moonlight and Magnolias."  That was a strange compound cue; the practical switch on the prop intercom was detected by an Arduino that spit a MIDI message all the way upstairs to where Sound Cue was running on a PC.  Usually, the secretary was on "the other end" which was an actress backstage on a microphone that was fed into the same speaker.  But at one point she "connects" several other callers, which were pre-recorded voice-over sessions -- and these were played back over Sound Cue as Stage Manager "called" cues.

I've since worked on that signal chain, so I now have a custom Processing ap that reacts to various inputs from battery-powered XBee radios and spits out a MIDI signal that can be picked up by QLab or by a sampler or Max patch.  I've used it with a Staples "Easy" Button modified for XBee wireless link, and with a basic accelerometer setup on a wrist band...but neither has yet been in a show.

Next on my programming chores is to add the ability to select a sound file for playback from within the Processing ap, to allow skipping the MIDI step entirely for simple shows.  But at the moment, that is my state-of-the-art in non-linear sound.


Thursday, June 20, 2013

Wizardry

So now it is official.

I have several projects I've promised to work on, with a deadline already looming.

1.  Put LEDs on a hat.

2.  Put a strip light on voice control.

3.  Create a gestural interface.

4.  Keep working on my POV wand if I get time.


The first is a no-brainer.  My "Cree" 3W RGB's arrived in the mail, as did several meters of 6mm light pipe.  I can do this with some 1 watt resistors (I've learned THAT lesson!) but I'm holding out for the constant-current drivers I also put on mail order.  I don't have any strips or really many discrete LEDs, though.  I was pricing some nice 50ma greens in Piranha-style cases at Digikey.  But I think there's enough time to wait and see what the hat looks like before I have to make another shopping trip.

This might just be hard-wired.  The most complicated part of it might just be the power button.  But...I am fully prepared to go RGB with it, and do some chases or something.

At the moment I am lacking most of the "DuckNode" system I've been trying to design over the past few years.  Which is to say; I don't currently have a plug-and-play solution to send wireless control out to the stage and into a costume for an effect like this.

(The local Orchard Supply Hardware just got some vintage style oil lamps in stock.  I'm pretty strongly tempted to stick a Cree, an ATtiny implementation of my custom "blink" code, and an XBee for hand-rolled remote control into one of them.  Except the oil tank is small enough I'd probably want to use a LiPo for power, with a USB jack and a charging circuit.  And that makes this an $80 lamp....)



2.  I picked up an 8-channel EQ display chip, which should do just fine for a voice control.  Except it spits out analog data serially, which means you pretty much want a micro to control it, and I'm starting to run out of free AVRs (Arduino or not).  I may just have to solder up another protoboard with an ATtiny on it, and build another pretend-it-is-an-Arduino clone, because I just don't have time to think my way through straight C and the AVR toolchain.
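The reading side, once a micro is in the loop, is only a handful of lines.  Here's a sketch assuming a chip along the lines of the MSGEQ7 (seven bands multiplexed onto one analog pin by a reset/strobe sequence) -- the exact part and the pin assignments are guesses, not necessarily what I bought:

    // Sketch assuming an MSGEQ7-style spectrum chip: pulse reset, then strobe
    // each band's level out on a single analog pin. Pins are placeholders.
    const int RESET_PIN  = 4;
    const int STROBE_PIN = 5;
    const int ANALOG_PIN = A0;
    int level[7];

    void setup() {
      pinMode(RESET_PIN, OUTPUT);
      pinMode(STROBE_PIN, OUTPUT);
      Serial.begin(9600);
    }

    void loop() {
      digitalWrite(RESET_PIN, HIGH);     // start the band sequence over
      digitalWrite(RESET_PIN, LOW);
      for (int band = 0; band < 7; band++) {
        digitalWrite(STROBE_PIN, LOW);   // select the next band
        delayMicroseconds(40);           // let the output settle
        level[band] = analogRead(ANALOG_PIN);
        digitalWrite(STROBE_PIN, HIGH);
        delayMicroseconds(40);
      }
      // Crude voice control: average the mid bands and use that as the level.
      int voice = (level[2] + level[3]) / 2;
      Serial.println(voice);             // or map it onto the light output
      delay(30);
    }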

The tough part is controlling anything more than a strip of LEDs.  I've been reading up on triac dimming, and I'm really not wanting to do hasty work around raw AC.  So the best options at this point for controlling lights at line voltage are;

     a.  Power Tail.  This is a plug-and-play relay box for 15 amps.  At 10ms cycle time it isn't going to do PWM, but I'm willing to bet that timing a pure off and on is going to be close enough for a Dalek Eyestalk effect.

     b.  Cheap Dimmers.  American DJ type baby dimmer packs are as cheap as ninety bucks on Amazon.  Only four channels at a whopping 5 amps each, but it is a solid metal case and more-or-less UL listed.  Also the one I'm looking at will take a 0-10V analog control signal, which is easy enough to fake up with an external power supply and some TIP120's.

     c.  Learn to talk DMX.   It has been done on the Arduino, and there are even libraries (a minimal sketch follows below).  It isn't trivial -- in several ways it is messier than MIDI.  Of course, some packs you can talk to with MIDI as well, but spitting a LOT of MSC (MIDI Show Control) at an ETC light board is a good way to crash the board and stop the show.  So, yeah...I'll pass on the odd MSC for a single effect, but trying to do a PWM-type effect that way would be even more stupid than wiring up my own TRIAC-based danger shield.
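For the DMX option, the library route is at least short.  A sketch of the usual pattern, assuming the DmxSimple library with an RS-485 driver chip hanging off pin 3 and a dimmer pack addressed at channel 1 (all of which are assumptions on my part):

    #include <DmxSimple.h>   // assumes the DmxSimple library and an RS-485 driver

    void setup() {
      DmxSimple.usePin(3);             // pin feeding the RS-485 transmitter
      DmxSimple.maxChannel(4);         // a four-channel baby dimmer pack
    }

    void loop() {
      // Slow throb on channel 1. The library keeps DMX frames going out in
      // the background; the sketch only has to update channel values.
      for (int level = 0; level <= 255; level++) {
        DmxSimple.write(1, level);
        delay(8);
      }
      for (int level = 255; level >= 0; level--) {
        DmxSimple.write(1, level);
        delay(8);
      }
    }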


3.  And this one gets interesting.

I've already shown I can detect a punching motion (which is a possible "Tim the Enchanter" magic gesture.)  Not sure exactly how best to detect a snap of the fingers, if that's how the actor wants to go. 

Hold on.

Ouch ouch ouch.

Yes...I can detect the jarring motion of a snap of the fingers with an accelerometer mounted in a wrist band.  Or attached to a wrist with masking tape.  Ouch ouch ouch.
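The detection logic itself is nothing fancy.  A minimal sketch, assuming an analog-output accelerometer axis on A0; the threshold and lockout time are numbers to tune, not measured values:

    // Crude "snap" detector: watch for a sudden jump in one accelerometer
    // axis. Pin, threshold, and lockout time are placeholders to tune.
    const int ACCEL_PIN  = A0;
    const int JERK_LIMIT = 60;           // how big a jump counts as a snap
    int lastReading = 0;
    unsigned long lastTrigger = 0;

    void setup() {
      Serial.begin(9600);
      lastReading = analogRead(ACCEL_PIN);
    }

    void loop() {
      int reading = analogRead(ACCEL_PIN);
      int jerk = abs(reading - lastReading);
      lastReading = reading;
      if (jerk > JERK_LIMIT && millis() - lastTrigger > 500) {
        lastTrigger = millis();          // half-second lockout: one snap, one event
        Serial.println("snap");          // here, tell the XBee to send the trigger
      }
      delay(5);
    }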

The tougher part would be discriminating -- better, specifying -- which of several DIFFERENT effects is to be triggered via pointing at them whilst gesturing.

The simplest solution is a compass.  Or, rather, a tilt-compensated triple-axis magnetometer.  Which aren't that expensive, but only talk I2C.  Which means I can't talk to those naked, either.  I'd need an AVR to read off the magnetometer chip, and then tell the XBee to transmit the signal to the interpreter.

Which in any case brings me squarely back to the DuckNode, and the Processing host software I have only begun to write.  (Current status?  I can select available ports on the fly, and recognize individual pin status).

Fortunately, #3 is purely a stretch goal.  First I need to get that voice-activated light working on the breadboard.  Maybe I'll even dare some 120v relays -- just for a proof of concept, mind you.




Oh, and I realized something about the POV circuit.  The persistence of vision effect only takes place over 250 milliseconds.  But if a lighting effect is much brighter than the ambient light, you will also get afterimage.  Which can linger quite a bit longer.

In daylight, my POV circuit isn't particularly good.  In a darkened room, I "see" clearly a swath that crosses most of my body. 

So it is still plausible.  For the next iteration I'm moving the LEDs closer together, finding some way to diffuse them a little (they are too much brighter on-axis right now), and increasing their number.  I'm also finding that arbitrary patterns read better than words.  But at some point I really need to write a Processing routine that will turn a bitmapped image into a binary string, because I'm getting pretty tired of counting rows by hand as I manually type in 1's and 0's.
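For the curious, the hand-typed version looks something like this on the wand side -- a sketch assuming five LEDs on pins 2 through 6; the pattern bytes are exactly the part that utility would generate for me:

    // POV wand: each byte is one column of five LEDs, typed in by hand --
    // the chore the bitmap-to-binary-string utility is meant to replace.
    // Pins and column timing are placeholders.
    const int LED_PINS[5] = {2, 3, 4, 5, 6};

    // A simple chevron; bit 0 = bottom LED, bit 4 = top LED.
    const byte PATTERN[] = {0b00001, 0b00010, 0b00100, 0b01000, 0b10000,
                            0b01000, 0b00100, 0b00010, 0b00001};
    const int COLUMNS = sizeof(PATTERN);

    void setup() {
      for (int i = 0; i < 5; i++) pinMode(LED_PINS[i], OUTPUT);
    }

    void loop() {
      for (int col = 0; col < COLUMNS; col++) {
        for (int i = 0; i < 5; i++) {
          digitalWrite(LED_PINS[i], bitRead(PATTERN[col], i));
        }
        delayMicroseconds(800);          // column time; depends on swing speed
      }
    }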

Tuesday, June 18, 2013

De-Bouncing

So you have this effect.

In a recent effects meeting, we were talking about a lighting effect attached to a costume.  If, we said, you put electrical contacts in the costume gloves, you could light the light by holding your fingers together.

That's clever.  It works well enough.  But make it two lights, and it starts to get complicated.  That's two gloves, two sets of contacts, and if the actor forgets themselves and presses their fingertips together...you've cross-circuited.  Which may be the all-purpose problem solver on Star Trek The Next Generation, but works out badly in the real world.

This is yet another case of being unable to uncouple in your mind the physical trigger (call it a button, or call it a sensor) and the resulting effect.  If all you are using is basic circuits -- battery, light, switch -- then this is indeed how it works.  But...an awful lot of theatrical effects -- including lighting effects built into costumes -- are already complex effects.

Which is to say; they already include intelligence.  And if your costume or prop already has an embedded micro-processor chasing a string of LEDs or counting down on a 7-segment display, it is just plain silly not to break out another pin for control input.

And, actually, circuits are cheap.  A switch needs to be mounted to something solid, and needs to be soldered in.  Solder in a stand-alone capacitance-sensor breakout board instead, and you have a latching circuit for your light.  It's the same number of wires to solder, it is no more expensive than a good switch, AND it doesn't need a good surface to be mounted on.  And it has no moving parts to break or snag.
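In code, the latch is a few lines.  A sketch assuming a breakout that simply holds its output pin high while touched (something in the AT42QT1010 family; pin numbers are placeholders):

    // Latching light from a touch-sensor breakout whose output sits HIGH
    // while touched (AT42QT1010-style). Pin numbers are placeholders.
    const int TOUCH_PIN = 2;
    const int LIGHT_PIN = 9;
    bool lightOn = false;
    bool wasTouched = false;

    void setup() {
      pinMode(TOUCH_PIN, INPUT);
      pinMode(LIGHT_PIN, OUTPUT);
    }

    void loop() {
      bool touched = (digitalRead(TOUCH_PIN) == HIGH);
      if (touched && !wasTouched) {      // rising edge: a fresh touch
        lightOn = !lightOn;              // toggle -- the "latch"
        digitalWrite(LIGHT_PIN, lightOn ? HIGH : LOW);
        delay(50);                       // settle; the sensor handles the real de-bouncing
      }
      wasTouched = touched;
    }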

More discussion, and more rant, below the fold.  No circuit diagrams this time around, though.



Tuesday, February 19, 2013

Back to Processing

I'm moving away from hardware MIDI.

9/10 of the time, the MIDI signals generated by my various gadgets are going into a laptop anyhow.  And as I gain confidence in programming, I think it is easier to write client-side software than to try to do everything on the embedded processor of a black box.

This seems like it is going to be the form of my DuckNodes.  On the Node end, LiPo battery and charge circuit, monitor/feedback LEDs, XBee.  On the master end, Xbee node with FTDI adaptor plugged into a USB port -- and the rest is software.

Sure, I could dress up the receiver with some indicator blinkenlights.  I'm also tempted to make it in the form of a Diesel-Punk USB stick.  The first show I did using software at the receiver end, I stuck the receiver XBee inside a rubber ducky -- a yellow plastic bathtub duck -- to protect it.

(That's the actual hardware in the picture there; one Xbee inside the television remote, one inside the duck, with the latter connected by USB cable to the Processing sketch visible on the desktop.)



This is a bad time for me to be buying parts.  So this will have to do to construct the prototype DuckNode; a Modern Devices accelerometer, XBee (not XBee Pro), and AAA batteries.  I need to read up on the chip, though, to find out the voltage tolerance and so forth.  But for the nonce I'm working on the software layer, and for that my XBee-equipped Easy Button is sufficient.



Sunday, February 17, 2013

Tools

I'm between shows.  No design deadlines, and no props with deadlines either, now that the Maverick is shipped.  So this might be a good time to build some tools.


I've been having great fun with my wireless EasyButton.  Used it to run the lights on one (simple) show.  Used it a couple of times to check sound cues as I walked the house.  Has only been in one show so far -- or, rather, the guts were, transplanted into a television remote and used by the actor/narrator to "control" the action of the play.  (For that show, the XBee triggered a sound effect in a small Processing ap running on a laptop, which plugged into the sound board).


I've had even more fun with my simple MIDI button (this spits out MIDI over the standard 5-pin DIN connector when a switch contact is closed).  It has been in an orchestra pit twice to allow the percussionist to trigger a sound effect.  Another time it was wired into an intercom box to play a buzzer sound.  The flexibility of this is that the trigger is just a button or sensor; the resulting command is interpreted, usually by QLab, meaning we have total control over what kind of sound, where it is placed, etc.


I've been trying to dream up a new device along these same lines, that is a plug-and-play solution for any number of sound/lighting/effects coordination problems.  Something I could have in my bag, that would hook up quickly with minimum wiring to a variety of inputs/sensors; something we can use to explore the kind of dynamic cueing new technologies have made possible.


Monday, February 4, 2013

Old School

We've had a bunch of different music directors.  It keeps things interesting, as each has their own idea about some of the ancillary tasks.  Several in the past brought in a group of friends to form the band and as a result delighted in jamming together for preshow, and even at intermission.  Lots of rock covers, jazz standards, and eclectic things like, for instance the Muppet Show theme.

Others either didn't realize it was allowed, didn't think it was appropriate, or didn't have a group of musicians with enough confidence in each other to assay that.  In fact, the last several shows in a row have been rather quiet for pre-show.

Many of our music directors have been keyboardists, and as such, usually brought their own.  When there was a second keyboard, they supplied that too -- or brought along another gigging keyboardist who owned several keyboards they liked and used.

For those that weren't, they usually made do with our upright piano.  The upright sounds good and we tune it before every show, but the main drawback is the height; it is hard for a music director to see over it.  For Sound of Music we rigged a video monitor -- the band was behind the set anyhow.  For the juniors shows (8-13 years) we usually have a minimal pit so we make do with upright piano, drums, usually bass (but for Peter Pan we had a wind player instead).



Carnival was a mixed bag; a teen show, meaning more ambitious than the juniors, but not quite the level of technical support of the main stage shows.  Our music director settled on two keyboards, drums and bass -- the latter two had played for us several times in the past.  We had an exceptionally brief tech -- three days to block, tech, and final dress two casts, and this was respecting school hours, too.  That meant there was really no time for experimentation, and barely time for discussion.

I know our music director could have come up with a couple of keyboards if need be.  But what got thrown into the hat was that they'd use the Privia from the rehearsal hall (a Casio digital keyboard) as the piano, and I'd supply the second keyboard.  What with schedule and so forth I ended up setting up the pit myself, selecting the base patches for keyboard II, and basically hoping they'd be able to use what I'd set up.

And, with surprisingly little adjustment, we did.

The Privia of course sounds ghastly.  And it doesn't even have output jacks.  You can connect to the mini-jack headphone out but it is fragile and it turns off the internal speakers meaning you have to add a monitor so the keyboard player can hear themselves.  I've mic'd the speakers on a past show or two.  It delivers a kind of C-80 piano sound, very noisy -- actually, it has a very cool burry tone to it when used with the electric piano patch, and I used that trick (a condenser mic an inch above the tiny built-in speaker) to advantage during Oliver!.

Fortunately, it is just expensive enough a keyboard to have MIDI jacks.


So I used double-stick tape to fasten my old Korg P3 Piano Module to the keyboard stand, ran a MIDI cable from the Casio to that, and ran the P3's output to the sound system.  The keyboard player kept the internal sounds of the Casio for her own monitoring, with complete control over her speaker volume, while the house heard the Korg.





Second Keyboard took us more fully into the retro past.  First off, the only controller keyboard I had to offer was my old Roland W30 workstation.  This is a keyboard/sampler/sequencer so old the operating system boots off a 3 1/2" floppy disk.  It also has a sticky A3, right in the middle of the keyboard.

For sounds, the better option would have been to lend out my aluminum PowerBook running Garritan Personal Orchestra via the supplied Aria Player.

But in keeping with the theme, I instead hung my Roland M-OC1 Orchestral Module on to the MIDI output of the W30.  (Of course, I could have set up a custom patch disk on the W30, but that would have taken time and probably not sounded as good.  Probably!)

The Orchestral Module is a rack-mount (meaning, a metal box 19" wide, plus flanges) dating from the '90s.  It is a break-out of one of the expansion cards (that is, a ROM card) for Roland's flagship keyboard, the JV80.

It dates from the heyday of what Roland called LA (Linear Arithmetic) synthesis.  What it basically means is that you start with real samples from orchestral instruments, process the central part of the tone, and combine it with pure electronically-generated waveforms until you have an unchanging tone that can be looped continuously for as long as you need.  Then you combine it back with a snippet of the attack (the noise transients at the start of most musical notes).

(In late LA synths there were four slots for what they called "partials," meaning you would actually play several samples simultaneously, and/or crossfade from one to the other either over time or as you changed registers.  It was, all in all, a rather flexible approach that led to fairly rich sounds.)

After that it is standard synth-era tricks: velocity- and key-dependent filters and volume envelopes, so the sound changes in nature as you go up and down the scale and as you play softer and louder, plus low-frequency oscillators modulating those filters and the oscillator pitch to apply a sort of artificial vibrato.  The modern philosophy is much more sample-oriented, and tends toward fixed-length samples with minimal looping: the actual transients, even the actual tails (as in, the sounds after a note is stopped).
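
If you want the flavor of that looping trick in code, here is a rough sketch -- not Roland's actual algorithm, just the general idea of crossfading the seam of a sustain loop so it can cycle forever, with the recorded attack bolted onto the front:

#include <cstddef>
#include <vector>

// Blend the last fadeLen samples before loopEnd toward the samples just
// before loopStart, so that jumping from loopEnd back to loopStart is smooth.
// (Assumes loopStart >= fadeLen; levels and pitch assumed already matched.)
void crossfadeLoopSeam(std::vector<float>& sustain,
                       std::size_t loopStart, std::size_t loopEnd,
                       std::size_t fadeLen)
{
    for (std::size_t i = 0; i < fadeLen; ++i) {
        float t = float(i + 1) / float(fadeLen);
        std::size_t dst = loopEnd - fadeLen + i;
        std::size_t src = loopStart - fadeLen + i;
        sustain[dst] = (1.0f - t) * sustain[dst] + t * sustain[src];
    }
}

// Playback: the recorded attack plays once, then [loopStart, loopEnd)
// cycles for as long as the note is held.
float sampleAt(const std::vector<float>& attack,
               const std::vector<float>& sustain,
               std::size_t loopStart, std::size_t loopEnd, std::size_t n)
{
    if (n < attack.size()) return attack[n];
    std::size_t m = n - attack.size();
    if (m < loopEnd) return sustain[m];
    return sustain[loopStart + (m - loopEnd) % (loopEnd - loopStart)];
}

All the filters, envelopes, and vibrato then get layered on top of that endlessly repeating tone.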



The rest of the band was a little more "modern."  The bass player brought in his own preamp/DI for his piezo pickup, which sounded lovely (not all DIs are equal, and piezo pickups make this even more obvious).  I had prepped only eight snake channels to the pit, and I was maxed out at the console with just six inputs -- any more and I'd have had to add a submixer.  So drums started with my generic overhead, a CAD GXL3000 set to omni, in this case moved as close to the xylophone as I could get it and still be out of the drummer's way.

I originally had a snare mic taped to the hi-hat stand with spike tape.  It sounded good enough for this particular show that I kept the angle after bringing in my stands: aiming at the side, low, from about four inches away and at about a 35° angle.  I now understand why people mic both top and bottom of the snare drum!  (I still don't understand top and bottom of the hat.)  For this show, the snare side gave most of the circus feel, I thought.



But.   There was no time to tech, the house sound of course changes utterly when you put bodies in it, and I got a really nasty cold right as we went into performance.  I had the shakes, could hardly hear, and was struggling through the show to hold back a hacking cough.  Needless to say I blew a few entrances.  Also needless to say, that kiboshed any chance of really developing the sound of the show and bringing it to where it could have been.  And for that I am disappointed.

Sunday, December 9, 2012

The Bassoon's Story

Sometimes an idea takes a long time to tease out to the point where it is complete enough to plan a project around.  Other times, the pieces fall into place quite rapidly.

I've been wanting to write some music again.  While working "Nutcracker" that desire got stronger; I was listening to Tchaikovsky's masterful employment of the symphony orchestra and felt inspired to get wrangling with those elements of tone color and combination again.  And then I listened to the Oakland Symphony and it pretty much decided me.

I've had a few vague projects in mind for a while.  One being a secular oratorio in echo of Haydn's great "The Creation"; one that explores instead the progression, excitement, and mystery of our scientific understanding of the universe and its origins.  Other ideas have been more inchoate.

When I opened my notebook tonight, the only two germs of an idea were either to write something, or several somethings, to two requirements picked out of a hat -- as in "write something in WALTZ time featuring PIANO," or "write something in KLEZMER style suggesting a WASHER-DRYER COMBO" -- or to write something featuring a solo instrument as a character in a story.

Then the pieces dropped into place.  A fugue or maybe a duet...use the "Harper" melody I wrote a long time ago...older adventurer talking to young dreamer...took an arrow to the knee...

Unfortunately, the idea as it exists currently is a bit, um, ambitious.  Not as much as a 3-hour oratorio, mind you!  But what I'm seeing now is a piece of "program music" written for (synthesized) symphony orchestra, telling a story in four parts and lasting seven minutes (if not longer).

1) Adventure!  Swashbuckling, epic battle stuff of the adventurer (the bassoon? tenor clarinet?) in his prime.
  //arrow to the knee// (orchestral percussion)
2) "Oh, tell me of the road" A young would-be adventurer (flute?  Oboe?) asks for stories.  Becomes a duet/counterpoint.
3) On the road to adventure/the flute's story
4) Crisis, rescue/re-introduction of bassoon, finale

(5 -- love duet and waltz?)

From experience, this is 2-3 weeks of writing.  I don't really have the time to delve into this now, but that might actually be an advantage; I could use the time developing the musical and harmonic ideas more completely instead of leaping into orchestral arrangement.

Monday, September 24, 2012

I Sold My Synths Today, Oh Boy...

(To the tune of "I read the news today...")

Actually, I didn't sell them.  I put them up on eBay, and I'm betting I'll have to relist at least once, and at the end of it I'll still own one or two of them.




I had a pretty good rack going at one point.  All Roland rack-mount synths, topped by a Roland sampler keyboard.  Plus one Korg piano module (the venerable P3).  It was a bit of a maze of cables, what with MIDI daisy-chains and all the audio connections to my Mackie mixer and a couple extra insert reverbs.  Even more mess when you add the Octopad and pedals.

It took a bit of time in OMS and other applications, too, getting all the patch names entered and the routing organized so everything showed up correctly.  And adjusting the internal reverb and chorus settings during a song remained a bit of a pain, since that was all System Exclusive Message stuff.
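
For anyone who never had the pleasure: one of those messages looks roughly like the following.  I'm using the well-documented GS "reverb level" parameter as the example; the address bytes are different on every box (my older racks had their own maps), which was exactly the pain of it.

#include <cstdint>
#include <vector>

// Build a Roland DT1 SysEx that sets the GS reverb level (0-127):
// F0 41 <device> 42 12 <address> <data> <checksum> F7
std::vector<uint8_t> gsReverbLevel(uint8_t level)
{
    const uint8_t addr[3] = {0x40, 0x01, 0x33};          // GS "reverb level" address
    uint8_t sum = addr[0] + addr[1] + addr[2] + level;   // Roland checksum covers
    uint8_t checksum = (128 - (sum % 128)) & 0x7F;       // the address + data bytes

    return {0xF0, 0x41, 0x10, 0x42, 0x12,   // SysEx, Roland ID, device ID (0x10 is the usual default), GS, DT1
            addr[0], addr[1], addr[2], level, checksum, 0xF7};
}

Now imagine hand-entering that into a sequencer's SysEx event list every time you wanted a touch more reverb on the strings.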

But, basically, once the rack was set up, I had most of the instruments I needed right at my fingertips.

This is the advantage, still, of hardware synths: you can audition patches in a fraction of the time.  No waiting for stuff to stream off the hard disk into RAM.

And there was a subtler artistic advantage as well.  Back in the rack days, a few megabytes of patch memory was considered big.  This meant, generally, fewer samples stretched further.  And shorter samples as well, looped instead of left to play out the full development of the tone.  So realism was lower.  But playability was higher.

So compare this to a real instrumentalist.  They can put lots of character into their tone and inflection.  This is what makes real instruments, well, real.  But at the same time a skilled player can modulate her tone to blend with the instruments around her (or to seat correctly in the band/orchestral mix).

Whereas a synth doesn't have this skill.  It plays the same kind of note all the time (unless you, the keyboardist/programmer/arranger, alter it).  Which can be done, certainly, but takes more time.  And often more CPU cycles.  So the simpler, pared-down sounds actually made you more productive.  The music was a bit less realistic but you could write it much faster -- as fast, really, as you could write for humans.

Oddly enough, my old rack was actually superior in polyphony as well.  A couple of reasons.  First is that my present computers aren't particularly fast.  Another is that patch size -- and the CPU cycles needed to handle the data -- has gone up by orders of magnitude.  A minor point is also that sequencing on MIDI hardware encourages patch changes; software synthesis is almost always done with a new instance for each new instrument, even if it will only play a single note.

I did push the polyphony on my rack, of course.  One always does.  I was doing orchestral stuff on it and I was doing it classically; instead of playing block chords into a "String Orchestra" patch, I'd play basically monophonic lines (and some double-stops) for each part: First Violins in two or more divisi, Second Violins, Violas, 'Cellos, Double Basses.  So if a section had two simultaneous notes, I'd have two different instances of a patch that sounded like less than the full section, and each would play monophonically.

And I was using fingered tremolo as well!

(One nice thing about patch switching is if I had some of the strings switch to pizz, I'd actually change patches on that MIDI channel and that dedicated sequencer track.  Which added to the realism; you couldn't just have a couple of bursts of pizz in the middle of an arco section, but instead, you had to think about what the real players were doing.  Same for, say, the trumpets picking up their mutes.)




A composer friend of mine settled on his own hardware rack (organized around a D50) several years back.  He's modified it a little over the years but the huge production advantage to him has been that he knows the sounds and can go to them without a lot of time wasted auditioning patches (or worse yet, editing them!)

My directions were always different.  I would hear the sound first in my head, and do what it took to approximate it with the tools I had.  I did go back to a small collection of favorite patches over and over, but I would also do things like edit a D-110 handclap sample to make it higher pitched.  Or route a flute to one of the special outputs so I could run it through a hardware reverb unit.

Going into software synthesis allowed me to move towards writing for smaller ensembles with much more control over the precise tone.  When I do software synth now, most of the outputs are routed through various effects processors as well as contouring EQ and the like.

But it is much slower.  And I miss those days of just being able to dial up my usual set of strings (I even saved this as a blank sequence) and start writing.




With the rack dismantled, many of the pieces I wrote in the past only exist now as audio files.  It is too much work to try to adapt what I could do with the patches in the rack to what I have available on my (somewhat thinner and very much more eclectic) virtual rack.  I have a few of the old ones in a folder on my Dropbox right now.

But, really, I took up arranging music largely to get used to mixing, and now I have plenty of opportunities to mix actual live musicians.  So I don't do anywhere near as much music, and most of that is for specific designs.


Sunday, May 20, 2012

Quick Hack

Maker Faire was this weekend and for once I didn't have to work a show.  I went, I wandered, I bought a couple of parts, but mostly I listened to music, pedaled a generator at the pedal-powered stage, and drank the only beer they sold (although I was dying for some decent German beer).

I was house tech over the weekend for a show that brings its own sound and light operators -- except for one early-morning showcase performance, when they handed me a CD and told me they had a couple of simple lighting cues.  Problem: the sound board was still at the FOH position (actually, the rear of the house, but the important part is that it is NOT near the light board).

So I pulled some toys out of the bag to be able to run the light board remotely.  There's one MacGyver episode where he has to build a telescope from random lenses and he has maybe five minutes to do what took Galileo years.  But as Mac puts it -- he already knew it could be done.  Same with the lighting console -- I'd SEEN another group use one of the MIDI functions to control the ETC "Expression" board.  I just hadn't done it myself.

Opened the manual.  The Expression speaks MSC (MIDI Show Control) and will take a "Go" in that format.  Ran a MIDI connection through the audio snake that was already there via my hand-made pair of MIDI-to-XLR adapter cables.  I remembered QLab had some kind of drop-down menu of MIDI control sequences already written for you -- I opened up my fully-registered copy of QLab 1.0 and looked around.

The board didn't seem to be seeing the commands, but then, after I'd tapped at the button a few times (trying different options), I noticed the lights had changed.  Turns out you have to deliberately press and hold the "send a message now" button within the QLab interface before it will spit out the full-length MSC.
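
For reference (mostly because I will forget), that full-length message is just a short universal SysEx -- something like this, with the device ID and cue number as placeholders:

#include <cstdint>

// A MIDI Show Control "GO" for a lighting console (illustrative values).
const uint8_t mscGo[] = {
    0xF0, 0x7F,   // SysEx start, universal real-time
    0x01,         // device ID the console is set to listen on
    0x02,         // sub-ID: MIDI Show Control
    0x01,         // command format: Lighting (General)
    0x01,         // command: GO
    '3', '2',     // optional cue number, sent as ASCII text ("cue 32")
    0xF7          // end of SysEx
};

Leave the cue number off and a GO simply means "take the next cue in the stack," which is all I needed here.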

Then, since it was already in my bag, I pulled out my Arduino-based MIDI message generator and the XBee-modified Staples Easy Button.  Now I could trigger the "Go" for the next lighting cue from basically anywhere in the theater.



It's a hack, and a bit of a chain; the inside of the Easy Button currently holds not just the XBee node but is wasting a perfectly good USB Explorer as a break-out board.  I haven't gotten around to putting in a new breakout board (and boost converter) as replacements.  When the Easy Button is pressed, the XBee node sends a radio message with the changed status of pin D0.  The receiving node toggles the output level of its own pin D0 in response, and the Arduino it is connected to detects this as a switch closure (aka +V is present on a pin that is otherwise pulled to ground.  Or is it the other way around?)  The Arduino, which at least is in a nice box, debounces the "switch/sensor" input, creates a Note On event, and sends it out the serial port at 31,250 baud.  A standard USB MIDI adapter picks this up, reformats it as a USB message, and that triggers a cue within QLab.  The QLab cue creates a new MIDI event -- an MSC "Go" command -- and shoots that OUT the same USB MIDI adapter.  That runs through two adapter cables and sixty feet of audio snake to reach the back of the lighting console.

But the point of the demonstration is, this worked.  And more importantly, it worked off the shelf.  I had the components already, and I didn't even have to go in via the USB connection and write new code for one or more of the components to do this particular task (which, had I done so, could have removed the laptop and its MIDI adapter from the chain of connections).

And this is what I've been striving at with all of my theatrical gadgets; to have things that I can pull out of the gig bag and hook up in a few minutes to solve a problem.  Or to do something new that hasn't been done before, thus enhancing the fun and the creative options of a show.


And today I needed to do a touch-up focus on stage and I had no assistant to run the board.  At least this theater has an RFU (Remote Focus Unit), but it was still faster to write a couple of cues, each containing just the system I needed to touch up, then hook up my Wireless Easy Button again.  And then I could move from light to light without having to run back to the board each time.




Wednesday, April 25, 2012

Not So Easy

Well, I wired my Easy Button with an XBee. After all that hassling over reading serial data in Processing, I realized it was more appropriate for this project to use the simpler I/O Line Passing Mode of the 802.15.4 "Series 1" XBee nodes.

The way I/O line passing works is that the status of any enabled digital or analog inputs on the transmitter is echoed by the corresponding digital or PWM pins on the receiver.
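
Setting it up is all AT-command configuration rather than code.  From memory -- so double-check against the Series 1 manual -- the relevant parameters on each end look something like this (the addresses are arbitrary):

Transmitter (inside the Easy Button):
  ATMY 1      (this node's 16-bit address)
  ATDL 2      (destination address: the receiver node)
  ATD0 3      (DIO0 configured as a digital input -- the button)
  ATIC 1      (transmit a sample whenever DIO0 changes state)
  ATWR        (save settings to flash)

Receiver (at the Arduino end):
  ATMY 2      (this node's address)
  ATD0 5      (DIO0 configured as a digital output, default high)
  ATIA 1      (only mirror I/O samples from node 1; FFFF accepts any node)
  ATWR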


At that point it was an easy matter to stick the programmed XBee node inside the Easy Button, and wire it to the original button.
I experimented a bit with power supplies as well. The XBee nodes are designed to run on 3.3V although the I/O pins are 5V compatible (which is handy for interfacing with micros with a 5V bus). They are radios, and thus sensitive to voltage fluctuations, but according to the datasheet they can handle a range of 2.8V to 3.4V -- this meant I could use the nominal 3.0V of the pair of AAA batteries the Easy Button was already provisioned for.

(Alternatives ranged from sticking a hard-to-get-to battery pack inside to installing a complete charge circuit for a lithium-polymer battery. However, LiPos run as high as 4.1V when fully charged, meaning to protect the XBee you'd need to add a voltage regulator. The regulator provided on the commonly available XBee breakouts from SparkFun and Adafruit drops about a volt, however, meaning the usable charge of the LiPo is severely limited. I have ordered a low-dropout regulator from Digi-Key, but I'm not sure I want to spend the time wiring up an LDO as well as a LiPo charge manager!)

(Yet another alternative is a 3.3V boost converter that steps up and regulates the output of a pair of AAA cells...all the way down to the output of one nearly-discharged cell (something like 0.8 volts) before it finally gives up. I have one of these on order as well!)

Oh, and I already had the other end:


My venerable Arduino-based MIDI event generator had just finished a show connected to the Easy Button -- at that time, the button was hard-wired via the terminal block you can see in the photograph. Since the terminal block presents both +V and ground, it was simple to power an XBee Explorer off it (which has an onboard 5V to 3.3V regulator), and when my pin D0 is pulled low on the transmitter (internal resistors pull the inputs high), it pulls one of the digital pins of the Arduino low via the XBee link.

And it all worked great on the bench, and I soldered up everything and closed up the button.

And then I did the plugs-out test and it failed.

Battery usage was fine...it went past twelve hours on a freshly charged pair. But the transmission was very much line-of-sight; at ten feet, it couldn't even transmit through my own body if I held it behind my back.

Compare that with the original proof-of-concept, when I got 120 feet of range through several set pieces.  The primary flaw appears to be the nature of the I/O Line Passing Mode itself. It appears to transmit only when the status of the digital input line changes (which makes for great battery life, at least). This means that not only is there no re-try, but the I/O pins can lose synchronization and have to be "reminded." Which is NOT good behavior for a theatrical effect.

However. I've gone back into the transmitter and set IR to FF -- that is, forcing the node to take a sample of its inputs (and transmit the results) every 255 milliseconds. And I'm reading up on sleep modes, which should accomplish the same thing with better power economy. This has increased the robustness of the transmission to the point where I can transmit reliably at 20 feet and through one wall. I probably need to juggle settings on the receiver as well, to make sure it resets the outputs within a quarter-second of any break in radio contact.
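
In AT-command terms, the change I made (and the receiver-side tweak I'm planning) looks roughly like this -- the units are from my reading of the Series 1 manual, so take them with a grain of salt:

Transmitter:
  ATIR FF     (sample and transmit the inputs every 0xFF = 255 milliseconds)

Receiver:
  ATT0 2      (DIO0 output timeout, in 100 ms units; the pin reverts to its
               default state if no fresh sample arrives within ~200 ms)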

I don't know yet if the batteries can handle this greater transmit volume, but as of this moment I've been operating for one hour and 7 minutes on the same pair of batteries.



I believe the more robust option is going to have to be full serial transmission, for which I need to add an AVR (and which means I finally have to learn how to use the UART on an ATtiny). Other options are going to regulated power (supplying the transmitter with the full designed 3.3 volts), using wire-antenna instead of chip-antenna XBee models, and upgrading to the XBee Pro (which has much greater output power and is spec'd at over twice the effective range).

In the meantime, I could try to use the current unit by locating the receiver close to the stage. But the show has already been rehearsed with a visual sound cue instead of an actor-operated one, and there's no real point in changing now (and even less reason to introduce the risk of new technology).



ADDENDUM:  The batteries made it past ten hours in the new mode.  On further reading, and tentatively confirmed in testing, when IR is set to anything other than the default of "0" the node does wait for an ACK and re-tries up to three times if it doesn't get one.  And I lowered my sample interval to about 100 milliseconds.  I also set a 100 millisecond automatic reset interval on the receiver's pins using the Tn command, so if it loses contact with the transmitter it won't "hang."

In the theater, it made it from the stage to the back of the house, but only when there was a clear line of sight.  Next step, then, is to swap out the existing node for a Pro node instead.



ADDENDUM TWO:  With an XBee Pro node installed, I was able to achieve a reliable trigger from stage to sound booth.  In fact, it triggered semi-reliably from all the way back in the dressing room.  My first test was actually from orchestra pit to lobby -- I had the parts spread out on the concessions counter while I was putting in the Pro node, and I draped a wireless mic over my laptop computer, then went down and listened to the computer through the band vocal monitor!

As I said in an earlier post (Theater Should Not be a Hackerspace), although it works, and the actor admired it very much, it will not be used in this show.  We have already rehearsed taking the sound cue manually.  But now the button is ready for any application down the road where I need to hand a button to an actor or band member or Stage Manager to trigger an event wirelessly.

Thursday, March 22, 2012

Somewhere West of Java

So the Processing language and IDE is an artist-friendly wrapper for Java. You know Java? That C-like language that is hardware agnostic, meaning the same program will run on multiple environments?

Except not always. The handling of MIDI events was broken in the Mac implementation of Java. You know the Mac, right? The artist's machine? Only Apple in all their wisdom decided to cripple MIDI when they transitioned to OS X. With a lot of third-party apps and hard work you could re-create the lost functionality of Opcode's OMS, and get back a system that actually listed the patch names of your outboard equipment again.

But then Apple goes and loses the MIDI functionality of Java around OS 10.4 or so. It doesn't come back until a later Java release -- one that won't run on anything less than OS 10.6 -- and even then not all the way; there are still a few broken functions. There is a third-party library, mmj, which in some implementations on OS 10.4 (and perhaps even OS 10.5) will work.

Of course even then you are left with the need to build a library for PROCESSING, to "Processing-ize" the MIDI functionality in the raw Java. Oh, to be an actual programmer instead of an artist-type, so I could just write it myself. Or, better yet, go straight to CoreAudio and handle it there, without involving Java.




The nice thing about a Java app is that as long as the Java virtual machine is properly installed on the user's computer, you can stick a custom icon on the Java app and it will look just like an ordinary double-clickable application.

And Processing makes the whole thing one step easier still: you work with a "sketch" in the friendly IDE, which turns your twenty lines of kindergarten code into something that will actually run...and then you can take the finished Java app and make it into a practically stand-alone application.

With libraries like proMidi and controlP5, all the heavy lifting is done for you, and even an idiot like me can make a functional serial-to-MIDI translator.

Or, I could, if I had an Intel Mac. Unfortunately I'm still stuck in the PPC world. I have five PPC laptops that see almost constant use, being used by me or loaned or sometimes even rented to work as sound effects playback, multi-track recorders, video playback and virtual instruments. One of them just came back from being a harp in a pit orchestra (played from a Yamaha Clavinova), one is currently playing back sound effects in a booth, and my main machine is being a drum sub-mixer (an improvised solution I really hope not to have to resort to again!)

Which means, in short, although I intend to do more with Processing and MIDI, I'm simply going to wait until my hardware is updated. Because even if I could get it to run on my hardware, what I want to be doing is making solutions that will run on the average -- aka modern -- machine. It doesn't help any in the long run to have a piece of software that requires OS 10.4 to run.



What I really could have done without is the two days of frustration over compiler errors it took to finally discover all this.

Oh, and I have an interim solution. Hairless MIDI (yes, that is what it is called) is a serial-to-MIDI converter that works on, well, my lone OS 10.5 machine at least.

With that, I was able to confirm transmission of a MIDI command from desktop to laptop over the airwaves, and trigger QLab remotely to play back a cue.