A rambling post here -- just finished a rather crazy tech and performance schedule (three casts in rotation, each with their own director). As I'm getting my breath back, thinking about that show experience, about my next designs, and also using the time to finally read the excellent Yamaha Sound Reinforcement Handbook from cover to cover, I'm thinking about non-linear artifacts and methods.
The first is in reference to that crazy recent show. A young cast (well, three casts) that frequently went off the rails -- forgetting where they were in a scene, jumping over entire songs, etc. And we had a very technical production with music, sound effects, flying scenery, multiple costume changes, lighting effects, fog, etc.
We were also drastically under-rehearsed (not enough time in one week to fully tech three casts) and we didn't even have decent show communications (well, we had comms, but as a solo show mixer and sound effects operator I couldn't afford to be on headset, and the pit usually isn't on headset either).
This meant that we had four different departments having to guess, from experience and dramatic instinct, whether to take or jump a cue -- and where the other departments (not to mention the actors on stage) were going to go. Say the actors appeared to be lost and the scene was stalling. The music director might choose to jump forward to the next song by starting to play the intro. And that meant scenery and lighting had to scramble to arrive at the same spot as well.
It made me want even more of a non-linear system than I had.
For that show there were, as is typical these days, two linear playback systems involved in sound. The first is the mixer. To get through bringing up the correct wireless microphones for the correct scenes I was using programmed snapshots on the console (Yamaha LS9 series). This was dependent on the actors wearing the same microphones from performance to performance (which they didn't always!) and actually managing to make it out for the scene in question (instead of getting stuck in the dressing room). It also put me in a position where it was harder to work out of sequence.
Whenever I did get the wrong mic, I had to first chase down which one it was. Paging through channel by channel on PFL (pre-fade listen) is slow -- it can take you the length of a song just to find the one mic that belongs to someone not actually in that scene.
You can cheat a bit with the monitors. Given good level indicators (which the LS9 has) you can spot a crackling mic from the meters alone. And it is often clear who is talking loudly in the dressing room and who is singing in a scene -- young actors rarely put as much vocal energy into the latter as they do into the former.
But that almost compounds the problem. If you are running on faders alone, you can make changes on the fly. But when you add memorized snapshots, you have to anticipate what the next snapshot will do to the evolving picture. And often that means a combination of muting channels pre-emptively, going into scene memory to jump over upcoming snapshots, or even physically holding back motorized faders so they don't go to their memorized positions.
I am at this point unsure of the best options. As has been said, sound boards (in theater use) are a decade behind lighting boards in terms of software. There are options -- sneak, park, track among others -- that lighting boards offer that have no equivalent in sound boards. But even those would leave you in the position of having to improvise a complex sequence of commands just to get the following events to happen in the way desired.
Effects playback is a similar problem. This is actually a problem that starts in rehearsal; because rehearsal is all about doing scenes out of order, and doing the same scene (or a single moment in the scene) over and over, you need the ability to quickly change where you are in the playback order.
Add to this, of course, that with increasingly technically ambitious shows and decreasing time on stage in technical rehearsal (because time costs money), that same rehearsal time is as often as not development time as well. So your rehearsal playback mechanism also has to deal with changing the order, volume, or even nature of the cues in question.
I'm finding I do a lot of work during rehearsal with QuickTime. I cover my desktop in multiple instances of the QuickTime player, each holding a completed cue or portion of a cue I am developing. The problem is that hopping between different windows to start, stop, and change the playback position and playback volume isn't as fast, or as smooth, as it could be.
For complex cues in development, I have Cubase open and I play back from there. But this means that in a complex rehearsal I may have to flip between several different applications in order to produce the sounds I want.
I find these workable for trying out ideas quickly and seeing how they play in the space, but less good for trying to replicate a performance over and over again to help the actors in their rehearsal.
Qlab, where the final show is to be installed, is mostly set up in a linear way. You can jump over cues or go back, at least. Qlab also gives you another tool that can be extremely useful; the ability to hot-fire cues by assigning them to a keyboard button.
This can be a bit of a test for the memory, however! A better option is to integrate a MIDI keyboard with Qlab. This, at the moment, is my prime answer to non-linear playback of already developed cues.
First, I assign hot-fire cues to MIDI note numbers. But there is an even better trick; assign a Sound cue to the NoteOn event and loop it, and assign a Stop or Fade cue to the NoteOff event. This turns Qlab into a bare-bones sampler; the sound will continue playing as long as you hold the key down. And you can play as many simultaneous sounds as you have fingers.
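For the curious, here is what is actually going over the wire in that sampler trick. QLab does the listening; this is just a sketch of the raw three-byte NoteOn/NoteOff messages a keyboard sends, plus a hypothetical cue mapping (the note numbers and cue names are made-up examples, not anything QLab requires):

```python
# Raw MIDI channel-voice messages behind the "bare-bones sampler" trick.
# NoteOn fires the looped Sound cue; NoteOff fires the matching Stop/Fade.

NOTE_ON = 0x90   # status nibble for Note On
NOTE_OFF = 0x80  # status nibble for Note Off

def note_on(note: int, velocity: int = 100, channel: int = 0) -> bytes:
    """Three-byte Note On message (channel is 0-based, 0..15)."""
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(note: int, channel: int = 0) -> bytes:
    """Three-byte Note Off message (velocity 0)."""
    return bytes([NOTE_OFF | channel, note & 0x7F, 0])

# Hypothetical mapping: note number -> (cue on NoteOn, cue on NoteOff)
CUE_MAP = {
    60: ("Sound: neigh (looped)", "Fade: neigh out"),   # middle C
    62: ("Sound: whale breath (looped)", "Stop: whale breath"),
}
```

Holding a key down means QLab only ever sees the NoteOn until you let go, which is exactly why the looped cue keeps sounding for as long as your finger stays put.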
In production, I label my MIDI keyboard with tiny strips of white board tape attached to each key. Lately I've been doing this with ideograms; say, a little sketch of a horse head to remind me that this is the key that plays the "neigh!" sound effect.
The second non-linear trick a MIDI keyboard opens up with Qlab is to go into the Preferences pane and select "use remote control." Then you can assign several keys to play the next cue, jump back a cue, or stop all playback. This is MUCH faster than trying to mouse over to the right place on the desktop (AND it works whether Qlab is highlighted or not).
(This is why one of my Golden Grail projects has been a dedicated Qlab playback controller -- a small surface with buttons that spits out the appropriate play-rewind-stop MIDI events to Qlab, and frees the user from having to interact with the keyboard and mouse of a laptop during the run of a show).
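A box like that could also speak MIDI Show Control (MSC), which Qlab can be configured to listen for. As a sketch only: the device ID and the "Sound (General)" command format below are assumptions you would match to your own Qlab MIDI settings, and this minimal message carries no cue-number payload.

```python
# Hedged sketch of the MSC SysEx bytes a dedicated GO-button box could emit.

SYSEX_START, SYSEX_END = 0xF0, 0xF7
UNIVERSAL_RT = 0x7F    # universal real-time SysEx ID
MSC_SUB_ID = 0x02      # sub-ID marking the message as MIDI Show Control
FMT_SOUND = 0x10       # command format: Sound (General Category)

GO, STOP, RESUME = 0x01, 0x02, 0x03   # MSC command numbers

def msc(command: int, device_id: int = 0) -> bytes:
    """Build a minimal MSC message (no cue-number data bytes)."""
    return bytes([SYSEX_START, UNIVERSAL_RT, device_id, MSC_SUB_ID,
                  FMT_SOUND, command, SYSEX_END])
```

Wire a hardware button to send `msc(GO)` and you have the heart of that dedicated playback controller: no laptop keyboard, no mouse, just a big friendly GO.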
For certain kinds of cues nothing beats live performance. For this, I mostly use a shareware sampler called Vsamp; I load in the sound effects and set up the various mappings and velocity curves and so forth. This gives me MIDI keyboard playback with full control over the volume (plus a few other tricks, such as pitch bend for a quick-and-dirty Doppler effect).
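The pitch-bend trick rides on a 14-bit controller value (0..16383, centre 8192) split across two 7-bit data bytes. A sketch of the message the bend wheel sends, which a sampler maps onto a pitch offset for those Doppler-ish swoops (the -1.0..+1.0 convenience scale here is my own, not part of the MIDI spec):

```python
# Constructing a raw MIDI pitch-bend message from a -1.0..+1.0 bend amount.

PITCH_BEND = 0xE0  # status nibble for Pitch Bend Change

def pitch_bend(amount: float, channel: int = 0) -> bytes:
    """amount in -1.0..+1.0; 0.0 means no bend (14-bit centre = 8192)."""
    value = max(0, min(16383, int(round((amount + 1.0) * 8191.5))))
    # least-significant 7 bits first, then the most-significant 7 bits
    return bytes([PITCH_BEND | channel, value & 0x7F, (value >> 7) & 0x7F])
```

Sweeping `amount` smoothly from positive to negative as a cue plays is the whole quick-and-dirty Doppler: pitch drifts up as the source "approaches" and down as it "passes."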
For the show I just closed, I added another live playback trick to my bag. The inside of Monstro the Whale was represented by strips of fabric being manipulated by the actors to look like he was breathing. I ran a microphone through my handy Lexicon-200 effects processor (detune effect) and performed the breathing live.
Actually, I wasn't sure I could do a decent sneeze every night, and the pace at which the kids moved the fabric was about to make me hyperventilate, so I recorded most of my performance and translated that into Qlab hot-fire cues. But I still performed some of it live, and the microphone was there in case the performers got REALLY off and I had to quickly do something very different from what was programmed.
And, well, I was going to talk about noise in sound and music, and the way the search for the good sound and the good effect is as much about finding the right noise as it is finding the harmonic content...but this post is long enough.