I do a choral show a couple times a year and I usually take a recording off the board as a favor to some friends of mine in the chorus. I always caution them that what the board hears is not what the audience hears. I don't tell them that setting up to do the recording, and cleaning up the results, takes quite a bit of time.
The needs of a live show and a recording are largely orthogonal. In a large venue, you can split the feeds (which you are doing anyhow for the sake of Monitor Beach), and everything is mic'd anyhow. In a small space, a good part of the act is acoustic -- which means a lot of the performance never gets close enough to a microphone to be captured by it. And in a small intimate space, you don't want to be sticking microphones in front of everyone and everything.
Well, I just got done working a show in which recording was intended from the start. As it happens, I put a very, very little bit of the vocal microphones into house speakers. But basically the audience was on its own; they were getting the concert through air, not through wires.
The downside is I basically had to do this with my own small assortment of gear.
A PreSonus FireWire interface served as the main A/D converter. That's only 8 channels. Using the ability of the Mac platform's Core Audio to create an Aggregate Device, I added another two channels from the line in. I also set up a second laptop with a USB recorder as back-up, taking 48 kHz/24-bit audio (96/32 kept crashing) from a flying, mix-as-you-go stereo mix-down.
Over a long brunch at a local cafe I made up a channel plan. This was a chorus. I knew they had a keyboard. I planned ahead for bass and drums even though no-one had told me about them. A day before the performance I found out the keyboard wasn't electronic; it would be a baby grand.
Same difference. The rough plot was all stereo busses, sent to the hard disk recorder as five pairs: two chorus mics, two piano mics, a stereo mix-down of the drum mics, and a pair of mics set up at a distance.
During the tech I found out they also had a violinist, and the band was all playing percussion toys for one number. So I added an omnidirectional mic in the middle of the band area, and a violin mic. I also saw a number of places in which one member of the chorus would come forward to do a solo, and, concerned about localization (with them passing between the choral mics) I added a stand mic in the center.
Since I didn't have the spare record channels I mixed the center mic into the stereo bus of the choral mics, and the violin mic into the stereo bus of the piano. I figured if the added signal got too hot, I could always invert one of the channels to sum the center out. Of course, there is no similar trick for boosting the center...!
So already I was being a little foolish and trying to hold on to elements of the original plan that weren't working. After two performances I bit the bullet and re-configured. Since the stereo piano mic'ing wasn't working anyhow I took that down to a single mono channel. That freed up a channel for the violin. And since the drums were hardly playing (and it wasn't that great a kit anyhow) I gave up on stereo drums and also summed that to mono. That gave me a spare channel to isolate the center vocal mic.
Because as the concert progressed and I was doing my stereo mixes, I found there were several places where I needed that center mic to be on its own. So it made sense to pull it out of the bus.
I swapped the direct audio-in for the USB connection and better A/D converters of an Ozone, but the Mac had trouble with that: at the end of that concert I discovered the rear mics had not been captured in the multitrack. So I switched back to the Mac's own input jack. Then a videographer showed up, and they adjusted some of the lights. Final concert: something in that changed setup was now sending dimmer hum into my rear microphones every time the lights changed.
Well, SoundSoap seems to have gotten rid of the worst of the hum (at a noticeable drop in quality however). And now I can finally work on the multi-tracks and see if the choices I finally gave myself are going to let me mix the show the way I want it.
(And, actually, the 10 tracks of recording weren't the worst limitation. I barely had that many microphones that were worth using, and I used up all of my personal cable and most of the house cable to boot just connecting them. Plus that basically maxed out the board -- once you added the stereo house feed, three reverbs, and a monitor feed. I think I had two outputs I wasn't using, in addition to the S/PDIF, which -- as it turns out -- can only be switched on if you are recording from a PC. The Mac version of the driver never got around to including that button -- not on that model, anyhow.)
Thursday, December 22, 2011
MIDI and the Arduino : Part II
Now we start to get complicated. Before we can move on from "Hello (MIDI) World," however, we need to understand a little more about what we are dealing with.
Turn the Way-Back Machine to the early 80's and the rise of synthesizer-based music. Yamaha was riding high on their flagship DX-series FM synthesizers, and Roland was moving in fast with LA synthesis -- the first of the true sample-manipulation synthesizers that now dominate the virtual instrument market. And over several meetings, the leading electronic instrument manufacturers got together to talk about integration; about a single standard to allow machines to communicate with each other.
Dave Smith was the pioneer here. He introduced his fledgling "Musical Instrument Digital Interface" at AES in 1981, and by the 1983 NAMM show he was able to demonstrate a connection between one of his own Prophet synths and a Roland Jupiter-6.
MIDI came out of the speculative academic environment of experimental music; out of a background that was re-thinking the shape of interfaces (such as Don Buchla's unique controllers), the role of the synthesizer in music (from Wendy Carlos to Kraftwerk), even the role of the musician (Brian Eno building on the work of John Cage and others). This heady experimental atmosphere led, I believe, to the construct of MIDI as an open-ended and extensible language.
It would have been so easy to make MIDI dogmatic; to restrict it, for instance, to only describing events within the beat and tempo parlance of Western music. (And, yes; working with microtonal music is not quite as transparent as it could be in MIDI!) But, still, the very simplicity of what the language provides allows it to flexibly encompass all manner of events that might not otherwise be encompassed.
MIDI does not describe sound. It describes events. Although there are additions to the language that can specify timbres more closely, or even load in samples, interpretation of the events is still up to the machine receiving the MIDI message. It is sheet music for computers.
This also makes it extremely flexible for other uses. In my current MIDI-controlled servo project, for instance, noteOn is interpreted as a command to move to a position specified by the note number, at a slew rate specified by the velocity.
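As a taste of what that looks like on the receiving end, here is a minimal sketch of the idea (the pin number, function name, and scaling are my assumptions, not the actual project code, and it ignores the slew rate):

#include <Servo.h>

Servo lever;

void setup()
{
  lever.attach(9); // servo signal pin -- assumed wiring
}

// called for each incoming noteOn; note and velocity are 0-127
void moveTo(int note, int velocity)
{
  int angle = map(note, 0, 127, 0, 180); // note number picks the position
  lever.write(angle); // a fuller version would pace the move using velocity
}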
The MIDI specification -- as it has been amended and expanded over the years by the MIDI consortium while still maintaining full backwards compatibility -- includes, as do most connection protocols, a hardware layer and a software layer.
The heart of the traditional MIDI hardware is the opto-isolator. MIDI gear is designed so the OUT port of one piece of equipment drives an opto-isolator behind the IN or THRU port of the next piece of gear. An opto-isolator is an LED stuck in a black box with a photo-transistor. This isolation means there can never be a ground loop between pieces of MIDI gear*, and any voltage spike will blow out a $2 part instead of the whole synthesizer.
*technically. Standard practice is to ground the shield/pin 2 at the transmitter side, with the other end of the cable ground left unconnected, but not all manufacturers follow suit.
Before you make any electronic connection to a piece of MIDI gear, remember this: the IN port is expecting the voltage to drive an LED (5v TTL; it is expecting 3-5 volts for a "1" and 0-2 volts for a "0"). The OUT port is expecting to see the load of a small LED. So don't connect a motor to a MIDI OUT port and expect it to run for very long before something breaks! (According to the spec, the current in the loop should be 5 mA. Most gear is capable of sinking somewhat more than that.)
The only two pins connected on the 5-pin DIN connector (a connector type standardized by the Deutsches Institut für Normung) are pins 4 and 5. Polarity matters, because like all LEDs the opto-isolator is one-way.
The rest of the physical layer is, and I paraphrase Wikipedia on this: A simplex digital current loop electrical connection sending asynchronous serial communication data at 31,250 bits per second. 8-N-1 format, i.e. one start bit (must be 0), eight data bits, no parity bit and one stop bit (must be 1).
Actually, the take-home here, besides the fact that communication is simplex and asynchronous (aka one-way, with no shared clock signal or other handshake needed), is that the normal state of the system is digital 1 -- which corresponds to no current flow.
When the receiver sees current it calls that a start bit (the 0), and then clocks in an 8-bit word starting from that first edge. Technically, you could sleep the transmitter for quite some time between words, since each byte is framed individually.
As it happens, 8-N-1 is the default of the built-in UART on the AVR at the heart of the Arduino, therefore the serial out on the Arduino naturally sends the correctly formatted signal. All that is necessary to get an Arduino to send a bitstream that will be interpreted as a potential MIDI message is to set the baud rate to 31250 (which is easily done within the Arduino IDE via the "Serial.begin(31250);" init.)
On the receiver side, it is technically possible to connect straight to the Arduino's serial port, but it is strongly not recommended. An opto-isolator presents a lower current drain to the transmitter and protects the Arduino as well. Plus the physical layer assumes you are inverting anyhow. I'll get into how to wire the opto in a bit.
Because of the simplicity of asynchronous simplex it is entirely possible to bitbang an acceptable MIDI message on an AVR without a built-in UART. That's basically what Software Serial does anyhow. I'll be documenting my experiments in bitbanging MIDI from cheap through-hole AVRs in the future, but for now ignore all that stuff about framing bits and parity and just think of MIDI messages as a series of ordinary 8-bit words.
8 bits are a byte. 8 bits are enough to do simple ASCII (not extended ASCII). It also works out to 2 characters in hexadecimal. And reading "0Fh" is a lot easier than reading "00001111b." For technical and historical reasons MIDI is generally documented in terms of hex pairs. If you are going to get down and dirty with MIDI commands, you have to get used to switching back and forth between hex, decimal in both 0-based and 1-based counting, and a bit of binary now and then.
The format of almost all MIDI messages is "Opcode Data (Data) (Data)...."
In the case of the ubiquitous NoteOn message, this works out as:
First byte: "NoteOn for channel 15"
Second byte: "for the F# above Middle C"
Third byte: "with a velocity of 127."
Messages come in all different lengths. The System Exclusive message, for instance, can be multiple megabytes of data -- as long as it is bracketed by "System Exclusive Begin" and "System Exclusive End" bytes (manufacturers often include a check-sum in the payload as well).
System Exclusive, by the way, is one of the open message formats that made MIDI so readily expandable. (Another major one was NRPNs).
Just a reminder here: when we are working with a nice friendly platform like the Arduino, the noteOn message above translates to three consecutive one-byte integers. Think of it as:
Serial.print(159, BYTE);
Serial.print(66, BYTE);
Serial.print(127, BYTE);
And, yes, you can do exactly this and it will be interpreted as a legitimate MIDI message. The only reason to play around with variables is to give yourself the flexibility to actually compose useful messages.
MIDI is a channeled system. Although some of the opcodes are global (meaning they are addressed to all devices on the network), most messages are prefixed with the channel number. This allows multi-timbral performance off a single MIDI stream; channel 10 might be handling drum sounds, channel 1 a piano sound, channel 2 a bass, etc.
In the example code I gave in Part I, I arrived at the correct channel-designated opcode by adding the raw opcode (144 for noteOn) to the channel number. A different way of looking at this, however, is that the 16 possible channels are expressed using the last four bits of the first byte sent. The top four bits are the opcode.
Thus, a noteOn for channel 15 is the hex pair 9Fh -- 10011111 in binary, or 159 in decimal.
This is why thinking in hex pairs can simplify your work when you are dealing with channel messages; just remember the channel is the low nibble, and the opcode is the high nibble.
In the case of data bytes, the high bit is unused. This limits the possible values to 0-127.
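To make the nibble arithmetic concrete, here is a minimal sketch (values chosen to match the example above):

byte channel = 15;             // 0-15 lives in the low nibble
byte status  = 0x90 | channel; // opcode 9h in the high nibble: 0x9F = 159
byte note    = 66 & 0x7F;      // masking shows why data bytes stop at 127

The bitwise OR is doing the same job as the (144 + channel) addition from Part I; it just makes the two nibbles explicit.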
Here are the opcodes in binary format as listed by the MIDI Consortium: MIDI Messages. As I said, if you are going to work with MIDI in the raw form, expect to be going back and forth between hex, binary, and decimal a bunch!
Here would be a nice place to point out that not all devices interpret (or send) all MIDI messages. If you have your hands on a manual, there will be a couple of sheets in the back, around the index -- the MIDI Implementation Chart -- that specify which messages that device sends and recognizes.
So now that you know how to make a connection to the physical layer, and you've seen how to format a message in the software layer, it should be fairly straightforward to write software routines that send useful signals.
In one of my recent projects I needed to compose just a simple "Go" command in MSC (MIDI Show Control, an expansion of the basic MIDI 1.0 spec for use in theatrical applications). Instead of writing something elegant in software I just wrote a stack of Serial.print commands, each containing the actual binary needed;
Serial.print(0xF0, BYTE); // 11110000b -- an MSC "Go" is a System Exclusive message, so it starts with F0h
etc.
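For reference, here is a minimal sketch of the whole stack for an MSC "Go" (assuming the "all call" device ID 7Fh and the "all types" command format 7Fh -- a real rig would use its configured IDs; the pre-1.0 Serial.print(x, BYTE) form matches the era of this code):

Serial.print(0xF0, BYTE); // start of System Exclusive
Serial.print(0x7F, BYTE); // universal real-time SysEx
Serial.print(0x7F, BYTE); // device ID: 7Fh = all devices (assumed)
Serial.print(0x02, BYTE); // sub-ID: MIDI Show Control
Serial.print(0x7F, BYTE); // command format: 7Fh = all types (assumed)
Serial.print(0x01, BYTE); // command: GO
Serial.print(0xF7, BYTE); // end of System Exclusive

With no cue number between the GO and the F7h, the receiver takes the next cue in its list.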
For the four-button remote control MIDI device I was using for a time, the note number was determined by a scan of the open buttons, using a simple matrix;
const int inp[4] = {2, 3, 4, 5}; // input pins -- assumed wiring
int pressed[4];

for (int button = 0; button < 4; button++) // four buttons, not five
{
  pressed[button] = digitalRead(inp[button]);
  if (pressed[button] == HIGH)
  {
    noteOn(channel, button + 60, velocity);
  }
}
That code is tidied up from what actually ran; it compiles, but there's no debouncing, so don't use it as-is.
Another necessary trick was handling polyphony. What I did was add a flag variable, sounding[button]. Whenever a noteOn was sent, I'd set the flag so I knew I'd already sent a note event for that button, rather than re-sending every time the loop polled the button status.
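A minimal sketch of that flag logic (pin array and note mapping as assumed above):

int sounding[4] = {0, 0, 0, 0}; // one flag per button

for (int button = 0; button < 4; button++)
{
  int pressed = digitalRead(inp[button]);
  if (pressed == HIGH && !sounding[button])
  {
    noteOn(channel, button + 60, velocity); // send once per press
    sounding[button] = 1;
  }
  else if (pressed == LOW && sounding[button])
  {
    noteOff(channel, button + 60, 0);       // release when the button opens
    sounding[button] = 0;
  }
}

The noteOff half is my addition -- without it the notes would sound forever -- but the flag is doing the same job either way: remembering what has already been sent.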
And that's long enough for this entry, except I want to make note of another peculiarity of MIDI messages. And that is Running Status.
Running status means that if you have sent the opcode for a noteOn event, you can follow it with more than one data pair and they will be interpreted as additional notes. This is extremely useful for continuous controller messages, which otherwise would be rather longer.
The format here is:
Status byte (noteOn, channel n)
data 1 (note)
data 2 (velocity)
data 1 (another note)
data 2 (the velocity for that second note)
etc., etc.
The receiving device will continue to accept these as noteOn events until it sees a new opcode.
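Here is a minimal sketch of what that stream looks like from the Arduino side -- one status byte, then three notes' worth of data pairs (channel and notes are arbitrary examples):

Serial.print(0x90, BYTE); // status byte: noteOn, channel 0 -- sent once
Serial.print(60, BYTE);   // data 1: middle C
Serial.print(100, BYTE);  // data 2: velocity
Serial.print(64, BYTE);   // data 1: the E above (status byte re-used)
Serial.print(100, BYTE);  // data 2: velocity
Serial.print(67, BYTE);   // data 1: the G above that
Serial.print(100, BYTE);  // data 2: velocity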
And now it becomes clear why all opcodes have a leading "1," whereas all data bytes have a leading "0"!
Aphorisms Again
Do not denigrate the benefit of looking like you know what you are doing.
First off, show business is a stressful business. The actors and musicians, not to mention the directors and producers, have a lot on their shoulders. So don't add to their tension by making them doubt the sound will work. Keep your doubts to yourself. You know the solutions you came up with are compromises. But the actor doesn't need to be thinking about that when they are on stage.
Of course looking like you know what you are doing is nice from a business standpoint as well.
Looking like you know what you are doing is nice for the audience, too. It makes them feel as if they are in good hands, and will be presented with a worthwhile night's entertainment. That is, assuming you deliver! Otherwise you risk having them think to themselves, "All of this equipment, and the sound still sucks."
Don't underestimate the placebo effect. Sometimes an audience member may wonder if they are hearing the third guitar. And they'll glance at the microphone set up in front of it and decide that they must be. What they don't know is that the guitar sounded horrible or the mic blew a gasket or for whatever other reason it isn't connected to the sound system. But you leave it up there because it looks cool, and it convinces the guitar player and his friend in the audience that you are serving him.
Don't get me wrong. I'm not advocating dishonesty. But in most cases, expressing more confidence than you have is a good thing. We are engineers, after all. Engineers know how many things can go wrong. But from a statistical standpoint, the sound will probably work. So you burden yourself with the fears of all those things you know about and no-one else does. And let the performers get on with the task of worrying about the things they know about.
Escape the temptation of perfect optimization.
Very few problems ever have a perfectly optimal solution. You can go crazy -- and you can waste a whole lot of time -- weighing too-similar alternatives. Just pick one and commit to it. Any loss you suffer because the alternative actually was slightly better will be, in most cases, completely offset by the time you saved by going ahead anyhow.
No plan survives first contact with the enemy. But the right plan will keep you alive long enough to come up with a better one.
Recognize that what you intended to do is not what you will finish up with. When you finally see the real set, when you finally hear the actual band, you may realize a lot of your work was unnecessary. So be it. It is foolish to try to hold on to work that isn't needed anymore, and even more foolish to search for justifications of why you still need it. It is also, more subtly, foolish to blame yourself for wasting time. If you front-loaded the work as much as possible, you wasted that time when there was more of it available to waste. And, besides -- you learn as much if not more from what didn't work as you learn from what did.
The trick is to stay flexible, to not get too emotionally attached to what you thought you were going to do. And to design in such a way so you will have that flexibility when you need it.
This happens through doing many things, large and small. Document, and self-document; when you have to make a quick improvisation, it helps immensely if you can figure out what the existing system does. If you need to grab a cable quick to use somewhere else, you really, really want to know you are grabbing the right one; the one you no longer need.
Piling everything up in one huge mass with no labels and no organization and no documentation sets you up for failure in every case but the rare one when the event actually does unfold exactly the way you expected it to.
Have spares. When you are under the stage, run a second cable just in case. When you are running a power line, break it out with a strip just in case you need to plug something else in there. When you run out cable, dress the slack at the business end, just in case you have to move the microphone. Cover your bases, and anticipate having to make changes.
You can't use the air above you, the runway behind you, or the fuel that's back in the truck.
Bring all the gear to the gig. It's always the one piece of gear you were sure you didn't need, that you do.
Leave a spare snake cable. Test the mics before the curtain opens. Load in the night prior if they give you a night prior; don't count on getting it all done at the last minute.
You get the picture.
Tuesday, December 20, 2011
MIDI and the Arduino : Part I
I am intending to do a series of in-depth posts about how you can do MIDI on your Arduino or AVR. It may take me a while, though; I have a show up this week and four more already in meetings.
This post, then, is about getting to "Hello, World."
To be precise, it is about basic output from Arduino via the MIDI hardware level.
First the hardware. I can not do better than this schematic from the Arduino Playground itself.
The connector is a five-pin DIN. In the MIDI world, unlike most audio cables, all jacks are female and all cables are male. Even though it is a five-pin connector, only three of the pins are actually used.
If your local electronics supplier does not have them, and you don't want to order online, you can always cut a short MIDI cable in two and make a tail from it.
The 220 ohm resistor (color band code red-red-brown) is the only passive component you need. Actually, you can do without it, but it is a lot safer with it (the risk is blowing the opto-isolator in the MIDI gear you are connecting to).
And that's the physical layer.
On the software side, MIDI Library 3.1 is at SourceForge, with a small tutorial at the official Arduino website.
Of course I wrote my own, much simpler library. All I needed was output. I'm going to try to comment my way through an example here with as little actual explanation as possible:
int channel = 0;
int note = 60; // these set up default values for the later function
int velocity = 100; // in case the code doesn't specify any
void setup()
{
Serial.begin(31250); /* this is a key line; we are using the Arduino's serial port, but it has to be set to the baud rate of MIDI in order for ordinary MIDI devices to see it. */
}
void loop()
{
noteOn(channel, note, velocity); //here's the function call
delay(500);
noteOff(channel, note, velocity);
delay(2000); /* the way I set this up, there is no test condition for the noteOn and noteOff events; once the program has initiated it cycles endlessly, sending out a note with the default values every few seconds. */
}
void noteOn(char channel, char note, char velocity)
{
Serial.print(channel + 144, BYTE);
Serial.print(note, BYTE);
Serial.print(velocity, BYTE);
}
/* okay, this takes a little explanation. First, the "char" and "BYTE" nonsense is my (sloppy) way of forcing type conversion. It's a C thing; if you specify the kind of integer inside a statement like this, it creates and uses an image of that integer that is truncated or otherwise fit into the specified type. Weird stuff can happen with type forcing, though. The advantage here is that I'm able to state the note number and velocity the way most MIDI players will present them; as a number from 0 to 127 (middle C is 60). When you get into more complex commands it often helps to think of them in hexadecimal pairs (which is how they are presented in the index of most MIDI hardware).
The (channel + 144) has to do with how MIDI messages are constructed. The first byte sent is always a command (or part of a command). Since noteOn messages are not global, they are sent individually to any of the 16 possible channels given by the MIDI specification. Thus "144" indicates "noteOn event for channel 0," and "159" would indicate "noteOn event for channel 15."
(Yes...within the actual data stream we count from 0. Most of the front panel of your MIDI device will count from 1, however, thus channels 1 through 16 instead of channels 0 through 15.)
A noteOn message creates an expectation by the receiver that it will be followed by two more bytes of information, specifying the note number and the velocity. */
void noteOff(char channel, char note, char velocity)
{
Serial.print(channel + 128, BYTE);
Serial.print(note, BYTE);
Serial.print(velocity, BYTE);
}
/* This function stops the note. In MIDI, unless the device is a drum machine or something else that plays a single sample and stops, the note requested will continue to play until the device is asked to stop that note -- or until an All Notes Off message (a channel-mode message) is received.
There are two ways legitimized in the MIDI spec for stopping a note. One is to send a noteOff command for the same channel and note number. Release (cut-off) velocity is interpreted by some MIDI instruments, but most of them ignore it. The other way is to send a noteOn event with a velocity of 0.
This latter can really mess you up when you are using a keyboard to trigger an external effect, by the way! You have to insert a filter to trap all "noteOn" events with velocities of 0 so they don't cause false triggers. */
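A minimal sketch of that trap on the receiving side (the handler names are hypothetical placeholders, not a real library):

// statusByte, noteNumber, and velocity parsed from the incoming stream
if ((statusByte & 0xF0) == 0x90) // a noteOn on some channel
{
  if (velocity == 0)
  {
    handleNoteOff(noteNumber);          // velocity 0: really a noteOff
  }
  else
  {
    handleNoteOn(noteNumber, velocity); // a genuine trigger
  }
}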
And that's it. The above was hand-typed, and I can't guarantee it against typos.
In practice, of course, your software will be testing some condition -- a sensor trip or button press -- and sending out various kinds of MIDI signals with the defaults replaced by, say, a different note number for each button pressed or for the intensity of light falling on a photocell or whatever.
But I need to say something even more basic here. This is, as I stated in my Thompsonian "To be precise" above, MIDI output using the hardware standard.
That is, this is how to send a signal from an Arduino that will go down a MIDI cable, into the MIDI IN connector on a MIDI device, and be interpreted as a command.
Which is to say; the above will play a note on a keyboard or a drum machine, can be used to trigger a cue in Qlab (given a MIDI input connected to the host computer), and may do strange things with a lighting console or sound board or DJ laser display that has a MIDI input jack.
And lastly, this is not the simplest option. The simplest option is a MIDIsense board from Lady Ada, or a MIDI shield from SparkFun Electronics. Both of these options come with tested libraries in addition to the complete hardware. Lady Ada's framework has the advantage of being able to adjust via software on a host computer the sensitivity and response curve of sensors you attach to the board. Plus it even includes a 9V battery compartment for on-the-road use! The SparkFun board is cheaper and faster to set up, and includes buttons and pots to mess around with. Both, of course, include the opto-isolator of a MIDI-in port -- which is the subject for a later entry here.
Part II is here:
Location, Location, Location
I've got a couple of challenging shows coming up that are making me rethink the old "A B" paradigm of vocal reinforcement. And that has also led me to review some of what I think I know about speaker and microphone placement.
First off, let's reiterate: location matters. With microphones, if you put the right mic in the right place, your mix is almost done. The wrong mic in the wrong place? Your mix is all but done for. In reality, of course, the position you wanted is occupied by a music stand, or the mic just isn't in inventory. And in the middle of the show the bass forgets to plug in, the tripod starts to droop, and the conductor kicks out a cable while striding to the podium. And you end up having to do horrible, horrible things with EQ just to try to eke some semblance of sound from the wrong mic that's in the wrong place.
Oh, and the one caveat is drums. With most instruments, if you picked the right mic and placed it right, it will sound good when you fire it up. You may do a little gentle EQ to taste -- or more if you are having trouble seating it in the mix. Drums are among the exceptions; there, the expected sound is an artificial construct made of very close mics with savage EQ and all sorts of funny processing (companding helps a kick a LOT). Of course, you can also get a wonderful drum sound with a single overhead, or a distant pair. The book trick is to have both mics of your pair equidistant from the snare, as that is the loudest mid-range element and the one where phase cancellation will show up the most.
In the case of wireless microphones, the forehead at the hairline gives a natural sound (slightly thin and distant, but very real-sounding). The cheek position, from the ear all the way to the corner of the mouth, requires drastic EQ to sound good; it also picks up a lot more mouth noise, breath noise, and handling noise. The lapel position is almost the worst of them. Far enough down, it will be fairly natural (with a huge shadow in the EQ from the cavity under the chin), but it also shifts level with every head movement. The higher the lapel position gets, the worse it is; those women who show up in turtlenecks or high-collar blouses and try to clip to the neck line demonstrate just what happens to a poor microphone when it goes deep within the shadow of the chin. It sounds a bit like the speaker is inside a 55-gallon plastic storage drum.
The "right position" and the "right mic," of course, depends on the style of music, the style of the performer, and the needs of the mix -- whether, for instance, you are reinforcing a live band, or whether you are trying to do a recording session.
Classical violin, for instance, is best mic'd looking down from several feet above the face of the instrument. The same instrument played as a folk fiddle is mic'd much closer. And you may choose to go more over the bow for more "hair" in the sound, or more over the bridge for a more natural tone. All of these are sculpting decisions you make on the basis of what the musician sounds like, what the needs of the environment are, and what you have in your kit that day.
I'm about to mic a baby grand myself. I'm doing it primarily for recording, but it is before a live audience, and that introduces constraints. It is set up right beside a drummer, so that is an additional (large!) constraint. I am also a little unsure of the sound I want just yet; the group is oriented towards classical gospel and jazzy choral arrangements, but what I heard in rehearsal from the piano was more straight-up classical piano. But with a very light hand. I look forward to seeing where she goes when she's behind the wheel of a baby grand (it could be very, very different from what I heard in rehearsal).
I'm also constrained on channels, and even more on available microphones. So I'm thinking strongly of trying a pair of small-diaphragm condensers fairly tight in (I'm assuming I'll at least get the lid on long stick -- short stick will make this even harder): an AT Pro37 on the right hand, about 6" back from the hammers and tipped towards them as needed, and a Shure PG81 (essentially an SM81 at a cheaper price) over the bass strings, probably right at the cross, tipped to almost 45 degrees towards the front of the piano. It's a variation on a scheme I've used before with some success.
I'll also get a fair amount of piano bleed in the omni condenser I'm sticking in the middle of the orchestra. And of course I have an ambient pair set up out in the audience -- a pair of old Oktava MK-012's is all I have available, but at least there's a cute little ORTF bar to stick them on.
* * * *
Back from the gig. The piano mics didn't work as well as I'd hoped. It is a 5' baby grand with the lid on short stick. Not a lot of room to get in there, and the drums are right beside it. The piano sounded okay in what came through the leakage of the choral mics, though, so it isn't exactly critical to mic it for this show.
On the other hand the MK-012's on ORTF bar, up a full 12' on the sturdiest tripod I had, were very nice.
But back to location. The purpose of this blog entry is to talk about speaker location.
Speaker location for theater has two goals, goals which are largely orthogonal. The first is the "flat field"; bringing music and vocals to every member of the audience at acceptable volume and clarity. Since as FOH mixer you are basically stuck in one spot through the show, it helps the audience a great deal if most of the seats are hearing the same thing you are hearing. So you are trying very hard not to have the seats on the left hear more brass, the seats in the middle front hear more high end, and the seats in the rear of the house hear everything far too soft.
The other is placement of sounds -- and for special effects particularly, what Walter Murch coined as "worldizing."
Take this last. To get a sound effect that sounds like it is coming from the hallway, put a speaker in the hallway. And even if you are recording: if you want a sound to sound like it happened in a bathroom, record or re-record it in a bathroom. The aim is to capture those subtle interactions that shape the perception of a space. And in a theatrical setting, the subtle cues of a sound bouncing around and filtering out of an actual space on the stage will help make the sound believable.
Here's a simple example. Want a sound outside the windows? Don't stick a speaker facing the audience. Stick it on its back below the windows. The sound will bounce around and filter into the space.
The placement trick that started this essay, though, is in regards to vocal reinforcement.
I've had it work very well. For a production of "Master Class" I had the actress on a wireless mic for the memory sequences, and sent that to the house speakers. The sounds she remembered, of herself singing at la Scala, were played back from a pair of speakers in the wings aiming at and bouncing off a full-stage rear projection screen showing scenes from the opera house. The result was both placement, and extremely good isolation; the physical distance and the difference in sound qualities made it easy for the ear to focus on the speaker even as the singer was going all-out like only Callas can go.
The arrangement at many Broadway houses is an A-B system (or, most often, A-B-C). In the case of the pure A-B, the intention is to reduce the flanging effect of two open wireless microphones in close proximity. Thus each mic is routed to a completely different signal chain (bus, amplifier, speaker). As the show progresses actors are rotated into whichever group provides the fewest encounters.
What we do in smaller theaters is just the "C" part; the signal chains of orchestra and singer are different. Often, the orchestra is sent to a proscenium pair that provides some semblance of stereo imaging. The vocals are sent to a center cluster in mono. Again, the physical separation leverages the human ability to focus on one sound to the exclusion of competing sounds, as long as there are sufficient cues -- in frequency, time, or spatialization -- to allow it to do so.
But the situation in a small house is not that simple. In a small house, there is direct acoustic energy from the stage. In one way this is your friend: with a little digital delay inserted into your reinforcement chain, the Haas Effect (also known as the Precedence Effect) helps the listener localize the singer based on the first sound wave to hit them -- the direct acoustic energy from the singer's mouth. If the system is set just right, you can get a good 10 dB of gain on that singer without it even being perceptible that they have a microphone (the human brain is very good at masking this sort of reinforcement from conscious perception).
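To put rough numbers on that (mine, for illustration): sound travels about 1.1 feet per millisecond. If a seat is 30 feet from the singer but only 20 feet from the proscenium speaker, the speaker's output arrives roughly 9 ms before the direct sound. Delay the reinforcement feed by that 9 ms, plus another 5-10 ms of safety margin, and the first arrival at that seat is once again the singer's own voice -- which is exactly the cue the Precedence Effect needs.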
The flip side is that the orchestra is also putting out direct sound -- enough to spoil any attempt at localizing them in the house speakers. In fact, the band is putting out so much sound all by themselves that you often have only a few bits and pieces in those speakers. And it gets worse if you try to ride the fader a little; all you do is change the levels of two or three instruments, throwing off the mix and the spatial placement at the same time.
Plus, there's monitors. So another whole hunk of the volume in the audience is reflected sound from monitors, and you can't turn those down without hurting the singers and dancers ability to follow the music.
So, in the small house, your technically perfect (A/B)-C system ends up being different for every seat, and different from loud song to soft song as well.
So here's what I'm going to try over the next couple of shows. The first test case is a young cast and the band is just a small combo on stage. I'm going to eschew any band to the mains at all. I'm going to run a full band mix to monitors, but even for what would be the softer instruments -- such as electronic keyboard -- I'll provide them with band monitors so the sound level of each instrument is equal in the pit as well.
My aim is that the primary source will be either direct acoustics or the band's own monitors. Aka the orchestra itself will be the perceived source. Then I'll boost that subtly for the actors. If there is a wide dynamic range in the performance (soft solos versus large chorus numbers) I'll ride the monitor level by ear so as to raise the monitors for the benefit of the cast during the louder numbers.
The only thing the audience will hear is leakage. And there will be a lot of that, so I'm not worried on that account!
In the meantime I'll create a contoured reinforcement field using all available speakers; a mono system that is stronger towards the stage and tapers off pseudo-acoustically towards the rear of the house. With luck and tweaking I should be able to make the taper of this system similar to the taper of the band sound, thus maintaining the same mix ratio for all seats.
In the show following that, a 180 of sorts. It is going to be a loud show, rock oriented, with a semi-covered pit. I'm going to mic the orchestra and run it and the vocals hot. As a single mix; as if a rock song (the vocals will be panned center, of course.)* As much as is possible I'll run the reinforcement hot enough so it becomes a flat field out to the back of the house. The only downside to this is it will remove all localization cues from the actors themselves except for people sitting in the very front rows. Anywhere else, the actors will be heard almost entirely artificially, over the sound system. Basically I'm going to treat the place as if it was a 6,000 seat house and there was no direct stage sound.
* Stereo is a tough concept in typical theater settings. Except for a narrow aisle down the middle, most of the audience will be seated closer to one speaker than the other. Many will be so far on one side of the proscenium or the other the far speaker barely reaches them. So a hard-panned instrument or effect will be loud in one side of the audience, and unheard by the other. You can get away with this in effects, but it is murder on a mix. Imagine, if you will, half the audience hearing only the flutes and violins, the other half hearing only the brass and the cellos. Or one half hearing the right hand of the piano and the other half hearing the left.
Usually you reduce the stereo image; you avoid hard panning. If, on the other hand, you can score up a center cluster, then almost all of the audience is restored to hearing from two speakers...it is just that one will have the whole mix, the other will have only half of it. And, again, half your audience hears a different band than the other half.
The temptation arises to pan a singer as they walk across stage. Trouble is, you are reinforcing them. That is; you've already decided their voice isn't loud enough for the audience. So by choosing to pan, you are adding more volume to the audience that already had it, and reducing volume to the half of the audience that's already further away from the singer. Not good.
First off, let's re-iterate; location matters. With microphones, if you put the right mic in the right place your mix will be almost done. The wrong mic in the wrong place? Your mix is all but done for. In reality, of course, the position you wanted is occupied by a music stand, or the mic just isn't in inventory. And in the middle of the show the bass forgets to plug in, the tripod starts to droop, and the conductor kicks out a cable while striding to the podium. And you end up having to do horrible, horrible things with EQ just to try to eke some semblance of sound out of the wrong mic that's in the wrong place.
Oh, and the one caveat is drums. With most instruments, if you pick the right mic and place it right, it will sound good when you fire it up. You may do a little gentle EQ to taste -- or more if you are having trouble seating it in the mix. Drums are among the exceptions; there, the expected sound is an artificial construct made of very close mics with savage EQ and all sorts of funny processing (companding helps a kick a LOT). Of course, you can get a wonderful drum sound with a single overhead, or a distant pair. The book trick is to have both mics of the pair equidistant from the snare, as that is the loudest mid-range element and the one where phase cancellation will show up the most.
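If you want the arithmetic behind that book trick, here's a quick back-of-envelope in Python (my own toy illustration, with invented numbers): a path-length difference between the two overheads puts the first cancellation null at c/(2d) when the signals sum.

```python
# Back-of-envelope sketch (not from any book) of the equidistant-overheads
# rule: if the overheads sit at different distances from the snare, summing
# them (e.g., collapsing to mono) puts the first comb-filter null at c/(2d).

SPEED_OF_SOUND_M_PER_S = 343.0

def first_null_hz(path_difference_m):
    """First phase-cancellation null when the two overhead signals are summed."""
    return SPEED_OF_SOUND_M_PER_S / (2.0 * path_difference_m)

# A mere 15 cm mismatch in the overheads' distances to the snare:
print(round(first_null_hz(0.15)))  # ~1143 Hz -- square in the snare's midrange
```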
In the case of wireless microphones, the forehead position at the hairline gives a natural sound (slightly thin and distant, but very real sounding). The cheek position, from the ear all the way to the corner of the mouth, requires drastic EQ to sound good. It also picks up a lot more mouth noise, breath noise, and handling noise. The lapel position is almost the worst of them. Far enough down, it will be fairly natural (with a huge shadow in the EQ from the cavity under the chin), but it also shifts level with every head movement. The higher the lapel position gets, the worse it is; those women who show up in turtlenecks or high-collar blouses and try to clip to the neckline demonstrate just what happens to a poor microphone when it goes deep within the shadow of the chin. It sounds a bit like the speaker is inside a 55-gallon plastic storage drum.
The "right position" and the "right mic," of course, depends on the style of music, the style of the performer, and the needs of the mix -- whether, for instance, you are reinforcing a live band, or whether you are trying to do a recording session.
Classical violin, for instance, is best mic'd looking down from several feet above the face of the instrument. The same instrument played as a folk fiddle is mic'd much closer. And you may choose to go more over the bow for more "hair" in the sound, or more over the bridge for a more natural tone. All of these are sculpting decisions you make on the basis of what the musician sounds like, what the needs of the environment are, and what you have in your kit that day.
I'm about to mic a baby grand myself. I'm doing it primarily for recording, but it is before a live audience, and that introduces constraints. It is set up right beside a drummer, so that is an additional (large!) constraint. I am also a little unsure of the sound I want just yet; the group is oriented towards classical gospel and jazzy choral arrangements, but what I heard in rehearsal from the piano was more straight-up classical playing, with a very light hand. I look forward to seeing where she goes when she's behind the wheel of a baby grand (it could be very, very different from what I heard in rehearsal).
I'm also constrained on channels, and even more on available microphones. So I'm thinking strongly of trying a pair of small diaphragm condensers fairly tight in (I'm assuming I'll at least get the lid on long stick -- short stick will make this even harder). AT Pro37 on the right hand, about 6" back from the hammers and tipped towards the hammers as needed, and Shure PG81 (it's an SM81 at a cheaper price) over the bass strings, probably right at the cross, and tipped to almost 45 degrees towards the front of the piano. It's a variation on a scheme I've used before with some success.
I'll also get a fair amount of piano bleed in the omni condenser I'm sticking in the middle of the orchestra. And of course I have an ambient pair set up out in the audience -- a pair of old Oktava MK-012s is all I have available, but at least there's a cute little ORTF bar to stick them on.
* * * *
Back from the gig. The piano mics didn't work as well as I'd hoped. It is a 5' baby grand with the lid on short stick. Not a lot of room to get in there, and the drums are right beside it. The piano sounded okay in what came through the leakage of the choral mics, though, so it isn't exactly critical to mic it for this show.
On the other hand, the MK-012s on the ORTF bar, up a full 12' on the sturdiest tripod I had, were very nice.
But back to location. The purpose of this blog entry is to talk about speaker location.
Speaker location for theater has two goals, goals which are largely orthogonal. The first is the "flat field"; bringing music and vocals to every member of the audience at acceptable volume and clarity. Since as FOH mixer you are basically stuck in one spot through the show, it helps the audience a great deal if most of the seats are hearing the same thing you are hearing. So you are trying very hard not to have the seats on the left hear more brass, the seats in the middle front hear more high end, and the seats in the rear of the house hear everything far too soft.
The other is the placement of sounds -- and, particularly for special effects, what Walter Murch coined "Worldizing."
Take this last. To get a sound effect that sounds like it is coming from the hallway, put a speaker in the hallway. And even if you are recording: if you want a sound to sound like it happened in a bathroom, record or re-record it in a bathroom. The aim is to capture those subtle interactions that shape the perception of a space. In a theatrical setting, the subtle cues of a sound bouncing around and filtering out of an actual space on the stage will help make that sound believable.
Here's a simple example. Want a sound outside the windows? Don't stick a speaker facing the audience. Stick it on its back below the windows. The sound will bounce around and filter into the space.
The placement trick that started this essay, though, has to do with vocal reinforcement.
I've had it work very well. For a production of "Master Class" I had the actress on a wireless mic for the memory sequences, and sent that to the house speakers. The sounds she remembered, of herself singing at La Scala, were played back from a pair of speakers in the wings aiming at and bouncing off a full-stage rear projection screen showing scenes from the opera house. The result was both placement, and extremely good isolation; the physical distance and the difference in sound qualities made it easy for the ear to focus on the spoken voice even as the singer was going all-out like only Callas can go.
The arrangement at many Broadway houses is an A-B system (or, most often, A-B-C). In the case of the pure A-B, the intention is to reduce the flanging effect of two open wireless microphones in close proximity. Thus each mic is routed to a completely different signal chain (bus, amplifier, speaker). As the show progresses actors are rotated into whichever group provides the fewest encounters.
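For the algorithmically inclined, that rotation is essentially a greedy grouping problem. Here's a toy Python sketch of the idea (entirely illustrative -- the function and the data are made up, and real A-B lists are built from knowing the blocking, not from a script):

```python
# Toy greedy version of the A-B idea: put each actor on whichever bus
# currently gives the fewest on-stage encounters with actors already
# assigned to that same bus. Purely a sketch, not anyone's real practice.

from collections import defaultdict

def assign_buses(scenes):
    """scenes: list of sets of actor names who share the stage."""
    together = defaultdict(int)          # (actor, actor) -> scenes shared
    actors = set()
    for scene in scenes:
        actors |= scene
        for a in scene:
            for b in scene:
                if a != b:
                    together[(a, b)] += 1
    bus = {}
    for actor in sorted(actors):
        # Cost of each bus = encounters with actors already assigned to it
        cost = {g: sum(together[(actor, other)]
                       for other, grp in bus.items() if grp == g)
                for g in ("A", "B")}
        bus[actor] = min(cost, key=cost.get)
    return bus

print(assign_buses([{"Tony", "Maria"}, {"Tony", "Riff"}, {"Maria", "Anita"}]))
```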
What we do in smaller theaters is just the "C" part; the signal chains of orchestra and singers are different. Often, the orchestra is sent to a proscenium pair that provides some semblance of stereo imaging. The vocals are sent to a center cluster in mono. Again, the physical separation leverages the human ability to focus on one sound to the exclusion of competing sounds, as long as there are sufficient cues to allow it to do so -- in frequency, time, or spatialization.
But the situation in a small house is not that simple. In a small house, there is direct acoustic energy from the stage. In one way this is your friend; with a little digital delay inserted into your reinforcement chain, the Haas Effect (also known as the Precedence Effect) helps the listener localize the singer based on the first sound wave to hit them: the direct acoustic energy from the singer's mouth. If the system is set just right, you can get a good 10 dB of gain on that singer without it even being perceptible that they have a microphone (the human brain is very good at masking this sort of reinforcement from conscious perception).
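To put rough numbers on that, here's a minimal Python sketch of the delay math (the distances are invented, and real settings get dialed in by ear): hold the speaker feed back until the singer's own wavefront has won the race to the seat, plus a few milliseconds of precedence margin.

```python
# Minimal sketch of the Haas delay arithmetic, assuming straight-line
# distances and a speed of sound of about 343 m/s (~1.13 ft per millisecond).

FEET_PER_MS = 1.13  # speed of sound in feet per millisecond

def haas_delay_ms(singer_to_seat_ft, speaker_to_seat_ft, margin_ms=10.0):
    """Delay to insert in the reinforcement chain for a given seat."""
    direct_ms = singer_to_seat_ft / FEET_PER_MS    # stage sound's travel time
    speaker_ms = speaker_to_seat_ft / FEET_PER_MS  # speaker sound's travel time
    # If the speaker is closer than the singer, make up the difference,
    # then add the margin so the ear still votes for the stage.
    return max(0.0, direct_ms - speaker_ms) + margin_ms

# Example: singer 40 ft from a mid-house seat, proscenium speaker 25 ft away.
print(round(haas_delay_ms(40, 25), 1))  # about 23.3 ms
```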
The flip side is that the orchestra is also putting out direct sound, and enough to spoil any attempt to localize them in the house speakers. In fact, the band is putting out so much sound all by themselves you often have only a few bits and pieces in those speakers. And it gets worse if you try to ride the fader a little; all you do is change the levels of two or three instruments, throwing off the mix and the spatial placement all at the same time.
Plus, there's monitors. So another whole hunk of the volume in the audience is reflected sound from monitors, and you can't turn those down without hurting the singers' and dancers' ability to follow the music.
So, in the small house, your technically perfect (A/B)-C system ends up being different for every seat, and different from loud song to soft song as well.
So here's what I'm going to try over the next couple of shows. The first test case is a young cast, and the band is just a small combo on stage. I'm going to eschew sending any band to the mains at all. Instead I'll run a full band mix to the monitors; even the softer instruments -- such as electronic keyboard -- will get band monitors, so the sound level of each instrument is equal in the pit as well.
My aim is that the primary source will be either direct acoustics or the band's own monitors. That is, the orchestra itself will be the perceived source. Then I'll boost that subtly for the actors. If there is a wide dynamic range in the performance (soft solos versus large chorus numbers), I'll ride the monitor level by ear, raising the monitors for the benefit of the cast during the louder numbers.
The only thing the audience will hear is leakage. And there will be a lot of that, so I'm not worried on that account!
In the meantime I'll create a contoured reinforcement field using all available speakers; a mono system that is stronger towards the stage and tapers off pseudo-acoustically towards the rear of the house. With luck and tweaking I should be able to make the taper of this system similar to the taper of the band sound, thus maintaining the same mix ratio for all seats.
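For a starting point on those zone trims, a rough inverse-square estimate is enough. Here's a little Python sketch (all numbers invented -- the real trims come from walking the house):

```python
# Rough sketch of picking starting trims for each speaker zone so the
# reinforcement tapers like the band's own acoustic sound. Inverse-square
# law: level falls about 6 dB per doubling of distance.

import math

def spl_drop_db(ref_ft, seat_ft):
    """Level drop at a seat relative to the reference distance."""
    return 20.0 * math.log10(seat_ft / ref_ft)

# Band's acoustic level referenced at 10 ft; falloff at deeper rows:
for row_ft in (30, 60, 90):
    print(f"{row_ft} ft: -{spl_drop_db(10, row_ft):.1f} dB")
# 30 ft: -9.5 dB, 60 ft: -15.6 dB, 90 ft: -19.1 dB -- the rear zones get
# trimmed to mimic this curve, then tweaked by ear.
```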
In the show following that, a 180 of sorts. It is going to be a loud show, rock oriented, with a semi-covered pit. I'm going to mic the orchestra and run it and the vocals hot, as a single mix -- as if it were a rock song (the vocals panned center, of course).* As much as possible I'll run the reinforcement hot enough that it becomes a flat field out to the back of the house. The only downside is that this removes all localization cues from the actors themselves except for people sitting in the very front rows. Anywhere else, the actors will be heard almost entirely artificially, over the sound system. Basically I'm going to treat the place as if it were a 6,000-seat house with no direct stage sound.
* Stereo is a tough concept in typical theater settings. Except for a narrow aisle down the middle, most of the audience will be seated closer to one speaker than the other. Many will be so far on one side of the proscenium or the other that the far speaker barely reaches them. So a hard-panned instrument or effect will be loud on one side of the audience, and unheard by the other. You can get away with this in effects, but it is murder on a mix. Imagine, if you will, half the audience hearing only the flutes and violins, the other half hearing only the brass and the cellos. Or one half hearing the right hand of the piano and the other half hearing the left.
Usually you reduce the stereo image; you avoid hard panning. If, on the other hand, you can score up a center cluster, then almost all of the audience is restored to hearing from two speakers...it is just that one will have the whole mix, the other will have only half of it. And, again, half your audience hears a different band than the other half.
The temptation arises to pan a singer as they walk across stage. Trouble is, you are reinforcing them. That is, you've already decided their voice isn't loud enough for the audience. So by choosing to pan, you are adding volume for the half of the audience that already had it, and taking volume away from the half that's already further from the singer. Not good.
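To see just how lopsided hard panning gets, here's a toy inverse-square calculation in Python (the seat distances are invented):

```python
# Toy calculation of the footnote's point: with a hard-panned source, a
# seat near one proscenium speaker hears it far louder than a seat near
# the other, by simple inverse-square distance.

import math

def relative_level_db(near_ft, far_ft):
    """How much louder the near speaker is than the far one, in dB."""
    return 20.0 * math.log10(far_ft / near_ft)

# A seat 8 ft from the left speaker and 45 ft from the right one:
print(round(relative_level_db(8, 45), 1))  # ~15 dB: a hard-right part nearly vanishes
```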
Thursday, December 8, 2011
Earworms
..Don't change a hair for me, not if you care for me... I'm getting burnt out on musicals. When I was designing lights and scenery more of my work was in straight plays (as well as trade shows and the like). But since I went mostly to the Sound side of the aisle ...there's a hole in the world like a great black pit... I've been working mostly musicals (with the odd live music show or graduation ceremony to break it up).
..Doors and windows, open and close... Over an average year I work -- as in actually sitting at the board mixing for each and every performance -- six to eight full musicals. ...and that's my new philosophy... The last one of those ...Easy Street; that's where we're going to be... was thirty-four performances. Add rehearsals and brush-up and dance calls ...these things you can not dispute, banana is the funniest fruit... and I hear each song through a good fifty times.
And this is mixing, mind you. ...high on the hill is a big old house with something dead inside it. Spirits walk the halls at night and make no effort now to hide it... So each and every song I'm listening intently, listening for any drop-out, trying to achieve a good vocal blend, ..fixing the roof, and raking the hay, is not my idea of a perfect day... and of course keeping track of where I am in the song so I can anticipate the chorus entrance and other fader moves. ...So I'm the king of the jungle... You can bet those songs get stuck in my head!
Add to this a good dozen shows I am consulting for, helping to load in, set up, ...you can do it, you can do it if you try... and at least a half-dozen music shows which inevitably have a few songs from musicals ...here's to the ladies who lunch... in them. ...my time of day...
I am right now ...a paradox, a paradox... meeting on four shows ...there's a long thin winding staircase without any banisters... and of course already listening to the music for those. ...never met a man I didn't like...
So not only is there plenty of opportunity ...notice me, Horton... to get songs from one show stuck in my head ...look at me, way up high... the fact that I'm somewhere in the process of multiple shows at once ...the Bronx is up and the Battery down... means the jukebox of my head gets rather crowded. ...everybody wants to be a cat...
I'm getting a little tired of it. ...the cow as white as milk the hair as yellow as silk... Pushing the faders on the board is a nice little combination ...where shining shine benignly drips... of constant tension and the cruel understanding that my best efforts will still not bring out the music the way I want it to be brought out. ...you got to know the territory... And I'm getting particularly bored ...those magic changes... with pulling transmitters out of sweaty mic bags and dressing them with fresh batteries and condoms (and bits of electrical tape and moleskin). ...how did you get to be you, Mr. Shepard...
Still ...Who's that hiding, in the tree top... I've got a pretty good gig now. It's not resident designer, ...they don't turn their head as they see me walk by... but it is as close to a steady gig as you'll ever get as a freelancer. So it's off to learn yet another set of songs ...what a glorious feeling, I'm happy again... until they, too, get stuck in my head.
Did I mention I work some of these shows more than once? I just closed my third "Annie," and not long before that was my fifth or sixth "Wizard of Oz." I also did "Producers" twice in one year, I've got my second or third "Grease" coming up, and I've also done my second "Seussical" and my second "On the Town."
On the other hand, I was also involved in "Merrily We Roll Along," "Into the Woods," "Pippin," and a couple of other surprisingly big standards for the first time. So there is always something new as well as something familiar, something peculiar, something......oh, darn it! And there I go again....
Friday, December 2, 2011
Sound Design Signpost
Links to past posts on Sound Design:
Wireless Microphones:
How to Make a Microphone Bag
Microphone Positions Reviewed: Where and How to Place Wireless Mics on Actors
Mic Station and Medical Dispensary: Tapes, Markers, and Other Helpful Supplies
The Kids Speak Out: Tricks for Wireless Mics in Children's Theater
I Hate Wireless Mics: Why Wireless Microphones are not a Panacea
A Few Simple Rules for Wireless Microphones
The Basics of Mic'ing a Cast: The Breakdown
The Basics of Mic'ing a Cast: Frequencies and RF Path
The Basics of Mic'ing a Cast: Putting the Mic on the Actor
The Basics of Mic'ing a Cast: Mixing (EQ and Compression)
The Two Nations of Sound Design: a Wireless Mic Reinforcement Philosophy