So I'm going to walk through some of the steps I go through in making a sound effect for theater.
The vast majority of effects, I find, are not a single sample. Take something very simple: for my first production of Seussical the Musical we needed the sound effect of a pull-chain light switch. I had one, but the actual recorded sound was too abrupt. So I went into magnified view, found one of the "clicks" of the chain passing through the switch, and copied it several times, adjusting the position and volume of the copies to make a crescendo, and then pasted the switch click itself back in at the end.
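You can rough out the same kind of assembly outside the editor, too. Here is a minimal sketch of that pull-chain build in Python, assuming you have already isolated the chain click and the switch click as mono files (the file names, click count, and timings are hypothetical):

```python
import numpy as np
import soundfile as sf  # pip install numpy soundfile

click, sr = sf.read("chain_click.wav")    # one isolated chain "click" (mono)
switch, _ = sf.read("switch_click.wav")   # the actual switch actuation (mono)

n_clicks = 5
spacing = int(0.12 * sr)                  # ~120 ms between chain clicks
out = np.zeros(n_clicks * spacing + len(click) + len(switch))

for i in range(n_clicks):
    gain = (i + 1) / n_clicks             # crescendo: each copy a little louder
    start = i * spacing
    out[start:start + len(click)] += gain * click

# paste the switch click itself back in at the end of the chain run
end = n_clicks * spacing
out[end:end + len(switch)] += switch

sf.write("pull_chain_switch.wav", out, sr)
```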
Take another example. Almost no rain, and no water, effect will quite sound right by itself. You usually want to layer a couple to give the sound more depth, movement, and variety.
Take a third example. For something as simple as a school bell for Grease, I had a decent bell sound, but I wanted something dirtier-sounding, older and more primitive. So I layered it with a fire alarm bell.
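If you wanted to rough out that kind of layering in a script rather than the sequencer, it comes down to summing the takes with relative gains chosen by ear. A small sketch, assuming two mono files (the names and the 0.4 gain are just placeholders):

```python
import numpy as np
import soundfile as sf

bell, sr = sf.read("school_bell.wav")      # the "clean" bell
alarm, _ = sf.read("fire_alarm_bell.wav")  # the dirtier layer

length = max(len(bell), len(alarm))
mix = np.zeros(length)
mix[:len(bell)] += 1.0 * bell              # main layer at full level
mix[:len(alarm)] += 0.4 * alarm            # second layer tucked underneath

mix /= max(1.0, np.max(np.abs(mix)))       # keep the sum from clipping
sf.write("school_bell_layered.wav", mix, sr)
```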
And then there are the more interesting effects. But in my mind, almost every effect is telling a story. It may be a complex story with several characters, or it may be a very simple story with no real plot. But it is almost always a sequence of events that is taking place.
Even if the sequence is as simple as the links of chain in a pull-chain light going through the hole until the switch itself clicks.
First Step: Pull sounds from your libraries, sort them into folders, and purchase or record what you need to fill the gaps.
I have most of the BBC sound effects CDs, and a scattering of other ones. Plus sounds I developed for other shows, and a few things traded and bought and so forth. My libraries are not terribly well organized -- a flaw offset by the sad fact that I have a small set of favorites I end up using over and over anyhow.
Second Step: Rough-trim and normalize. This is especially necessary for stuff you record yourself; find the good takes and chop them and save out new individual files.
The example here is the Audacity window from a multi-track music recording session I did recently.
Since we're being basic here, use the waveform view to trim, but give yourself a margin -- with a longer margin at the tail (because that is where the natural reverberation lives). Normalization, which is under the "Amplify" heading in Audacity's "Effect" menu, applies an overall amplification to the selection until its highest peak sits at some preset level -- full scale by default, but I usually set it 1-2 dB below maximum to prevent clipping if I add a bunch of equalization later.
The advantages of normalization are that all your tracks will have a similar default volume, so you can make a rough mix by eye as well as by ear, and that the waveform of a normalized track fills the window, making it easier to select edit points by eye. The downside, besides the potential for clipping mentioned earlier, is that you bring up the noise floor exactly as much as you bring up the gain. If you find yourself adding 20 dB or more to normalize your tracks, you need to do better work setting your record gain.
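For the curious, the arithmetic behind normalizing is just a single gain figure computed from the peak. A self-contained sketch (the -1 dBFS target and the synthesized "quiet take" are only for illustration):

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
# stand-in for a quiet take: a low-level tone plus a little recorded hiss
y = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.001 * np.random.randn(sr)

target_dbfs = -1.0                       # a bit under full scale, leaving headroom for EQ
peak = np.max(np.abs(y))
gain = 10 ** (target_dbfs / 20) / peak   # linear gain that puts the peak at the target
y_norm = y * gain

print(f"applied {20 * np.log10(gain):.1f} dB of gain")
# note that the hiss comes up by exactly the same amount as the tone
```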
Third Step: So I said most effects will have multiple layers. For many effects, one of these layers should be a guide track -- I use recordings from rehearsals for these.
Context is everything. As much as possible I like to have the total picture in the editor -- if there is music in the final setting of the cue, if there is dialog, if another cue is playing, put that sound in the window too. Set it up so you can hear how your sound will seat in the final mix. It is a simple matter to mute these reference tracks just before the final render.
One of the simplest and most useful tools in a DAW or sequencer is being able to draw volume tracks. You can also record fader moves -- and edit them graphically later if you need to. Instead of going through the effort of figuring out exactly how much of a sound you need beforehand, import a longer sound file and set fade points by drawing a fader move.
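Under the hood, a drawn volume track is just a breakpoint envelope multiplied against the audio. A rough sketch of the idea, with synthesized audio and made-up breakpoints standing in for a real file and real fader moves:

```python
import numpy as np

sr = 44100
dur = 5.0
y = 0.3 * np.random.randn(int(sr * dur))     # stand-in for a longer rain loop

points_t = [0.0, 0.5, 3.5, 4.5, 5.0]         # "drawn" breakpoint times in seconds
points_g = [0.0, 1.0, 1.0, 0.0, 0.0]         # fade in, hold, fade out before the end

t = np.arange(len(y)) / sr
envelope = np.interp(t, points_t, points_g)  # per-sample volume curve
y_faded = y * envelope
```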
For all those moments when you have to get the timing perfect, here is a little trick: set your sequencer to endless loop and set the loop points to either side of the spot where you need to adjust the timing. Then just keep playing as you adjust.
Fourth Step: Here's where a little mixology comes in. Many of your raw sounds will not be quite right, or not fit smoothly into the mix -- especially if you are asking them to do something unusual. So out with the audio editing tools. These are destructive but conservative; the sequencer creates a copy of the original sound file for each new edit.
The most useful tools in the audio file editing domain are time and pitch manipulation (followed by noise reduction). The modern tools allow you to shift pitch or tempo independently, although a stretch of more than about 125% in either direction will start to develop distracting artifacts.
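These days you don't even need a dedicated editor to try this out; it's also available in scripting libraries. A minimal sketch using librosa's phase-vocoder helpers (the file name is hypothetical, and the amounts are just examples):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("car_pass.wav", sr=None)   # keep the original sample rate

# a half-step down, duration unchanged -- the subtle "seat it in the mix" move
y_pitched = librosa.effects.pitch_shift(y, sr=sr, n_steps=-1)

# play back in 80% of the time, pitch unchanged -- e.g. tightening a slow car arrival
y_shorter = librosa.effects.time_stretch(y, rate=1.25)

sf.write("car_pass_shorter.wav", y_shorter, sr)
```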
The use of pitch change as a tool ranges from gross to subtle. On the gross side, you re-pitch to make a small thing sound large, such as turning a chirping bird into the cry of a giant roc -- or vice versa. On the subtle side, sometimes a shift of a mere half-step will help a sound seat better with the sounds around it.
Our ears are exquisitely tuned to cues about the size of the vocal tract. If you re-pitch a vocal up, it will give the impression of being from not just a higher-pitched creature but one with a smaller head. You can't turn an alto into a soprano with a simple pitch change -- you can only change her into a child.
The better pitch-shift tools offer independent control of the vocal formants, meaning you can change the perceived age (really, the apparent size of the head) and the pitch independently.
For time bending, I frequently shorten airplane and automobile passes. Plays and musicals simply move at too fast a tempo to allow a full twenty seconds for a car to arrive; you need to shorten the time from "just heard" to "motor shut down" however you can: chopping off the start and adding a fade, doing a tempo manipulation on the audio track, and so on.
Tempo changes also change the feel. I recently did a tempo stretch on a bit of dialog to give the delivery a more deliberate, measured feel.
Oh -- the image here is a fake Doppler effect done with pitch change. For those few who don't know already, when a sound-making object is approaching, the sound waves are effectively compressed (each new wave arrives sooner than it would if the object weren't moving). The result is that the perceived pitch of the approaching object goes up (and the perceived pitch of a retreating object goes down).
The trick to remember is that the greatest change in perceived pitch occurs as the object passes. So you want a curve that approaches vertical as it crosses the zero line, and flattens out on the far ends.
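One easy way to get exactly that shape is an arctangent curve: steepest right at the pass, flattening toward either end. A small sketch that generates such a curve as a pitch-bend envelope you could feed in as automation (the 2-semitone swing, 6-second pass, and steepness value are just assumptions):

```python
import numpy as np

dur = 6.0                  # seconds, with the "pass" at the midpoint
res = 100                  # envelope points per second
t = np.linspace(-dur / 2, dur / 2, int(dur * res))

swing = 2.0                # semitones above/below the at-rest pitch
steepness = 3.0            # larger = more abrupt change at the pass
pitch_semitones = -swing * (2 / np.pi) * np.arctan(steepness * t)
# starts near +2 semitones (approaching), sweeps through 0 at the pass,
# and settles near -2 semitones (receding)
```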
Fifth Step: So this is the sequence: raw sounds and clean-up, assembly, adjustment of audio, and finally the shaping tools for each sound and the total mix -- equalization, stereo width, distortion, reverb.
Almost all the contouring tools I use either came bundled with the sequencer or are freeware (a few are shareware). Many DSP companies give away promotional versions with more limited functions -- haunt the audio forums for alerts.
My favorite tools at this moment are: the Fish Fillets from Digital Fish Phones -- a very nice little compressor, expander, and de-esser with an aggressively "warm" sound. On a recent "radio voice" treatment, most of the distortion came from turning the "saturation" knob on Blockfish all the way up.
Eric's Free EQ (no link at the moment, sorry). Also warm and aggressive and very no-frills; three big knobs in sonically useful places.
Vinyl from iZotope -- designed to add a little fake record-player action to a pop track, but at higher levels can help you create a warped scratched old 78.
And of course the amazing open-source mda plug-ins from Smartelectronix; two dozen different useful VST effects with no fancy eye candy, just a full set of parameter dials.
Note that here in the VST plug-in world, all of these plug-ins' parameters can be automated the same way as a lowly volume or pan. The ability to change reverb depth in the middle of a track can be a real help for nailing down just the right sound.
And of course, don't forget channel EQ!
Use effects on a per-track basis, as send effects, as effects on group buses, or as effects on the final master bus -- any of these, or a combination of all of the above, can work. Often just a little bit of reverb is all you need to seat the effect.
It is getting late here, so I'm not going to go on about world-ization (using reverb and EQ and distortion to suggest the environment of the sound), but I will add one important note:
Listen to the sound on the speakers it will be played back on during the performance. If you can, bring your sequences to the theater and do the final tweaking of the mix there. If not, perhaps you can take the final rendered tracks and do a little EQ on them while listening to the results live in the space. There are things you can do in playback, and just changing the ratio of speaker assignments or the position of the speakers can do wonders, but being able to mix in the room is almost beyond price.