Wednesday, January 18, 2012

Noise

Read any good basic description of musical sound and it will take you through frequency and amplitude, the idealized sine wave, the harmonic series, and the way harmonics can be combined into a complex waveform. Through, in short, the qualities of pitch and timbre that allow us to recognize the different varieties of musical instruments.
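(A minimal sketch of that textbook picture, in Python with NumPy; the fundamental and the relative harmonic amplitudes below are arbitrary choices made up for the example, and a different mix gives a different timbre at the same pitch.)

```python
import numpy as np

SAMPLE_RATE = 44100          # samples per second
DURATION = 1.0               # seconds
FUNDAMENTAL = 220.0          # Hz -- an arbitrary choice for the example

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Relative amplitudes for harmonics 1..6; these numbers are invented --
# change the mix and you change the timbre, not the pitch.
harmonic_amplitudes = [1.0, 0.5, 0.33, 0.25, 0.2, 0.16]

wave = np.zeros_like(t)
for n, amp in enumerate(harmonic_amplitudes, start=1):
    wave += amp * np.sin(2 * np.pi * FUNDAMENTAL * n * t)

wave /= np.max(np.abs(wave))   # normalize so the sum doesn't clip
```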

And, yes, there is more to say here. There are the so-called formants: resonances of the vocal cavity that emphasize specific harmonics and so describe its size and shape. Within a single voice, the mix of these emphasized harmonics tells you which vowel you are dealing with, whether the mouth and throat are open or pinched; the difference between a sigh and a scream. Across a range of voices, the shift in where these formants sit tells you the size of the vocal tract, and our ears are so exquisitely tuned that we can tell the difference between a deep-voiced child and a male countertenor.
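(A rough sketch of the formant idea: take a harmonically rich "glottal" buzz and band-pass it at a couple of formant frequencies. The center frequencies here are ballpark textbook values for an "ah"-like vowel, not measurements, and this simple filter design is just one way to do it.)

```python
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

FS = 44100
t = np.arange(FS) / FS                        # one second
source = sawtooth(2 * np.pi * 110 * t)        # harmonically rich "glottal" buzz at 110 Hz

def formant_band(signal, center, bandwidth, fs=FS):
    """Band-pass one formant region out of the source."""
    lo, hi = center - bandwidth / 2, center + bandwidth / 2
    b, a = butter(2, [lo, hi], btype='bandpass', fs=fs)
    return lfilter(b, a, signal)

# Rough formant centers for an "ah"-like vowel: ~700 Hz and ~1200 Hz (ballpark values).
vowel = formant_band(source, 700, 200) + formant_band(source, 1200, 200)
vowel /= np.max(np.abs(vowel))
```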

And most of these introductory discussions will stop there, with just a little hand-wave about aperiodic waveforms, non-harmonic overtones, and the entire science of how a tone changes over the lifetime of a musical note.

When you get into synthesizer programming and sample manipulation, you learn that the envelope of a sound is as important as the timbre in characterizing the instrument. And you also learn how most musical notes start their lives in a chaos of barely-filtered noise, before the violently agitated string or the turbulent airflow in a flute settles into the actual note at hand.

You learn about the ADSR envelope (Attack, Decay, Sustain, Release) and the judicious use of white noise and subtle changes in the pitch center to approximate the evolving waveform.
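(Something like this, as a minimal sketch: an ADSR amplitude envelope on a plain sine, with a short burst of white noise mixed into the attack to stand in for that chaotic onset. The segment times, the noise level, and the 20 ms noise decay are all guesses you would tune by ear.)

```python
import numpy as np

FS = 44100

def adsr(attack, decay, sustain_level, sustain_time, release, fs=FS):
    """Piecewise-linear ADSR envelope; times in seconds, level 0..1."""
    a = np.linspace(0, 1, int(attack * fs), endpoint=False)
    d = np.linspace(1, sustain_level, int(decay * fs), endpoint=False)
    s = np.full(int(sustain_time * fs), sustain_level)
    r = np.linspace(sustain_level, 0, int(release * fs))
    return np.concatenate([a, d, s, r])

env = adsr(attack=0.01, decay=0.1, sustain_level=0.6, sustain_time=0.5, release=0.3)
t = np.arange(len(env)) / FS
tone = np.sin(2 * np.pi * 440 * t)

# A burst of white noise over the attack, dying away in roughly 20 ms --
# the proportions are placeholders, set by ear in practice.
noise = np.random.uniform(-1, 1, len(env))
noise_env = np.exp(-t / 0.02)
note = env * (tone + 0.3 * noise * noise_env)
note /= np.max(np.abs(note))
```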

And if we go further towards trying to create a convincing picture of a real musical instrument through electronic means, we begin thinking about non-musical noise.

I learned this quite early: a good acoustic guitar patch becomes that much more realistic and believable to the listener if you add a little string squeak here and there.




Anyhow. Shift focus to real instruments, and the reinforcement/recording chain. All the way through the chain, the primary intent is to limit noise; to reproduce just the musical note, without picking up street noises, without introducing odd-order harmonics, without allowing tape hiss or ground hum to rise above the detectable threshold.

Yet, paradoxically, once we've established that clean signal chain we discover that to get a truly great recording we need to include the noise.

On the instrument side, even the apparently simple piano has a whole-instrument sound that is more than the individual notes. The open strings vibrate in sympathy with the struck strings. The sound board contributes its own harmonics. The total sound of the piano is not just a sequence of piano notes, but the sound of a massive iron frame in a heavy wooden box in an acoustic space.

Even further in this direction, the "sound" of an acoustic guitar is not just the notes being fretted, but must also include the many intentional nuances of technique: the hammer-on and pull-off, the slide, the bend. And what makes it truly sing is the flesh: fingers sliding along strings, tapping the frets, pressing against the soundboard.

Or the saxophone, or the human voice; what makes the sound is not just the notes, not just the nuances demanded by the player, but the biological noise of lip and spit.

To get a really good acoustic performance you have to capture some of that sense of real people in a real space. Even if you are creating the echo and reverb of a complex-shaped room with different surfaces and angles electronically, and adding in audience noise from an effects tape.

Of course this varies across styles. Symphonic recordings preserve just a trace of page turns and squeaking chairs. '80s pop preserves practically nothing, as electronically generated tones go through electronic processing and finally arrive at the recording untouched by human hands or any kind of real environment. And small-combo jazz, or folk, is all about hearing ALL the sounds the instruments make (not just the ones indicated on the score).




Then of course we have the aspect of desired noise. Of microphones that intentionally have a non-flat response because those little peaks and dips better suit the material (such as the venerable Shure SM58, with that pronounced but sweet +5 dB peak around 5 kHz).

Of amplifier and compressor stages that introduce added harmonics into the sound, from the predominantly even-order "warmth" of an over-driven tube, to the harsher odd-order edge of a FET stage, to the all-out crunch of hard-driven tubes and clipped signal diodes.
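(The mechanism is simple enough to sketch: run a clean tone through a nonlinear transfer curve and you get added harmonics. A tanh curve rounds the peaks for the gentler, tube-ish flavor; hard clipping flat-tops the wave and pushes energy into higher, harsher harmonics. The drive and threshold values below are arbitrary.)

```python
import numpy as np

FS = 44100
t = np.arange(FS) / FS
clean = np.sin(2 * np.pi * 220 * t)

def soft_clip(x, drive=3.0):
    """Smooth saturation: rounds the wave peaks, adding mostly low-order harmonics."""
    return np.tanh(drive * x) / np.tanh(drive)

def hard_clip(x, threshold=0.4):
    """Abrupt clipping: flat-tops the wave, pushing energy into higher harmonics."""
    return np.clip(x, -threshold, threshold) / threshold

warm = soft_clip(clean)    # the gentler, tube-flavored version
crunch = hard_clip(clean)  # the harsher, diode-flavored version
```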

Even reverb can be looked at as a kind of noise. But it is hardly the only "noise" tool in the box; there is everything from phasing to the Leslie speaker effect to the crunchingly distorted bit-crushers.
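(Bit-crushing is the easiest of the lot to fake: quantize the amplitude to a handful of levels and hold each value for several samples. The bit depth and downsample factor below are arbitrary starting points, not anyone's canonical settings.)

```python
import numpy as np

def bitcrush(signal, bits=4, downsample=8):
    """Reduce amplitude resolution and effective sample rate of a float signal in -1..1."""
    levels = 2 ** bits
    crushed = np.round(signal * (levels / 2)) / (levels / 2)      # coarse quantization
    crushed = np.repeat(crushed[::downsample], downsample)[:len(signal)]  # sample-and-hold
    return crushed
```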

I created almost forty minutes of music for a production of Agamemnon and practically every track included various kinds of distortion and noise, from a little tube simulation to all-out full tracks of radio static.

Here, "noise" is a design tool.





Which brings us at last to sound design. I realized ruefully during one production that practically all of my cues were varieties of white noise; wind sounds, ocean surf sounds, and the like.

But this is a general truth. More and more, I realize that the defining characteristic of most sound effects is noise. There's worldization (altering a sound to make it appear organic to the imagined environment), there are layers of indistinct sound that keep the semiotic content of the cue from being too starkly bare, and there's sweetening to add attack or low-frequency content.

Oh, I suppose I should explain those more. In re layers, I long ago decided the best way to loop a sound is not to loop "a" sound. Instead, loop two or more. Real rain travels in cells lasting under forty minutes; in our experience it comes and goes, it spatters and retreats. So a good sound cue playing as the background for a ten-minute scene should also ebb and flow, changing its character.

Too, rain or running water isn't a single object forty feet wide. Across the width of a stage, you would expect to hear a little more water-on-leaves here, a little more water-in-the-gully there. So it makes sense to have more than one sound, to place them around the stereo picture (or around however many speakers you have available for the effect!), and to have them evolve over time.
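(A minimal sketch of that layering idea, assuming you already have two mono loops as arrays; the noise arrays below are just stand-ins for real recordings. Each layer gets its own slow, out-of-phase level swell and its own spot in the stereo picture, so the composite never repeats the same way twice.)

```python
import numpy as np

FS = 44100

def slow_swell(n_samples, period_seconds, phase=0.0, depth=0.4):
    """Slow sine-shaped gain curve that drifts between (1 - depth) and 1."""
    t = np.arange(n_samples) / FS
    return 1.0 - depth * 0.5 * (1 + np.sin(2 * np.pi * t / period_seconds + phase))

def pan(mono, position):
    """Constant-power pan: position -1 (left) .. +1 (right); returns an (n, 2) stereo array."""
    angle = (position + 1) * np.pi / 4
    return np.stack([mono * np.cos(angle), mono * np.sin(angle)], axis=1)

# Placeholder "loops" -- in practice these would be real water recordings of equal length.
n = FS * 30
rain_on_leaves = np.random.uniform(-1, 1, n) * 0.3
rain_in_gully = np.random.uniform(-1, 1, n) * 0.3

# Different swell periods and opposite phases keep the two layers from breathing together.
bed = (pan(rain_on_leaves * slow_swell(n, 45.0), -0.6) +
       pan(rain_in_gully * slow_swell(n, 70.0, phase=np.pi), +0.5))
```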

In re worldization, of which I've spoken before, the most basic trick is placing the speaker correctly. Use natural acoustics when you can. And don't ignore the potential of bouncing sound; a speaker pointed away from the audience towards a hard-covered flat will create a sound that appears to emanate from that wall.

In electronic processing, the most basic trick after reverb is to roll off the high frequencies as distance increases. A simple bit of pitch bending (or a pitch envelope) will do wonders towards establishing movement via the Doppler effect. And don't ignore the pre-delay function on your reverb patches; more than anything else, it sets the psycho-acoustic impression of the size of the space.
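(Two of those tricks are easy to sketch for a mono signal array: a one-pole low-pass whose cutoff drops as the source gets "farther away", and a pre-delay done as nothing fancier than silence inserted ahead of the reverb return. The cutoff and delay values are the kind of thing you set by ear.)

```python
import numpy as np

FS = 44100

def distance_lowpass(signal, cutoff_hz):
    """One-pole low-pass: a lower cutoff dulls the highs and reads as greater distance."""
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff_hz / FS)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def with_predelay(wet, predelay_ms):
    """Push the reverb return back in time; a longer pre-delay reads as a bigger room."""
    pad = np.zeros(int(FS * predelay_ms / 1000.0))
    return np.concatenate([pad, wet])
```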

The old technique is still a good one; if you have an effect that is supposed to sound like it is in a bathroom, a closet, a car...find a bathroom, a closet, or a car. Set up a good-quality microphone, and a playback speaker. This method is even more useful for getting a nice "over the radio" effect to sound right. But I've even used a variation of it by recording on to cassette tape then recording that back to hard disk.

Sweetening goes all the way back to the concert hall. Remember how the characteristic sound of an instrument is as much about the attack and ADSR envelope as it is about the timbre? Well, symphonic arrangers started quite early on with tricks like doubling pizzicato strings with woodwinds; the result was a strange hybrid instrument, a woodwind with a sharp, brittle attack.

On a recent production, I had to create the sound of a whale sneezing. I used myself as voice actor once again, with suitable processing. But to get the impact of the sneeze I added in a snare drum hit and the bite of a T-rex from a purchased sound effect. I lined up these events in the zoomed-in waveform view of my venerable Cubase, and played with the volumes until the individual nature of the additional sounds vanished and all that was left was the added impact.
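(The blending step itself is nothing exotic: time-aligned layers summed at individual gains, pulled down until no single layer reads as a separate sound. The layer names and gain values below are placeholders, not the actual levels from that cue.)

```python
import numpy as np

def blend_layers(layers, gains):
    """Sum time-aligned mono layers at individual gains, padding shorter ones to the longest."""
    length = max(len(layer) for layer in layers)
    mix = np.zeros(length)
    for layer, gain in zip(layers, gains):
        mix[:len(layer)] += gain * layer
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 0 else mix

# voice_sneeze, snare_hit, trex_bite would be arrays already trimmed so their
# transients line up; the gains are the kind of thing you nudge by ear.
# impact = blend_layers([voice_sneeze, snare_hit, trex_bite], [1.0, 0.25, 0.2])
```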

In Hollywood, a simple smack to the jaw from Bruce Willis may have a dozen different sounds in it, from purely synthesized white noise to a meat cleaver going through a raw chunk of beefsteak.



Anyhow, the insight today is that most of the sounds used in a sound design are more noise than sound. They are aperiodic and inharmonic. They rarely have defined pitch centers, and even when they do, that pitch is not necessarily in tune with any common temperament. And the most useful parts of these sounds are the ones least like musical notes: the crackles, the impacts, the hisses...all the bits of messy noise.

It is easier to synthesize a decent clarinet sound than it is to synthesize a lion's roar. This is why we still reach for massive libraries of sound effects instead of trying to create them from scratch electronically. And even when a sound is manipulated and assembled and used out of context, the reason we picked that original sound is that it is edgy, unpredictable, organic. It has hair on it. It has lip noise and spit. It has noise.
