

Thursday, September 4, 2008

Prepping A Vocal For The Mix - EQ Magazine

October 2007

As far as I’m concerned, the vocal is the most important part of a song. It’s the conversation that forms a bond between performer and listener, and the focus to which other instruments give support.

And that’s why you must handle vocals with kid gloves. Too much pitch correction removes the humanity from a vocal, and getting overly aggressive with composite recording (the art of piecing together a cohesive part from multiple takes, and the subject of a future Vocal Cords) can destroy the continuity that tells a good story. Even too much reverb or EQ can mean more than bad sonic decisions, as these can affect the vocal’s emotional dynamics. But you also want to apply enough processing to make sure you have the finest, cleanest vocal foundation possible — without degrading what makes a vocal really work. And that’s why we’re here.

Vocals are inherently noisy. You have mic preamps, low-level signals, and significant amounts of amplification. Furthermore, you want the vocalist to feel comfortable, and that can lead to problems, as well. For example, I prefer not to sing into a mic on a stand unless I’m playing guitar at the same time. I want to hold the mic, which means mic-handling noise is a possibility. Pop filters are also an issue — as some engineers don’t like to use them — but they may be necessary to cut out low-frequency plosives. In general, I think you’re better off placing fewer restrictions on the vocalist, and having to fix things in the mix, rather than having the vocalist think too hard about, say, mic handling. A great vocal performance with a small pop or tick trumps a boring, but perfect, vocal.

Okay, now let’s prep that vocal for the mix.

REMOVE HISS

The first thing I do with a vocal is turn it into one long track that lasts from the start of the song to the end, then bounce it to disk so I can bring it into a digital audio editing program. Despite the sophistication of host software, we’re not quite at the point where a multitrack host can always replace a solid digital audio editor, with a few exceptions (Adobe Audition and Samplitude come to mind).

Once the track is in the editor, the first stop is generally noise reduction. Sound Forge, Adobe Audition, and WaveLab have excellent built-in noise reduction algorithms, but you can also use stand-alone programs such as Diamond Cut 6. Choose a noise reduction algorithm that takes a “noiseprint” of the noise, and then subtracts it from the signal. Using this simply involves finding a portion of the vocal that consists only of hiss, saving that as a reference sample, then instructing the program to subtract anything with the sample’s characteristics from the vocal (Figure 1).
There are two cautions, though. First, make sure you sample the hiss only. You’ll need only a hundred milliseconds or so. Second, don’t apply too much noise reduction. About 6dB to 10dB should be enough — for reasons that will become especially obvious in the next section. Otherwise, you may remove parts of the vocal itself, or add artifacts, both of which contribute to artificiality. Removing hiss makes for a much more open vocal sound, and also prevents “clouding” the other instruments.
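If you’re curious what’s happening under the hood, noiseprint-style reduction is essentially spectral subtraction. Below is a minimal Python sketch of the idea (assuming numpy and scipy, a mono float vocal, and a hiss-only excerpt you’ve already isolated); it illustrates the concept only, not how Sound Forge or Audition actually implement it.

    import numpy as np
    from scipy.signal import stft, istft

    def reduce_hiss(vocal, hiss, sr=44100, reduction_db=8.0):
        # Build the "noiseprint": the average magnitude spectrum of the hiss.
        _, _, H = stft(hiss, fs=sr, nperseg=2048)
        noiseprint = np.abs(H).mean(axis=1, keepdims=True)

        # Subtract a scaled noiseprint from every frame of the vocal. The
        # scale factor loosely corresponds to the depth of reduction.
        _, _, V = stft(vocal, fs=sr, nperseg=2048)
        amount = 10 ** (reduction_db / 20)
        mag = np.maximum(np.abs(V) - amount * noiseprint, 0.0)

        # Keep the original phase and resynthesize the cleaned signal.
        _, clean = istft(mag * np.exp(1j * np.angle(V)), fs=sr, nperseg=2048)
        return clean[:len(vocal)]

Push reduction_db much past the 6dB to 10dB range and you’ll hear exactly the artifacts described above.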

DELETE SILENCES

Now that we’ve reduced the overall hiss level, it’s time to delete all the silent sections between vocal passages. If you do this, the voice will mask hiss when it’s present, and when there’s no voice, there will be no hiss at all (also see the Power App Alley in this issue on Sonar 6, which describes how to reclaim disk space when removing silence).

With all programs, you start by defining the region you want to remove. From there, different programs handle creating silence differently. Some will have a “silence” command that reduces the level of the selected region to zero. Others will require you to alter level, like reducing the volume by “-Infinity” (Figure 2). Furthermore, the program may introduce a crossfade between the processed and unprocessed section, thus creating a less abrupt transition. If it doesn’t, you’ll probably need to add a fade-in from the silent section to the next section, and a fade-out when going from the vocal into a silent section.
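As a concrete illustration, here’s a short Python sketch of the silence-plus-fades idea, assuming a mono numpy array and sample positions for the region (the names are mine, not from any particular editor):

    import numpy as np

    def silence_region(audio, start, end, fade_samples=441):
        # ~10ms ramps at 44.1kHz stand in for the crossfade some editors add.
        out = audio.copy()
        ramp = np.linspace(1.0, 0.0, fade_samples)
        out[start:start + fade_samples] *= ramp        # fade out into silence
        out[end - fade_samples:end] *= ramp[::-1]      # fade back in afterward
        out[start + fade_samples:end - fade_samples] = 0.0
        return out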

REDUCE BREATHS AND ARTIFACTS

I feel that breath inhales are a natural part of the vocal process, and it’s a mistake to use hard disk recording to get rid of these entirely. For example, an obvious inhale cues the listener that the subsequent vocal section is going to “take some work.”

That said, applying any compression later on will bring up the levels of any vocal artifacts, possibly to the point of being objectionable. I use one of two processes to reduce the level of artifacts.

The first option is to simply define the region with the artifact, and reduce the gain by 3dB to 6dB (Figure 3). This will be enough to retain the essential character of an artifact, but make it less obvious compared to the vocal.

The second option is to again define the region, but this time, apply a fade-in (Figure 4). This also may provide the benefit of fading up from silence if silence precedes the artifact.
Mouth noises can be problematic, as these are sometimes short, “clicky” transients. In this case, you can sometimes cut just the transient, and paste some of the adjoining signal on top of it (choose an option that mixes the signal with the area you removed; overwriting might produce a discontinuity at the start or end of the pasted region).
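Both treatments reduce to a couple of lines of DSP. Here’s a rough Python sketch of the gain-cut option (Figure 3) and the fade-in option (Figure 4), again assuming a mono numpy array and hand-picked sample positions:

    import numpy as np

    def duck_artifact(audio, start, end, cut_db=4.5):
        # A 3dB to 6dB cut keeps the artifact's character but tucks it down.
        out = audio.copy()
        out[start:end] *= 10 ** (-cut_db / 20)
        return out

    def fade_in_artifact(audio, start, end):
        # Ramp the region up from zero; doubles as a fade-up from silence.
        out = audio.copy()
        out[start:end] *= np.linspace(0.0, 1.0, end - start)
        return out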

PHRASE-BY-PHRASE NORMALIZATION

A lot of people rely on compression to even out a vocal’s peaks. That certainly has its place, but there’s something you need to do first: Phrase-by-phrase normalization. Unless you have the mic technique of a k.d. lang, the odds are excellent that some phrases will be softer than others. If you apply compression, the lower-level passages might not be affected very much, whereas the high-level ones will sound squashed. It’s better to get the entire vocal to a consistent level first, before applying any compression. This will retain more overall dynamics. If you need to add an element of expressiveness later on (e.g., the song gets softer in a particular place, so you need to make the vocal softer), you can do this with judicious use of automation.

Referring to Figure 5, the upper waveform is the unprocessed vocal, and the lower waveform shows the results of phrase-by-phrase normalization. Note how the level is far more consistent in the lower waveform.

However, be very careful to normalize entire phrases. You don’t want to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be certain internal dynamics, and you definitely want to retain them.
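In code terms, the process looks something like the following Python sketch: one gain change per phrase, with phrase boundaries marked by hand (which is the point: whole phrases, never individual words):

    import numpy as np

    def normalize_phrases(audio, phrases, target_peak=0.9):
        # phrases: list of (start, end) sample positions you marked yourself.
        out = audio.copy()
        for start, end in phrases:
            peak = np.abs(out[start:end]).max()
            if peak > 0.0:
                # A single gain per phrase preserves its internal dynamics.
                out[start:end] *= target_peak / peak
        return out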

ARE WE PREPPED YET?

DSP is a beautiful thing. Now our vocal is cleaner, of a more consistent level, and it has any annoying artifacts tamed — all without reducing any natural qualities the vocal may have. At this point, you can start doing more elaborate processes, such as pitch correction (but please, apply it sparingly and rarely!), EQ, dynamics control, and reverb. But, as you add these, you’ll be doing so on a much firmer foundation.

Sunday, August 3, 2008

New SoundClick Widget: Live Ozarks Bluegrass




I am the live audio engineer for a bluegrass band that regularly plays at the historic Star Theater in Willow Springs, Missouri. I occasionally bring my recording gear and simply capture the outgoing audio signal from the soundboard (before it gets further processed by the EQ). If I enjoy the show, or a few songs from it, I’ll edit the songs at home and burn a few CDs for the band members. I might even post the songs on SoundClick and blog about it as well.

I put the SoundClick widget for those recordings up on this website today. I do not play on any of these recordings. I simply run the live sound and capture the outgoing audio from the soundboard. No extra signal processing or EQing at the capture stage... these are raw sound files.

Before posting, I do some slight audio editing in Audacity and perhaps a very slight amount of EQ work. That is all I do to these files. Then I upload them to SoundClick and I'm done.

Saturday, April 19, 2008

EQ Magazine: Managing Multisamples With SFZ

Key Issues: Managing Multisamples With SFZ

April 2008

The following may seem techy, and, frankly, that techy aspect inhibited me from checking out the SFZ file format. But once I finally wrapped my head around the concept, I was glad I did.
The SFZ file format—a license-free spec, even for commercial purposes—was created by synth designer Rene Ceballos, and it defines how multisamples should be handled within an SFZ-compatible instrument. The format is compatible with several Cakewalk instruments, including Dimension, Session Drummer 2, Rapture, and DropZone. But it’s also compatible with the free, VST-compatible SFZ Player that works in any VSTi-compatible host (download the Player at www.project5.com/products/instruments/sfzplayer/default.asp), so the format’s usefulness extends far beyond Cakewalk instruments.

For example, suppose you work with Samplitude, you’re collaborating with a friend who uses Cubase, and you have a bunch of “found sound” samples you want to use as rhythmic elements. If you create an SFZ file of these sounds, and you both download the free SFZ Player, you can exchange keyboard parts that trigger these samples in the SFZ Player. What’s more, the SFZ format accommodates Ogg Vorbis (compressed) files, so you can use really big files, but compress them for faster file transfers over the net. When it’s time to mix down, simply change the SFZ file to reference the original WAV files instead of the compressed ones.

SFZ BASICS

You can think of the SFZ format as being similar to SoundFonts, but an SFZ file has two components instead of one: a collection of samples (typically stored in a folder), and a text-based definition file that describes what to do with that collection of samples. You can create an SFZ definition file in any plain text editor, such as Notepad.

For example, suppose you sampled a Minimoog at the F key for every octave over a five-octave range (F1, F2, F3, F4, and F5). You can then create an SFZ file that references these waveforms, and describes the root key of each waveform, as well as the keyboard range each waveform should cover.

But those are just the basics. The SFZ format can also specify detuning, transposition, filtering, envelopes, sample start time, looping, and many other characteristics. Waveforms can overlap, and you can define as many waveforms as you want in an SFZ file. It’s therefore possible to specify a complete instrument using SFZ, and if you load that SFZ file into an SFZ-compatible instrument, it will play back exactly as you intended.

A PRACTICAL EXAMPLE

Creating an SFZ definition file requires some programming chops, but, fortunately, the commands are pretty simple and musician friendly. To elaborate further on the Minimoog example mentioned above, I recently created a multi-sampled set of Minimoog waveforms suitable for loading into SFZ-compatible synths. I sampled the various waveforms at consistent notes, and gave them consistent names (SawF1.WAV, SawF2.WAV, TriangleF1.WAV, and so on). I stored all the samples in a folder titled Minimoog Waveforms, then created an SFZ definition file for the sawtooth wave samples that defined the note range covered by each sample. Once I created that file, creating another SFZ file for the triangle wave samples simply involved doing a find on “Saw,” replacing each instance with “Triangle,” and then saving the file under a different name (MinimoogTriangle.sfz). I did the same thing for the Pulse, Square, and other waveforms.
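Since that step is plain text substitution, it also automates easily. Here’s a throwaway Python version (MinimoogSaw.sfz is my assumed name for the sawtooth file; the article only names MinimoogTriangle.sfz):

    # Generate the triangle definition from the saw one via find-and-replace.
    with open("MinimoogSaw.sfz") as f:
        saw = f.read()
    with open("MinimoogTriangle.sfz", "w") as f:
        f.write(saw.replace("Saw", "Triangle"))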

Once these SFZ files were done, I could load one into Rapture. The multisampled collection of waveforms then became a single “element” within Rapture (think of an element as roughly equivalent to a voice). I’ve also created multisamples with guitar notes, drum sounds, effects, and various other sounds.

The main “unit” of an SFZ definition file is the region. Here’s the syntax for creating a simple region:

<region> pitch_keycenter=F1 lokey=C0 hikey=C2 sample=Minimoog Waveforms\SawF1.wav

The <region> header starts a new region. This says that the sample being used has a root key of F1, and should cover the key range of C0 up to and including C2. To reference where the sample comes from, the sample opcode points to the “Minimoog Waveforms” folder and, after a backslash, specifies the file name within the folder.

Creating regions for the other samples simply involves substituting some different names, root notes, and key-range values. You can also add comments, as long as the line starts with two slashes. Figure 1 shows a file that defines a complete Minimoog sawtooth wave multisample with five samples.
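Figure 1 isn’t reproduced here, but a complete sawtooth definition along those lines might look like the sketch below. The key ranges are illustrative guesses (each sample covering roughly an octave around its root); only the F1 region’s range comes from the example above.

    // MinimoogSaw.sfz - five-sample Minimoog sawtooth multisample
    <region> pitch_keycenter=F1 lokey=C0 hikey=C2 sample=Minimoog Waveforms\SawF1.wav
    <region> pitch_keycenter=F2 lokey=C#2 hikey=C3 sample=Minimoog Waveforms\SawF2.wav
    <region> pitch_keycenter=F3 lokey=C#3 hikey=C4 sample=Minimoog Waveforms\SawF3.wav
    <region> pitch_keycenter=F4 lokey=C#4 hikey=C5 sample=Minimoog Waveforms\SawF4.wav
    <region> pitch_keycenter=F5 lokey=C#5 hikey=C8 sample=Minimoog Waveforms\SawF5.wav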

For more information on the SFZ format, including a complete list of commands (opcodes), surf to www.cakewalk.com/DevXchange/sfz.asp, or check out the book Cakewalk Synthesizers by Simon Cann [Thomson Course Technology]. Granted, not everyone will get into programming SFZ files, but I’ve found it to be a tremendously useful format for creating sophisticated multisample collections that can play back in a variety of instruments.

Tuesday, April 15, 2008

MixLine Tip: Save My Snare!

RECORDING TIP: SAVE MY SNARE!
- MixLine, Kevin Becka

A badly recorded snare can often be helped by duplicating it and then treating the duplicates as separately processed members of the same "club." For starters, duplicate your track, either by multing it to a second channel on your console or physically duplicating it in your DAW. One of these dupes will be optimized for punch, while the other will be used to add snap. Alone, they will not have what it takes to flavor your drum mix, but that's the point - it's the combination that will work.

First, bring out the snare's low end on one track with some EQ at 100 to 200 Hz. Remember, this will be the foundation of your track, so don't be afraid to go for punch. Then treat the other track more severely, digging out the transient with a compressor set to a slow attack time (30 to 50 ms) and a fairly fast release (100 to 300 ms). The release time is tempo-dependent, so you can get away with a slower release time on a ballad than you could on an up-tempo song. Try to stay away from the dreaded "pumping," where the compressor gasps for breath in-between hits, bringing up the noise floor unnaturally. Set the EQ to bring out more of the top frequency range of the instrument at 1 to 3 kHz.

Once both tracks please your ear, you can mix them accordingly. If you're mixing in a DAW, then make sure your latency is lined up perfectly by using delay compensation, or physically correct it by sliding the tracks back by the amount of delay. Most DAWs will let you see how much latency is being introduced by a group of plug-ins. Take that number and move your entire track back to match up with its original position. Keep in mind that one track's latency may not match the others due to differences in plug-ins. - Kevin Becka
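If you want to experiment with the two-dupe structure outside a console or DAW, here's a rough Python sketch of the EQ half of the trick, using the standard RBJ cookbook peaking filter. The compressor stage and latency alignment are deliberately left out, and the frequencies and gains are just starting points taken from the tip above:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, sr, f0, gain_db, q=1.0):
        # RBJ audio-EQ cookbook peaking filter coefficients.
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / sr
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return lfilter(b / a[0], a / a[0], x)

    def save_my_snare(snare, sr=44100):
        punch = peaking_eq(snare, sr, 150.0, 6.0)   # 100-200 Hz: foundation
        snap = peaking_eq(snare, sr, 2000.0, 6.0)   # 1-3 kHz: transient edge
        return 0.5 * (punch + snap)                 # sum the two "club members"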


Sunday, February 17, 2008

New Track: Follow

I've been mulling over a song idea for several years and am just now getting around to completing it. I have laid down what I believe are the essential instrumental tracks, and have begun the vocal tracks. Interestingly (for me), this song began life inspired by a Delirious song (I don't remember which one), but as it grew and evolved it became something completely different: a musical tribute to some of the most influential ideologists (for me):

  • John Lennon (with the whole "give peace a chance" and "all we need is love" thing)
  • Rev. Dr. Martin Luther King, Jr. (with the whole "content of your character" vs. "color of your skin" thing)
  • Jesus (with the whole "love your neighbor as yourself", "lose your life to keep it" thing)
Lyrically, I hope to show how these men taught some things that are inspirational for me - things I aspire to become / achieve; things that I hope speak to all people everywhere throughout the generations. Musically, it is a tribute to both the Beatles' Sgt. Pepper album and the '90s grunge sound (à la Bush / Stone Temple Pilots / Pearl Jam / Nirvana).

We'll see if I can pull this off. I have questions as to whether I'm trying to cover too much ground, both lyrically and musically. I remember my dad used to say "keep it simple, stupid" (and I would laugh hysterically, being a 7-year-old boy). I'm afraid I've digressed from that concept on this song. But something inside of me keeps pushing in that direction regardless of what my common sense tells me. We'll see if this is a train wreck or a great song.
