
Thursday, September 4, 2008

Prepping A Vocal For The Mix - EQ Magazine

October 2007

As far as I’m concerned, the vocal is the most important part of a song. It’s the conversation that forms a bond between performer and listener, and the focus to which other instruments give support.

And that’s why you must handle vocals with kid gloves. Too much pitch correction removes the humanity from a vocal, and getting overly aggressive with composite recording (the art of piecing together a cohesive part from multiple takes, and the subject of a future Vocal Cords) can destroy the continuity that tells a good story. Even too much reverb or EQ is more than a bad sonic decision; it can affect the vocal’s emotional dynamics. But you also want to apply enough processing to make sure you have the finest, cleanest vocal foundation possible — without degrading what makes a vocal really work. And that’s why we’re here.

Vocals are inherently noisy. You have mic preamps, low-level signals, and significant amounts of amplification. Furthermore, you want the vocalist to feel comfortable, and that can lead to problems as well. For example, I prefer not to sing into a mic on a stand unless I’m playing guitar at the same time. I want to hold the mic, which means mic-handling noise is a possibility. Pop filters are also an issue — some engineers don’t like to use them — but they may be necessary to cut out low-frequency plosives. In general, I think you’re better off placing fewer restrictions on the vocalist, and having to fix things in the mix, rather than having the vocalist think too hard about, say, mic handling. A great vocal performance with a small pop or tick trumps a boring, but perfect, vocal.

Okay, now let’s prep that vocal for the mix.

REMOVE HISS

The first thing I do with a vocal is turn it into one long track that lasts from the start of the song to the end, then bounce it to disk so I can bring it into a digital audio editing program. Despite the sophistication of host software, with a few exceptions (Adobe Audition and Samplitude come to mind), we’re not quite at the point where a multitrack host can always replace a solid digital-audio editor.

Once the track is in the editor, the first stop is generally noise reduction. Sound Forge, Adobe Audition, and Wavelab have excellent built-in noise reduction algorithms, but you can also use stand-alone programs such as Diamond Cut 6. Choose a noise reduction algorithm that takes a “noiseprint” of the noise, and then subtracts it from the signal. Using this simply involves finding a portion of the vocal that consists only of hiss, saving that as a reference sample, then instructing the program to subtract anything with the sample’s characteristics from the vocal (Figure 1).

There are two cautions, though. First, make sure you sample the hiss only. You’ll need only a hundred milliseconds or so. Second, don’t apply too much noise reduction. About 6dB to 10dB should be enough — for reasons that will become especially obvious in the next section. Otherwise, you may remove parts of the vocal itself, or add artifacts, both of which contribute to artificiality. Removing hiss makes for a much more open vocal sound that also prevents “clouding” the other instruments.
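For the curious, the noiseprint idea can be sketched in a few lines of NumPy. This is strictly an illustration (real tools use overlapping, windowed frames and smarter spectral floors), and every function name and parameter here is invented for the example, not taken from any product:

```python
import numpy as np

def noiseprint(noise_sample, frame=1024):
    """Average magnitude spectrum of a hiss-only region of the vocal."""
    frames = [noise_sample[i:i + frame]
              for i in range(0, len(noise_sample) - frame + 1, frame)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

def reduce_noise(signal, print_mag, frame=1024, amount_db=8.0):
    """Subtract a scaled copy of the noiseprint from each frame's
    spectrum. amount_db stays in the modest 6-10dB range recommended
    above; push it further and you start eating the vocal itself."""
    scale = 1.0 - 10 ** (-amount_db / 20.0)  # portion of the hiss to remove
    out = np.zeros(len(signal))
    for i in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[i:i + frame])
        mag, phase = np.abs(spec), np.angle(spec)
        mag = np.maximum(mag - scale * print_mag, 0.0)  # never go negative
        out[i:i + frame] = np.fft.irfft(mag * np.exp(1j * phase), frame)
    return out
```

Note how the depth of the subtraction comes from a dB amount rather than simply zeroing everything that matches the noiseprint; that restraint is exactly the second caution above.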

DELETE SILENCES

Now that we’ve reduced the overall hiss level, it’s time to delete all the silent sections between vocal passages. If you do this, the voice will mask any residual hiss while it’s present, and when there’s no voice, there will be no hiss at all (also see the Power App Alley in this issue on Sonar 6, which describes how to reclaim disk space when removing silence).

With all programs, you start by defining the region you want to remove. From there, different programs handle creating silence differently. Some will have a “silence” command that reduces the level of the selected region to zero. Others will require you to alter the level directly, such as reducing the volume by “-Infinity” (Figure 2). Furthermore, the program may introduce a crossfade between the processed and unprocessed section, thus creating a less abrupt transition. If it doesn’t, you’ll probably need to add a fade-in from the silent section to the next section, and a fade-out when going from the vocal into a silent section.
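To sketch what the editor is doing for you (assuming audio held in a NumPy array; the function name is made up for this example), silencing a region with short edge fades looks like this:

```python
import numpy as np

def silence_region(audio, start, end, fade=256):
    """Zero out audio[start:end], with short linear fades on either
    side so the transitions into and out of silence aren't abrupt.
    Assumes the region sits at least `fade` samples from each end."""
    out = audio.copy()
    out[start:end] = 0.0
    ramp = np.linspace(1.0, 0.0, fade)
    out[start - fade:start] = audio[start - fade:start] * ramp  # fade-out into silence
    out[end:end + fade] = audio[end:end + fade] * ramp[::-1]    # fade-in back out
    return out
```

The fade length here (a few milliseconds at typical sample rates) is an arbitrary choice; the point is only that some ramp exists at each boundary.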

REDUCE BREATHS AND ARTIFACTS

I feel that breath inhales are a natural part of the vocal process, and it’s a mistake to use hard disk recording to get rid of these entirely. For example, an obvious inhale cues the listener that the subsequent vocal section is going to “take some work.”

That said, applying any compression later on will bring up the levels of any vocal artifacts, possibly to the point of being objectionable. I use one of two processes to reduce the level of artifacts.

The first option is to simply define the region with the artifact, and reduce the gain by 3dB to 6dB (Figure 3). This will be enough to retain the essential character of an artifact, but make it less obvious compared to the vocal.

The second option is to again define the region, but this time, apply a fade-in (Figure 4). This also may provide the benefit of fading up from silence if silence precedes the artifact.

Mouth noises can be problematic, as these are sometimes short, “clicky” transients. In this case, you can sometimes cut just the transient, and paste some of the adjoining signal on top of it (choose an option that mixes the signal with the area you removed; overwriting might produce a discontinuity at the start or end of the pasted region).
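The two level-taming options above can be sketched in NumPy as follows; the function names and the default dB figure are illustrative only:

```python
import numpy as np

def cut_gain(audio, start, end, db=4.5):
    """Option 1: drop the artifact's region by a fixed 3dB-6dB so it
    keeps its character but sits behind the vocal."""
    out = audio.copy()
    out[start:end] *= 10 ** (-db / 20.0)
    return out

def fade_in(audio, start, end):
    """Option 2: a linear fade-in across the region, rising from
    silence to full level by the region's end."""
    out = audio.copy()
    out[start:end] *= np.linspace(0.0, 1.0, end - start)
    return out
```

Both leave everything outside the defined region untouched, which is why defining the region accurately is the step that matters most.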

PHRASE-BY-PHRASE NORMALIZATION

A lot of people rely on compression to even out a vocal’s peaks. That certainly has its place, but there’s something you need to do first: Phrase-by-phrase normalization. Unless you have the mic technique of a k.d. lang, the odds are excellent that some phrases will be softer than others. If you apply compression, the lower-level passages might not be affected very much, whereas the high-level ones will sound squashed. It’s better to get the entire vocal to a consistent level first, before applying any compression. This will retain more overall dynamics. If you need to add an element of expressiveness later on (e.g., the song gets softer in a particular place, so you need to make the vocal softer), you can do this with judicious use of automation.

Referring to Figure 5, the upper waveform is the unprocessed vocal, and the lower waveform shows the results of phrase-by-phrase normalization. Note how the level is far more consistent in the lower waveform.

However, be very careful to normalize entire phrases. You don’t want to get so involved in this process that you start normalizing, say, individual words. Within any given phrase there will be certain internal dynamics, and you definitely want to retain them.
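A rough NumPy sketch of the whole idea: detect phrases as regions where a smoothed envelope rises above a threshold, then scale each entire phrase to a common peak. The threshold and window sizes are arbitrary illustrations; in practice you find the phrase boundaries by eye and ear.

```python
import numpy as np

def find_phrases(audio, thresh=0.01, win=512, min_len=1000):
    """Crude phrase detection: regions where a moving-average
    envelope exceeds thresh, ignoring very short blips."""
    env = np.convolve(np.abs(audio), np.ones(win) / win, mode="same")
    loud = (env > thresh).astype(int)
    d = np.diff(np.concatenate(([0], loud, [0])))
    starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

def normalize_phrases(audio, phrases, target=0.9):
    """Scale each whole (start, end) phrase so its peak hits target.
    One gain per phrase preserves the dynamics inside the phrase."""
    out = audio.copy()
    for start, end in phrases:
        peak = np.max(np.abs(out[start:end]))
        if peak > 0:
            out[start:end] *= target / peak
    return out
```

Notice that each phrase gets exactly one gain factor; that single number is what keeps the internal dynamics intact while bringing the phrases to a consistent level.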

ARE WE PREPPED YET?

DSP is a beautiful thing. Now our vocal is cleaner, of a more consistent level, and it has any annoying artifacts tamed — all without reducing any natural qualities the vocal may have. At this point, you can start doing more elaborate processes, such as pitch correction (but please, apply it sparingly!), EQ, dynamics control, and reverb. But, as you add these, you’ll be doing so on a much firmer foundation.

Sunday, August 3, 2008

TV Intro Song: New Day Ministries


A local TV program host hired me to create a video intro and outro for her TV show, which included creating an original song. I enjoyed the challenge of creating the song and video with a very limited budget and time constraints. I always have fun when I get to be creative.

I posted the MP3 version of the song on my personal SoundClick page.

I uploaded the video to YouTube, but the process took F - O - R - E - V - E - R! I think YouTube would be better off having people convert their videos to FLV first and then upload them, instead of the other way around. Just my $0.02 worth.

For the song: I tracked, recorded, and mixed with Cakewalk's Project 5, using Dimension Pro and Amplitube for sounds and FX. I used Sony Sound Forge for compression and mastering. I used Audacity for last-minute editing and converting to MP3 for upload to the website.

For the video I used Adobe's Premiere Pro and royalty-free video footage from iStockPhoto.

New SoundClick Widget: Live Ozarks Bluegrass

I am the live audio engineer for a bluegrass band which regularly plays at the historic Star Theater in Willow Springs, Missouri. I occasionally bring my recording gear and simply capture the outgoing audio signal from the soundboard (before it gets further processed by the EQ). If I enjoy the show, or even just a few songs, I'll edit the songs at home and burn a few CDs for the band members. I might even post the songs up on SoundClick and blog about it as well.

I put the SoundClick widget for those recordings up on this website today. I do not play on any of these recordings. I simply run the live sound and capture the outgoing audio from the soundboard. No extra signal processing, EQing... nothing. These are RAW sound files.

Before uploading, I do some slight audio editing in Audacity and perhaps a very slight amount of EQ work. That is all I do to these files. Then I upload them to SoundClick and I'm done.

Tuesday, July 22, 2008

M-Audio Release "Overdub" Comics



M-Audio's new comic book-style studio guide covers all the basics, plus advanced tips and techniques. Volume One: Studio Monitors explores reference monitoring, including proper setup and installation.

Free Audio Loops and Samples

Here is a list of websites I keep in my Google bookmarks for free audio loops and samples.  There are of course many more sites available than these, but these are the ones I like to use.  If you want me to add more to this post, comment on it and give me your sites.  Thanks!
Those are the ones I use regularly.  Enjoy.

Monday, July 7, 2008

Lady Mondegreen

I've got to use that as a title for a song at some point. Read what the folks at the Merriam-Webster dictionary are doing with it. I love mondegreens!

Saturday, April 19, 2008

EQ Magazine: Managing Multisamples With SFZ


April 2008

The following may seem techy, and, frankly, that techy aspect inhibited me from checking out the SFZ file format. But once I finally wrapped my head around the concept, I was glad I did.

The SFZ file format—a license-free spec, even for commercial purposes—was created by synth designer Rene Ceballos, and it defines how multisamples should be handled within an SFZ-compatible instrument. The format is compatible with several Cakewalk instruments, including Dimension, Session Drummer 2, Rapture, and DropZone. But it’s also compatible with the free, VST-compatible SFZ Player that works in any VSTi-compatible host (download the Player at www.project5.com/products/instruments/sfzplayer/default.asp), so the format’s usefulness extends far beyond Cakewalk instruments.

For example, suppose you work with Samplitude, you’re collaborating with a friend who uses Cubase, and you have a bunch of “found sound” samples you want to use as rhythmic elements. If you create an SFZ file of these sounds, and you both download the free SFZ Player, you can exchange keyboard parts that trigger these samples in the SFZ Player. What’s more, the SFZ format accommodates Ogg Vorbis (compressed) files, so you can use really big files, but compress them for faster file transfers over the net. When it’s time to mix down, simply change the SFZ file to reference the original WAV files instead of the compressed ones.

SFZ BASICS

You can think of the SFZ format as being similar to SoundFonts, but an SFZ file has two components instead of one: a collection of samples (typically stored in a folder), and a text-based definition file that describes what to do with that collection of samples. You can create an SFZ definition file in any plain text editor, such as Notepad.

For example, suppose you sampled a Minimoog at the F key for every octave over a five-octave range (F1, F2, F3, F4, and F5). You can then create an SFZ file that references these waveforms, and describes the root key of each waveform, as well as the keyboard range each waveform should cover.

But those are just the basics. The SFZ format can also specify detuning, transposition, filtering, envelopes, sample start time, looping, and many other characteristics. Waveforms can overlap, and you can define as many waveforms as you want in an SFZ file. It’s therefore possible to specify a complete instrument using SFZ, and if you load that SFZ file into an SFZ-compatible instrument, it will play back exactly as you intended.

A PRACTICAL EXAMPLE

Creating an SFZ definition file requires some programming chops, but, fortunately, the commands are pretty simple and musician friendly. To elaborate further on the Minimoog example mentioned above, I recently created a multisampled set of Minimoog waveforms suitable for loading into SFZ-compatible synths. I sampled the various waveforms at consistent notes, and gave them consistent names (SawF1.WAV, SawF2.WAV, TriangleF1.WAV, and so on). I stored all the samples in a folder titled Minimoog Waveforms, then created an SFZ definition file for the sawtooth wave samples that defined the note range covered by each sample. Once I created that file, creating another SFZ file for the triangle wave samples simply involved doing a find on “Saw,” replacing each instance with “Triangle,” and then saving the file under a different name (MinimoogTriangle.sfz). I did the same thing for the Pulse, Square, and other waveforms.

Once these SFZ files were done, I could load one into Rapture. The multisampled collection of waveforms then became a single “element” within Rapture (think of an element as roughly equivalent to a voice). I’ve also created multisamples with guitar notes, drum sounds, effects, and various other sounds.

The main “unit” of an SFZ definition file is the region. Here’s the syntax for creating a simple region:

<region> pitch_keycenter=f1 lokey=c0 hikey=c2 sample=Minimoog Waveforms\SawF1.wav

This says that the sample being used has a root key of F1, and should cover the key range of C0 up to and including C2. To reference where the sample comes from, the sample opcode points to the “Minimoog Waveforms” folder and, after a backslash, specifies the file name within that folder.

Creating regions for the other samples simply involves substituting some different names, root notes, and key-range values. You can also add comments, as long as the line starts with two slashes. Figure 1 shows a file that defines a complete Minimoog sawtooth wave multisample with five samples.
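Figure 1 isn’t reproduced here, but based on the description above, a complete sawtooth definition file would look something like the following sketch. The exact key ranges are my own illustrative choices, not the article’s:

```
// MinimoogSaw.sfz -- sawtooth multisample, one sample per octave
<region> pitch_keycenter=f1 lokey=c0 hikey=b1 sample=Minimoog Waveforms\SawF1.wav
<region> pitch_keycenter=f2 lokey=c2 hikey=b2 sample=Minimoog Waveforms\SawF2.wav
<region> pitch_keycenter=f3 lokey=c3 hikey=b3 sample=Minimoog Waveforms\SawF3.wav
<region> pitch_keycenter=f4 lokey=c4 hikey=b4 sample=Minimoog Waveforms\SawF4.wav
<region> pitch_keycenter=f5 lokey=c5 hikey=c8 sample=Minimoog Waveforms\SawF5.wav
```

Each sample covers roughly the octave around its root; the player repitches notes away from the root, which is why sampling every octave keeps the stretching inaudible.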

For more information on the SFZ format, including a complete list of commands (opcodes), surf to www.cakewalk.com/DevXchange/sfz.asp, or check out the book Cakewalk Synthesizers by Simon Cann [Thomson Course Technology]. Granted, not everyone will get into programming SFZ files, but I’ve found it to be a tremendously useful format for creating sophisticated multisample collections that can play back in a variety of instruments.
