OUT OF THE VOID
27-07-11 | Production Q&A #2

The three questions in this post came from people who wanted to remain anonymous, and the more I think about it, the more I see their reasons. So I think from here out, any questions I field for the Q&A will only be posted anonymously. Hopefully that will get more people to submit questions as well. So, let’s get to it.

1. Why do I need a better soundcard if I don’t record anything? Is it really worth spending more than $1000 on?

In the past, when most people were still using hardware synths and drum machines, not to mention other “real” instruments, it was easy to justify the expense of a really nice soundcard. In fact, I often said it was one of the first things people should upgrade, because having really good A/Ds (analog-to-digital convertors) made a very noticeable difference in the quality of anything you recorded.

These days, with so much of the production process happening entirely “in the box” for some people, the advantages can seem less tangible. It’s entirely possible to achieve 100% professional results with nothing but the built-in audio interface in your laptop or desktop, after all. However, I do think there are still some good reasons for going with a separate, reputable soundcard:

– Lower latency, typically. For the most part, a professional soundcard is going to offer you lower latency with less CPU overhead. I’ll be the first to admit that some producers place too much emphasis on judging a card by how ridiculously low its latencies can go, but to a certain extent it can help. For those people performing with virtual instruments in real time, the benefits are certainly noticeable up to a point. For me, anything less than 128 samples is fine for all my needs, live or in the studio, but most of the really good soundcards can go still lower without too much increase in CPU overhead.

Just remember that sound travels roughly 1 foot through the air per millisecond. Musicians have been jamming for years and staying in time while standing 10 feet or more away from each other, so you don’t need super low latencies to get your point across (there’s a quick back-of-the-envelope sketch after this list if you want the actual numbers).

– Stability. Drivers for higher-end soundcards tend to be updated more often, and tend to be of higher quality in terms of how often (or not!) they crash. Cheap gaming soundcards might claim to be of the best quality, but more often than not it’s their owners who are running into the most issues and posting for help on forums.

– Imaging. Good convertors can do amazing things compared to just OK ones. Your music will sound like it has a better sense of space: depth (front-to-back imaging) and left-to-right localization (how easy it is to accurately tell where an instrument sits in the stereo spread). Just remember that this only affects what YOU hear though; the actual audio in your file is going to be exactly the same no matter what D/A you’re using to monitor it.

The benefit comes from the fact that this increase in clarity can help us better judge how much reverb is too much, or when we’ve panned instruments just a little too close together, or whether we’ve applied too much of that magical stereo-widener plug-in and the mix now sounds off-balance. Basically, when you can hear better, you can make more accurate decisions when writing and mixing your music. And those decisions affect how people with lesser playback systems hear what you release.
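To put that latency point in perspective, here’s the quick back-of-the-envelope sketch I mentioned above (plain Python, purely illustrative): it converts a few common buffer sizes into milliseconds and the equivalent distance through the air, using the rough 1-foot-per-millisecond figure. Real-world round-trip latency will be a bit higher once converter and driver overhead are added, so treat these as ballpark numbers only.

# Rough buffer-size-to-latency math, assuming a 44.1kHz sample rate and the
# approximate 1 foot per millisecond speed of sound mentioned above.
SPEED_OF_SOUND_FT_PER_MS = 1.0  # rough figure, close enough for this comparison

def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    """Time needed to fill one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

for buf in (64, 128, 256, 512):
    ms = buffer_latency_ms(buf)
    feet = ms * SPEED_OF_SOUND_FT_PER_MS
    print(f"{buf:>3} samples ~ {ms:.1f} ms ~ standing {feet:.1f} feet from the other player")

A 128-sample buffer works out to roughly 3ms, about the same as standing three feet away from another musician, which is why chasing ever-smaller buffer sizes stops paying off pretty quickly.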

At what point does this increase in quality start to outweigh the cost of upgrading? If you’re looking for your first soundcard, or even just something portable to use live, then I think you should realistically be looking to spend around $200-300 at least. You’ll get a noticeable improvement over the stock soundcard (many of which aren’t THAT bad these days), and you’re not likely spending more than you can actually hear.

What I mean by that is that it makes no sense to spend $2000 on a soundcard if you’re still using $200 speakers in an untreated room. All of these things work together, and I think most people will go through phases where they get the best results upgrading everything over time in cycles: speakers, soundcard, acoustics, speakers, soundcard, acoustics, etc. Or you have a lot of money and get top-notch stuff right off the bat, boo hoo for you.

After the $200-300 price range, I think you’re realistically going to have to spend $500-1000 on a card to get any real, noticeable increase in audio quality. For most producers, this is probably as much as they’ll ever spend on an audio interface, and unless you’re putting a lot more money into your monitoring chain and acoustic treatment, spending more might not yield that great a difference in terms of pure audio quality.

It’s the law of diminishing returns: the more you spend past this point, the smaller the improvements in how things sound. Sure, a $3000 interface will probably sound better than a $1500 one, but if the difference is only around 5%, is that worth another $1500? I think at that point you’re in one of two scenarios: either you do this for a living and the difference is genuinely noticeable and useful to your job, or the rest of your studio is already where you want it, and then the price difference makes more sense. Either way, for 95% of musicians, I think spending more than $1000-1500 on an audio interface is probably not going to net you any huge advantages.

2. Why do some people say normalizing my audio files is bad, and others say it’s not a big deal?

The thing to realize about digital audio is that any time you perform ANY operation on an audio file, you are almost always destructively altering it. That is to say, you are in some way (often far smaller than you can imagine) permanently altering the file in a manner that is irreversible. Digital audio processing involves math, and this math often produces numbers we can’t store with 100% accuracy, so things get rounded.

Now, there are a lot of ways we have learned to minimize the extent of this, through things like floating-point processing or dithering, for instance. But from a theoretical standpoint, operations like normalization are a form of destructive processing, and to many people that means they should be avoided whenever possible.
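If you want to see that rounding in action, here’s a tiny sketch (Python with numpy; the values are made up purely for illustration) that lowers some 16-bit samples by 5dB and then raises them back by 5dB. In floating point you’d get the original back almost exactly, but once the results are rounded back to 16-bit integers, some samples no longer match what you started with:

import numpy as np

def apply_gain_16bit(samples, gain_db):
    """Apply gain to 16-bit integer samples, rounding the result back to 16-bit."""
    gain = 10.0 ** (gain_db / 20.0)
    out = np.rint(samples.astype(np.float64) * gain)
    return np.clip(out, -32768, 32767).astype(np.int16)

rng = np.random.default_rng(0)
original = (rng.standard_normal(10) * 8000).astype(np.int16)  # fake "audio" data

round_trip = apply_gain_16bit(apply_gain_16bit(original, -5.0), +5.0)
print("samples changed by rounding:", np.count_nonzero(original != round_trip))

The changes are tiny, down at the level of the lowest bit, but they’re real and they’re permanent once the file is saved. That’s all “destructive” means here.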

But let’s step back for a second and look at it from a practical standpoint. Say you’ve rendered or exported your latest song, and you realize that the highest peak is at -5dBFS. This means you’re not using all of the available bit resolution of your audio file, which to most people simply means that it sounds kind of quiet.

In this scenario, the safest and least intrusive method of raising the volume so we are using all of the file’s bit depth is to normalize it (with caveats, to be explained shortly). We simply apply 5dB of gain to every sample, raising the overall volume with only some tiny rounding errors in the very lowest bit as a downside. In comparison, what are the other ways we could raise the volume of the audio?

Well, the usual suspects are limiting, compression, or clipping, and I don’t think anyone would argue that these alter the original audio less than normalizing does. So from a practical standpoint, normalizing is the safest and cleanest-sounding way to raise the audio level to use as much of our file format’s available resolution as possible.
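To make the comparison concrete, here’s a minimal sketch of what a peak normalizer actually does under the hood (Python/numpy, with illustrative names, working on floating-point samples in the -1.0 to 1.0 range): it finds the loudest sample in the file and multiplies everything by a single gain factor, nothing more.

import numpy as np

def normalize_peak(audio, target_dbfs=0.0):
    """Scale the whole file so its highest peak lands at target_dbfs."""
    peak = np.max(np.abs(audio))
    if peak == 0.0:
        return audio  # silence, nothing to do
    target_linear = 10.0 ** (target_dbfs / 20.0)
    return audio * (target_linear / peak)

# Example: a mix peaking around -5dBFS gets roughly 5dB of clean gain.
quiet_mix = np.array([0.1, -0.3, 0.562, -0.2])   # highest peak ~ -5dBFS
louder_mix = normalize_peak(quiet_mix)           # highest peak now at 0dBFS

Because it’s one constant gain applied to the whole file, the balance between loud and quiet parts is untouched, which is exactly why it’s cleaner than limiting, compression, or clipping.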

There is one downside though, and that is the fact that most of the time, normalized files really do use ALL of the available dynamic range, which means they peak at exactly 0dBFS. This can lead to a problem called Inter-sample Modulation Distortion. Google it for the details, but the short version is that when you have multiple consecutive samples sitting at 0dBFS, the reconstructed waveform between those samples can actually rise above full scale, and some digital-to-analog convertors will produce small amounts of distortion as a result. The issue is less common these days than it was early on in the digital era, but it’s still something to be aware of. Read on.

I usually tell people to think about WHEN they are normalizing before deciding whether or not to do it. For instance, if you’re working with audio files in your DAW, then normalizing is probably not a problem at all, because that audio file is not the final product going to the listener. It’s still going to pass through the audio processing of the DAW, be turned down by track or master faders, be affected by master channel effects, etc. Basically, the normalized audio file is never going to be played back at full scale (a true 0dBFS), so Inter-sample Modulation Distortion (IMD) is not a concern. Your D/A will never see that 0dBFS level the raw audio file is normalized to.

However, if you’re mastering your own music, or generating some other files meant to be listened to immediately afterwards (typically on CD), then you should rethink normalizing. At the very least, use a normalizing tool that allows you to set the final output level to something other than 0dBFS. Setting it to -0.3dBFS will change almost nothing in terms of perceived loudness, but it’s just low enough to avoid almost all instances of IMD.
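Using the illustrative normalize_peak() sketch from earlier as an example, that just means passing a ceiling slightly below full scale instead of accepting the 0dBFS default (final_mixdown here is simply a stand-in for your rendered mix loaded as floating-point samples):

# Normalize the final master to -0.3dBFS instead of 0dBFS to sidestep IMD.
safe_master = normalize_peak(final_mixdown, target_dbfs=-0.3)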

So, to summarize: when you’re writing your track, normalizing audio files is fine; it’s probably the cleanest way to boost the volume of your audio files for whatever reason you need to. When you’re generating the end product, something meant to be listened to by others with no further processing, then make sure you only normalize if you can manually set the final output level to something other than 0dBFS.

3. Why are Macs better than PCs?

HA! I’m not touching that one, nice try! Silly blog trolls….

——————————-

As always, I hope some people find all this useful. Feel free to send me more questions, discuss this Q & A in the comments, or pass this on to anyone you feel might be interested. I’m leaving tomorrow to play the Photosynthesis Festival I’ve been blogging about recently, so if I don’t reply right away I’ll do so as soon as I get back next week.

Don’t forget you can sign up for email or Twitter notifications of new postings, and please click that “Like” button in the upper right of the page if you enjoyed reading this. Thanks, and I’ll have some new posts soon!

Tarekith