1. Why can’t I get my latency down to 0?
Despite the wealth of information online about what latency is and how it affects computer-based musicians, it's still the most common issue I see people struggling to come to grips with. Quite often it's blamed for issues that have nothing really to do with latency, or people have unrealistic expectations about the settings in their DAW that relate to it.
At its simplest, latency is nothing more than the delay between when you initiate an action and when you hear the result (typically expressed in milliseconds). This could be playing a note on your MIDI keyboard to trigger a software instrument in your DAW, turning a knob on your MIDI controller to change an effect parameter, or how responsive your guitar feels when using software-modeled amps and effects.
But what causes this delay, and how can we minimize it? Or more importantly in my mind, what can we consider an acceptable latency?
In the simplest of terms, audio latency is caused when your soundcard sends data to and from your computer. After an analog signal is converted to digital information, the soundcard stores chunks of this data in packets to send to the computer. It's the size of these packets of digital information that determines your latency. The size of each packet is typically user-adjustable in your DAW, where you'll see it expressed in "samples." So a setting of 512 means that there are 512 samples in each packet of data.
The larger the packet size (i.e. the more samples it contains), the more reliable the data transfer, and the less CPU strain on your computer. Of course, a larger packet size also means that your latency increases, as the soundcard drivers have to wait longer to fill a packet before sending it to the computer (and vice versa, from the computer to the soundcard). So setting your latency becomes a trade-off between responsiveness and CPU overhead.
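To put rough numbers on that trade-off, here's a minimal sketch of the buffer math. The 44.1 kHz sample rate is an assumption for illustration; your interface may be running at 48 kHz or higher, which changes the results slightly.

```python
# Sketch of the buffer-size math: one buffer of N samples at a given
# sample rate adds N / rate seconds of delay in each direction.
# Assumes a 44.1 kHz sample rate (an assumption -- check your setup).

def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    """One-way latency added by a single buffer of audio, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000

for size in (64, 128, 256, 512, 1024):
    one_way = buffer_latency_ms(size)
    # Round trip is at least the input buffer plus the output buffer;
    # converters and driver overhead add a few more milliseconds on top.
    print(f"{size:5d} samples -> {one_way:5.1f} ms one way, "
          f">= {2 * one_way:5.1f} ms round trip")
```

This is why a 256-sample buffer works out to a round-trip figure in the low teens of milliseconds once converter and driver overhead is included.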
There is no such thing as 0 latency; all soundcards need to transfer data this way.
Luckily, most soundcards today can operate and remain stable at pretty low latencies. I typically keep my soundcard buffers set at 256 samples, which gives me a round-trip audio latency of about 13ms. For me at least, this is a perfect trade-off between responsive control over my software instruments and a reasonable CPU overhead. Certainly many soundcards can go lower than this, but honestly I rarely find I need to do that myself.
It’s easy to fall into the trap of thinking you have to set your latency as low as possible, but it’s important to keep this all in perspective too. The average speed of sound in normal conditions is about 1 foot per millisecond. So having a latency of around 10ms is roughly equal to the time it takes sound to leave a speaker and reach your ears from ten feet away.
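To make that comparison concrete, here's a tiny sketch that converts a latency figure into an equivalent speaker distance. The speed-of-sound figure is the approximation from above (about 1 foot per millisecond); the exact value varies with temperature.

```python
# Rough sketch: how far away would a speaker have to be to produce the
# same delay as a given latency? Uses ~1,125 ft/s for the speed of sound
# (roughly 1 foot per millisecond) -- an approximation, not an exact value.

SPEED_OF_SOUND_FT_PER_MS = 1.125

def equivalent_distance_ft(latency_ms):
    """Distance sound travels in `latency_ms` milliseconds, in feet."""
    return latency_ms * SPEED_OF_SOUND_FT_PER_MS

for ms in (5, 10, 13):
    print(f"{ms:2d} ms latency ~= a speaker "
          f"{equivalent_distance_ft(ms):.1f} feet away")
```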
Bands and acoustic musicians have been performing with unparalleled sync amongst themselves for centuries at these types of distances (or even greater). Blaming your sloppy playing on your soundcard's inability to achieve ridiculously low latencies seems a bit excessive, no? There's nothing wrong with using lower latencies, but at some point it almost becomes academic IMVHO. Find a balance between responsive playing and a manageable CPU load, and don't worry if you can't set your latency any lower.
2. People always say I should layer my drums to get really fat sounds, but the more layers I add, the worse it sounds.
Well, like a lot of things when it comes to making music, more doesn't always equal better. When you layer sounds that share the same frequency ranges, some of those frequencies will cancel each other out, and some will sum to form louder frequencies. After a while, you add so many layers that you end up just getting an undefined bunch of mush instead of a slamming drum sound.
Generally I find that two to three samples layered works well. The key is to not only choose great-sounding drum samples in the first place, but also ones that complement each other well. For instance, when layering kick drums, I'll often use a really deep, subby kick to provide the oomph to the sound, and a brighter kick with more click and beater-head sound to provide the character.
The other thing to pay attention to is that the samples are lined up as closely as possible so you don't get flamming. This is when you hear a very short delay between the attack of one drum layer and the other. Instead of the two sounds combining to form something new, they end up sounding like sloppily layered drums. Huge pet peeve of mine! Take the time to slide one of the samples forward or backward a couple of milliseconds at a time until you find the best location where the samples join to form a single, cohesive sound.
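The nudge-and-listen process above can be sketched as a brute-force offset search. This is just an illustration with made-up toy waveforms, not how any particular DAW aligns samples; in practice you'd do this by ear or with your DAW's nudge tools.

```python
# Sketch: find the shift (in samples) that best lines up two drum
# layers, by trying every offset and keeping the one with the highest
# correlation. The two "kicks" below are hypothetical toy waveforms.

def best_offset(a, b, max_shift):
    """Return the shift of `b` (in samples) that best aligns it with `a`."""
    def score(shift):
        return sum(a[i] * b[i - shift]
                   for i in range(len(a))
                   if 0 <= i - shift < len(b))
    return max(range(-max_shift, max_shift + 1), key=score)

kick_a = [0, 0, 0, 1.0, 0.8, 0.3, 0.1, 0, 0, 0]  # attack at sample 3
kick_b = [0, 1.0, 0.8, 0.3, 0.1, 0, 0, 0, 0, 0]  # attack at sample 1 (flams!)

shift = best_offset(kick_a, kick_b, max_shift=5)
print(f"slide layer B by {shift} samples")  # -> 2, so the attacks line up
```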
3. What’s the best synth for making deep house type of music?
Error. Question does not compute. Error.
Honestly, I don't think I've ever seen a synth marketed at only one genre. Any good, well-rounded, multi-purpose synth should have the facilities to allow you to sculpt whatever kind of sounds you want, regardless of the intended genre.
Except for NI’s Massive, only Dubstep people use that one (I kid, I kid!).
Instead of worrying about which synth other people use in their songs, focus on learning the synths you have at your disposal.
Well, that’s it for this week. As always, if you have any questions you want me to answer on the blog, drop me an email or post it in the comments. Just a quick reminder that all of my Production Guides are now in nicely formatted PDF versions too. Great for E-readers or iPads if that’s your thing.