Phase is such an elusive and often misunderstood subject that it’s been said to set apart a pro engineer from an amateur. That is, of course, a gross exaggeration. Still, it illustrates how vital yet slippery the subject is, mainly because it only becomes audible in relation to another track.
A polarity-flipped microphone by itself will generally sound perfectly fine, until one adds a second mic pointed at the same source at natural polarity. On top of that, even though phase-related issues and their proper solutions vary widely from one to another, they are often lumped together simply as ‘phase issues’, adding further confusion that may lead to a dice-throw approach (“Just click the phase button”) and potentially cause more harm than good.
This article will overview the three main types of what are generally referred to as ‘phase issues’ and the best solution for each. Let’s unphase the haze.
Take two sine waves at the same frequency and level, flip the polarity of one of them, and you’ll end up with absolute silence. That’s cool, but of course, the material we record is far more complex than a sine wave.
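You can verify the perfect cancellation yourself. A minimal sketch (the 48 kHz sample rate and 110 Hz frequency are arbitrary choices, not from the article):

```python
import math

SAMPLE_RATE = 48_000  # samples per second (an arbitrary, common audio rate)
FREQ = 110.0          # Hz (arbitrary test frequency)

# One cycle of a sine wave and its polarity-flipped copy.
n_samples = int(SAMPLE_RATE / FREQ)
wave = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(n_samples)]
flipped = [-s for s in wave]

# Summing the two yields absolute silence: every sample is exactly zero.
summed = [a + b for a, b in zip(wave, flipped)]
print(max(abs(s) for s in summed))  # → 0.0
```

Each sample cancels its inverted twin exactly, which is why a lone flipped mic sounds fine: there is nothing for it to cancel against until a second signal arrives.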
Engaging the polarity switch on the mic preamp or console is effective in two cases: matching the polarity of vintage microphones made before pin 2 was standardized as the positive terminal of an XLR connection, and pointing two mics at a sound-generating membrane, such as a loudspeaker or snare drum, from opposite sides, where air compression and expansion occur in opposite directions on each side of the membrane.
For other use cases, flipping a mic’s polarity may bring back some canceled-out frequencies while canceling others, potentially doing more damage than good. But frequency cancellations are only part of the story.
Given enough sine waves at different frequencies, levels, and envelopes, we can theoretically reproduce any acoustic or synthetic sound. You may be familiar with Additive Synthesis, which is based on this concept.
For the purpose of this example, let’s imagine our complex sound is made of two one-second-long sine waves: a fundamental frequency of 110 Hz (A2) and a harmonic at 220 Hz. We’ll place two microphones pointing at the source: the first one right at the source and the second at a distance of 1.5 meters.
Let’s take a little break from our story for some hard, cold info:
— A 110 Hz sound wave has a wavelength of ~3 meters (rounded)
— A 220 Hz sound wave has a wavelength of ~1.5 meters, or half the wavelength of our 110 Hz, A2 note fundamental
— The speed of sound in dry air at a temperature of 20º Celsius is 343 meters per second
— The speed of sound is the same for all frequencies
— At this speed, our sound will take ~4.4 ms to reach the distant mic
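These figures all follow from two divisions: wavelength is the speed of sound divided by frequency, and the delay is distance divided by the speed of sound. A quick check (the delay comes out to ~4.4 ms):

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at 20º Celsius
MIC_DISTANCE = 1.5      # meters between the close and distant mics

def wavelength(freq_hz: float) -> float:
    """Wavelength in meters: the speed of sound divided by frequency."""
    return SPEED_OF_SOUND / freq_hz

delay_s = MIC_DISTANCE / SPEED_OF_SOUND  # travel time to the distant mic

print(round(wavelength(110.0), 2))  # → 3.12 (meters, ~3 rounded)
print(round(wavelength(220.0), 2))  # → 1.56 (meters, ~1.5 rounded)
print(round(delay_s * 1000, 2))     # → 4.37 (milliseconds)
```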
If you’re still reading, let’s jump right in.
By the time our sound wave reaches the second microphone, our 110 Hz fundamental has completed roughly half of its cycle, while our 220 Hz harmonic has completed a full cycle.
Assuming we did a good job of matching the levels of both mics, when we sum them together we’ll completely lose our fundamental frequency, while our 220 Hz harmonic will be twice as loud.
In this scenario, flipping the polarity would bring back our fundamental but make the 220 Hz harmonic disappear instead.
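We can simulate this two-mic scenario directly. The sketch below models the distant mic as a copy of the close mic delayed by exactly half a 110 Hz cycle, then measures how loud each frequency is in the summed and polarity-flipped mixes (the `level` helper is a simple single-frequency DFT projection, introduced here for illustration):

```python
import math

SR = 48_000            # sample rate (an arbitrary, common choice)
DELAY_S = 0.5 / 110.0  # the distant mic's delay: half a 110 Hz cycle

def tone(freq_hz, delay_s, n):
    """Sample n of a sine wave arriving delay_s seconds late."""
    return math.sin(2 * math.pi * freq_hz * (n / SR - delay_s))

N = SR // 10  # 100 ms: an integer number of cycles of both tones
close = [tone(110, 0, n) + tone(220, 0, n) for n in range(N)]
far = [tone(110, DELAY_S, n) + tone(220, DELAY_S, n) for n in range(N)]

def level(signal, freq_hz):
    """Amplitude of one frequency component (a single DFT-style projection)."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / SR) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / SR) for n, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

summed = [c + f for c, f in zip(close, far)]   # both mics at natural polarity
flipped = [c - f for c, f in zip(close, far)]  # far mic polarity-inverted

# Summing cancels the 110 Hz fundamental and doubles the 220 Hz harmonic;
# flipping the far mic's polarity does the exact opposite.
print(round(level(summed, 110), 3), round(level(summed, 220), 3))    # → 0.0 2.0
print(round(level(flipped, 110), 3), round(level(flipped, 220), 3))  # → 2.0 0.0
```

Either way, one of our two frequencies is gone: the polarity switch only trades one cancellation for another.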
Maybe we didn’t place our distant mic in a good spot? Actually, any distance we choose will boost some frequencies and attenuate others in a repeating pattern. This destructive and constructive interference is known as comb filtering.
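The comb-filter pattern can be computed directly from the delay alone: two equal-level copies of a signal, one delayed by τ seconds, combine with a gain of |1 + e^(−j2πfτ)| at each frequency f. A sketch using our example’s half-cycle spacing:

```python
import cmath
import math

DELAY_S = 0.5 / 110.0  # half a 110 Hz cycle, as in the two-mic example

def comb_gain_db(freq_hz):
    """Level of two equal mics summed (one delayed) relative to one mic alone."""
    h = 1 + cmath.exp(-2j * math.pi * freq_hz * DELAY_S)
    return 20 * math.log10(abs(h)) if abs(h) > 1e-12 else float("-inf")

# Peaks (+6 dB) every 220 Hz, with deep nulls in between: the comb's "teeth".
for f in (110, 220, 330, 440):
    print(f, "Hz:", round(comb_gain_db(f), 2), "dB")
```

Plotting this gain across the whole spectrum produces the characteristic comb shape the effect is named after: evenly spaced notches from the lowest null upward, at every distance you might choose.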
Beyond frequency cancellations, another side effect of the delay between the mics is transient smearing.
It’s like a film projector whose colors are misaligned. It could be a nice effect when one's going for it creatively, but one you would rather avoid if you’d like to experience the true colors and depth of the original film.
Therefore, the best way to truly fix this issue is to compensate for the delay between the microphones and align them in time. This can be done either by measuring the delay between the mics and manually applying a correction, or by using a plugin such as Auto-Align®, which automatically detects and compensates for the delay and polarity between the mics with sample accuracy.
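Auto-Align®’s detection method isn’t described here, but as a hypothetical illustration of the measuring step, the delay between two mics can be estimated by finding the lag at which their cross-correlation peaks (all names and values below are invented for the sketch):

```python
import random

SR = 48_000  # sample rate (arbitrary, common choice)

def estimate_delay(close, far, max_lag):
    """Hypothetical sketch: the lag (in samples) where cross-correlation peaks."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        corr = sum(c * f for c, f in zip(close, far[lag:]))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Simulate the distant mic as a 210-sample (~4.4 ms at 48 kHz) delayed copy
# of the close mic, using seeded noise as a stand-in for program material.
rng = random.Random(0)
close = [rng.uniform(-1, 1) for _ in range(2_000)]
far = [0.0] * 210 + close

lag = estimate_delay(close, far, max_lag=300)
print(lag)                       # → 210
print(round(lag / SR * 1000, 1)) # → 4.4 (milliseconds)
```

Once the lag is known, the correction is simply shifting the close mic later (or the far mic earlier) by that many samples so the two line up.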
However, on a film set, where the distance between the boom microphone and an actor’s lav mic is ever-changing, a significantly more complex solution is required: one capable of continuously measuring and adaptively applying a sample-accurate delay correction, compressing and expanding time transparently. Enter Auto-Align® Post.
But the comb-filter effect is not the end of the story just yet.
A phase shift can also be described as a positive or negative delay of a given frequency. A positive delay of a quarter of a frequency’s cycle, for example, can be expressed as a 90º phase shift.
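The conversion between delay and phase is a single multiplication: a delay of d seconds shifts a frequency f by 360 × f × d degrees. A quick check, which also shows that one fixed delay means a different phase shift at every frequency:

```python
def phase_shift_deg(freq_hz, delay_s):
    """Phase shift (in degrees) a fixed time delay produces at one frequency."""
    return 360.0 * freq_hz * delay_s

quarter_cycle = 1 / (4 * 110.0)  # a quarter of a 110 Hz cycle, ~2.27 ms

print(round(phase_shift_deg(110.0, quarter_cycle), 6))  # → 90.0
# The same delay shifts a frequency an octave up by twice as much:
print(round(phase_shift_deg(220.0, quarter_cycle), 6))  # → 180.0
```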
Why am I telling you this? Because electronic filters, such as a high-pass filter on your mic or preamp, inherently cause a phase shift, and to a varying degree across the frequency range.
If only one of our mics had its HPF engaged, or if the two filters were set to different frequencies or built to different designs, we’d get a mismatch in the mics’ spectral phase correlation even though we time-aligned them.
Bring on the Phase Rotator.
A phase rotator is a circuit built around an all-pass filter, meaning it doesn’t change the levels of the frequencies but rather the phase relationships between them. It’s like an EQ, albeit without the frequency-balancing effect.
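Actual phase-rotator circuits vary by design, but the defining all-pass property is easy to demonstrate. The sketch below evaluates the frequency response of a first-order digital all-pass filter (the coefficient value and sample rate are arbitrary choices for illustration): its gain is exactly 1.0 at every frequency, while its phase shift changes across the spectrum.

```python
import cmath
import math

A = 0.5      # all-pass coefficient (arbitrary; |A| < 1 keeps the filter stable)
SR = 48_000  # sample rate

def allpass_response(freq_hz):
    """Gain and phase of a first-order all-pass, H(z) = (-A + z^-1) / (1 - A*z^-1)."""
    z_inv = cmath.exp(-2j * math.pi * freq_hz / SR)
    h = (-A + z_inv) / (1 - A * z_inv)
    return abs(h), math.degrees(cmath.phase(h))

# Gain stays at 1.000 (0 dB) everywhere; only the phase varies with frequency.
for f in (100, 1_000, 10_000):
    gain, phase = allpass_response(f)
    print(f"{f} Hz: gain {gain:.3f}, phase {phase:.1f}º")
```

Sweeping or combining such stages lets a phase rotator bend the phase relationship between frequencies into alignment without touching the frequency balance, which is exactly the EQ-without-EQ behavior described above.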
Figure 3 shows the different phasing of the frequencies of a recording, while Figure 4 shows the corrected phasing.
To Sum it Up
Using all three tools together as necessary (polarity matching, time alignment, and phase rotation), we can achieve the most powerful and vivid reproduction of our recording. To take the guesswork out of time and phase alignment, we built Auto-Align® Post.