Canadian Musician - November/December 2018 | Page 31

DIGITAL MUSIC

By FWLR

One would never assume that a bunker in an Atlantic Canadian forest would house an incredible studio operated by Canadian musical prodigy Nick Fowler. Nick is known primarily for his TV compositions, having written the themes for Daily Planet and The Social. His solo project, FWLR, continues to grow to fan and blog acclaim. With a focus on writing electronic music that lives outside of the box, FWLR is carving his own path through the global musical landscape. Hear music at Soundcloud.com/FWLRmusic and FWLRmusic.bandcamp.com, and hit him up on any social media platform @FWLRmusic.

Mixing in Three Dimensions

Mixing audio. It can seem like black magic mastered only by the most highly skilled professionals, but in reality, it is an art form that anyone can learn. In this article, I'd like to share an analogy that I have developed over the years that helps me make informed decisions and achieve the desired mix.

When I think of a song, I picture it as a three-dimensional space. The x-axis (left to right) is panning and stereo width, the y-axis (up and down) is frequency range, and the z-axis (front to back) is volume. Most troubles in a mix occur when too many elements sit in the same location of this 3D landscape, resulting in a lack of clarity and a cluttered mix. Ideally, we want that landscape filled up evenly with elements that have their own place and complement each other. "Great! Now, how do we actually do that, Nick?" I'm glad you asked.

Stereo Field

Let's start with the x-axis and talk a bit about the stereo field. Our brains use the phase relation and amplitude of the signals arriving at our two ears to determine the direction and width of a sound. If both ears hear the same sound with the same phase and amplitude, we perceive it as centred directly in front of us. If the amplitude is higher in one ear, we perceive the sound as coming from that direction. OK, enough of the nerdy physics talk. Let's discuss how this applies to music production.
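That amplitude cue is exactly what a pan pot in your DAW implements. As a minimal sketch (the function name and mapping are my own, not from the article), here is the common equal-power pan law, which keeps perceived loudness constant as a sound moves across the stage:

```python
import math

def equal_power_pan(sample, pan):
    """Place a mono sample in the stereo field.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    The sine/cosine (equal-power) law keeps left^2 + right^2
    constant, so the sound doesn't dip in loudness mid-pan.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right
```

At centre (pan = 0.0), each channel gets about 0.707 of the signal rather than 0.5, which is the "equal power" part; a simple linear crossfade would sound quieter in the middle.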
When I am mixing a song, I usually place the supporting elements like keys, pads, chords, supersaws, percussion, etc. wide in the mix. This creates a nice harmonic bed for the main melodic elements to live in. But I also try to marry these wide chords/pads with a centered mid-bass to anchor the sounds – something that doesn't take up much real estate in the frequency spectrum (y-axis) but ties the super-wide unison/reverb'd chords to the mono sub bass in the very bottom end to form a cohesive bond. Now that the width of the track is established, the main melody/hook (vocals, synth, cat sample, etc.) has a nice spot on stage, centered and ready for the listener to focus on.

Although your chords/guitars/pads may have a frequency range of, say, 100-20,000 Hz, which fills up almost all of the vertical space on our stage, we have pushed them to the sides, leaving the centre open. This effect can be heard most notably in rock music, where the guitar tracks are almost always recorded twice and panned to either side. Since the guitars and vocals occupy a very similar frequency range, space is created by moving the guitars to the sides and allowing the vocals to sit in the middle. (We won't get into how to make things more or less stereo here, since that's an entire topic in itself.)

One of the most common problems I hear when people send me tracks is that they have stereoized almost every element in the track, making them super wide. It's not uncommon for me to hear only the kick and snare in the centre, while every other element is panned super wide (usually via some phasey stereo enhancer plug-in). I think it's because having things wide is exciting and satisfying, but without any elements in the centre, the landscape has a big empty spot right in the most important place. It's all about contrast, and if everything is wide, then nothing is wide. You wouldn't watch a movie without a lead character, right?
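One way to sanity-check whether a track has this "everything is wide" problem is to compare the energy in the mid (L+R) and side (L-R) signals. This little diagnostic is my own illustrative sketch, not something from the article:

```python
def side_energy_ratio(left, right):
    """Fraction of a stereo signal's energy in the side (L-R) channel.

    left, right: equal-length lists of samples per channel.
    0.0 = pure mono (nothing wide), around 0.5 = fully
    decorrelated channels, 1.0 = completely out of phase.
    A ratio creeping toward 0.5 and beyond suggests almost
    everything in the mix has been stereoized.
    """
    mid_energy = sum(((l + r) / 2.0) ** 2 for l, r in zip(left, right))
    side_energy = sum(((l - r) / 2.0) ** 2 for l, r in zip(left, right))
    total = mid_energy + side_energy
    return side_energy / total if total else 0.0
```

A healthy mix usually keeps a solid mid component (kick, snare, bass, lead vocal) even when the pads and chords are wide, so the ratio stays well below the extremes.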
Frequencies

Next, let's talk about the y-axis. Sometimes finding places for elements on the y-axis is more of a production/sound design decision than a mixing one, but there are still some things we can do at the mix stage to help designate a space for each sound. One very useful practice is high-pass filtering everything in a way that gets rid of any unwanted bottom end. For example, if your snare drum's fundamental is at 200 Hz, then just get rid of all the stuff underneath it. If your distorted lead has some weird rumbly frequencies a couple of octaves below the notes that are being played, just lop them off.

I will often use dips in EQs to carve frequencies out of one sound to allow those same frequencies to shine in another element. So, if the vocals are in the centre of the stereo field and live mostly in the 200-5,000 Hz range, I will dip those frequencies in any other element that is also in the centre of the track. This may seem like common sense to anyone who has mixed music before, but picturing the mix as a soundstage really brings the practice into context.

Volume

Last comes the z-axis, which is volume. This is used to place elements forwards or backwards on the stage. Although getting the elements in a track to appropriate volumes is very important, volume has the least obvious effect on cluttering a mix, especially since modern music is so compressed. Still, the supporting elements should be pushed to the back to allow the focal elements to take the spotlight.

Hopefully you'll now be inspired to consider your own mixes in this way. Is your track 100 per cent wide? Are a lot of the elements stacked up in the lower midrange? Did you forget to cast Brad Pitt for the centre role? Dig in and try to make full use of all three dimensions to give the listener an immersive experience.
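To close with one concrete tool: the "lop off everything under the fundamental" move from the Frequencies section is just a high-pass filter. Here is a sketch using the widely published RBJ audio-EQ-cookbook biquad recipe; the 200 Hz cutoff and Q value are illustrative choices, not prescriptions from the article:

```python
import math

def highpass_coeffs(cutoff_hz, sample_rate, q=0.7071):
    """Normalized biquad high-pass coefficients (RBJ cookbook)."""
    w0 = 2.0 * math.pi * cutoff_hz / sample_rate
    alpha = math.sin(w0) / (2.0 * q)
    cosw0 = math.cos(w0)
    b0 = (1.0 + cosw0) / 2.0
    b1 = -(1.0 + cosw0)
    b2 = (1.0 + cosw0) / 2.0
    a0 = 1.0 + alpha
    a1 = -2.0 * cosw0
    a2 = 1.0 - alpha
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def biquad(samples, coeffs):
    """Run samples through the filter (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: clean up under a snare whose fundamental sits at 200 Hz.
coeffs = highpass_coeffs(200.0, 48000.0)
```

Anything well below 200 Hz (rumble, mud) is attenuated toward zero, while content well above the cutoff passes through essentially untouched, exactly the "designate a space for each sound" idea on the y-axis.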