Music Production

DAW 101

A DAW, or Digital Audio Workstation, is a computer application that records, edits, and processes sound for playback. DAWs often come with a host of included plugins, otherwise known as effects or FX.


DAWs synchronize audio and synths to a clock so they can manipulate data in time. Plugins shape or generate sounds, and MIDI tells instruments what to play and when.


Plugins

Plugins go by a few different names: VSTs (Virtual Studio Technology) on Windows, AU (Audio Units) on Mac, AAX for Pro Tools, and so on.

Plugins can do a variety of things, from synthesizing sounds to changing volume.


Order of FX

The order of effects can help or harm a producer or musician depending on where each effect sits in the chain.


Some common rules are:

  1. stereo processing comes first for sounds that require mono compatibility
  2. compress sounds before saturating
  3. EQ-cut before compression and EQ-sculpt after your processing is set
  4. use delay or reverb last for a clean tail, use delay or reverb before compression to create near infinite "blow outs"
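The rules above can be sketched as an ordered chain of processors. This is a conceptual toy, not real plugin code: each function stands in for a plugin, and reordering the list changes the result, which is the whole point.

```python
# Toy FX chain: each function stands in for a plugin. All names and
# numbers here are illustrative, not real plugin parameters.

def eq_cut(x):
    # rule 3: cut problem content first (toy: clamp runaway lows)
    return [max(s, -0.5) for s in x]

def compress(x, threshold=0.6, ratio=4.0):
    # rule 2: compress before saturating (simple static ratio)
    out = []
    for s in x:
        if abs(s) > threshold:
            over = abs(s) - threshold
            s = (threshold + over / ratio) * (1 if s >= 0 else -1)
        out.append(s)
    return out

def saturate(x):
    # gentle cubic soft clip after compression
    return [max(-1.0, min(1.0, s - (s ** 3) / 3)) for s in x]

def reverb_tail(x):
    # rule 4: time FX last for a clean tail (toy echo of the last samples)
    return x + [s * 0.3 for s in x[-4:]]

chain = [eq_cut, compress, saturate, reverb_tail]

signal = [0.0, 0.9, -0.9, 0.4, 0.2, 0.0]
for fx in chain:
    signal = fx(signal)
```

Swap `reverb_tail` before `compress` and the compressor starts chewing on the tail, which is exactly the "blow out" behavior rule 4 warns about.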

Compression will make sounds loud up to a ceiling of 0 db. After sounds reach that ceiling, compression will turn them down. If that does not highlight the value of placement, I don't know what will.

A loudness-first approach says we always compress first to crush the waveform and accentuate the fragile highs in a non-destructive manner. It is still possible to achieve loudness with distortion only, but doing so may cause ear fatigue in the listener. When music hurts to listen to, you lose listeners, then money, then fans, and then oh no there go my royalties lol.

That raises the question of where filtering or phase sweeps should go.


Sidechain

Sidechaining creates room for drums in our mixdown. It can be accomplished using any of the methods below, but each has its own use case.

Compression Sidechain
  1. Route your targeted output signal into the compressor.
  2. Set your attack, release, and ratio.
  3. Decide if your compression sidechain will duck the receiver in full.

Compression sidechaining is not a precise method, as it computes the incoming signal in real time. No matter how fast your hardware is, there can be anywhere from a 10 ms to a 100 ms delay in the form of bloom. You may learn more about these parameters under Compression.

The best use case of compression sidechaining is to duck one instrument using signal from a second instrument. Some examples: a broadband sidechain with a pluck ducking a pad or sustain; ducking the top of your kick with signal from your snare to increase headroom on a 4/4 pattern; or making vocals pop in a mix that is overwhelmed by midrange.
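The steps above can be sketched in a few lines. This is a minimal toy, assuming mono float samples in -1..1; the envelope follower's attack/release lag is the real-time delay (bloom) described above. The function name and coefficients are illustrative.

```python
# Toy compression sidechain: duck `target` by the envelope of `trigger`.
# Assumes mono float samples in -1..1; names and values are illustrative.

def sidechain_duck(target, trigger, depth=0.8, attack=0.5, release=0.05):
    env = 0.0
    out = []
    for t, k in zip(target, trigger):
        level = abs(k)
        # envelope follower: fast attack, slow release = the "bloom"
        coeff = attack if level > env else release
        env += coeff * (level - env)
        out.append(t * (1.0 - depth * env))
    return out

kick = [1.0, 0.8, 0.0, 0.0, 0.0, 0.0]   # sidechain input signal
pad  = [0.5] * 6                        # receiver being ducked

ducked = sidechain_duck(pad, kick)
```

Note how the pad only recovers gradually after the kick stops: that slow release is why real-time sidechaining is never perfectly tight.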


In-Line Automation
  1. Right click the volume fader of your sidechain group.
  2. Match your sidechain shape to your kick drum. Flatten a render of your processed kick to prevent bow-tying in your sub.
  3. Copy this sidechain shape and paste it anywhere your kick plays or drums and bass are playing at the same time.

Inline automation is the act of scripting your sidechain. Because the DAW is reading code instead of reacting to an input, you can get a snappy attack on your sidechain, increasing the pop, or transient, of your drums.

Additionally, you may lead your sidechain in specific parts of your song. By easing your sidechain in before the drum hits, you can get a louder bang with less volume. This is best used on every other snare or before major transitions.


Limiter Overload
  1. Set a limiter on your master.
  2. Boost the signal of your desired sound until you achieve the desired effect.

Because limiting is brickwall compression, it still has to process incoming signal. If your signal is snappy enough and loud enough, it will force your limiter to duck other sounds to clear headroom for your loudest source. This is best used with transient shaping and on drums.
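The headroom-clearing effect can be shown with a toy per-sample limiter on a two-source bus. This is a sketch, not a real limiter (no lookahead or smoothing); the function name and values are illustrative.

```python
# Toy limiter overload: sum two sources on a "master"; when the sum
# exceeds the ceiling, both are scaled down, so the quiet source gets
# ducked "for free" by the loud one. Illustrative only.

def limiter_overload(loud, quiet, ceiling=1.0):
    out_loud, out_quiet = [], []
    for a, b in zip(loud, quiet):
        total = a + b
        g = ceiling / abs(total) if abs(total) > ceiling else 1.0
        out_loud.append(a * g)
        out_quiet.append(b * g)
    return out_loud, out_quiet

drums = [0.0, 1.2, 0.0]   # snappy, loud transient
pad   = [0.4, 0.4, 0.4]   # steady quiet source

d, p = limiter_overload(drums, pad)
```

On the middle sample the pad is pushed down even though it never changed: that is the drum forcing the limiter to clear headroom.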


Recording

Please refer to:

  1. Phase for microphone etiquette.
  2. Noise for elaboration on noise floor/ceiling/ratio, etc.

Best Practice
  1. Performing a piece correctly is more about presenting a product worthy of a listener's attention than it is about pride. Respect your fans and they will respect you.
  2. Once a song is published, there are no do-overs. Tune your drums. Change your strings. Make your vocal delivery personal.
  3. Practice your own parts to a metronome to avoid frustration in the booth.
  4. Expensive cables and microphones do not clean the dead skin and oils from your instrument. Buy fresh strings for the clearest audio quality.
  5. Is your signal red? It is clipping.
    Everything else in the chain will carry your clipping noise up to the master output until it is present for our ears to bask in until the very end of time...

    Turn it down.

Hardware Level

Gain staging for a microphone or line-in is all about balancing silence. For most sources, best practice is to set one's input gain to -10 decibels and to record in mono. -10db is loud enough to provide natural gain, but quiet enough to prevent baked-in distortion. Any inputs quieter than -30db allow artifacts from hardware and software to bleed into the recording. Those artifacts will make compression a nightmare to manage.
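Those db figures are ratios against full scale, not absolute volumes. A quick sketch of the conversion, assuming dBFS relative to a full-scale amplitude of 1.0:

```python
import math

# Convert between decibels (relative to full scale 1.0) and linear amplitude.
def db_to_linear(db):
    return 10 ** (db / 20)

def linear_to_db(amplitude):
    return 20 * math.log10(amplitude)

# -10 db leaves comfortable headroom; -30 db sits near the noise floor.
headroom_amp = db_to_linear(-10)   # roughly 0.316 of full scale
quiet_amp    = db_to_linear(-30)   # roughly 0.032 of full scale
```

So a -10db input still uses about a third of the converter's range, while a -30db input uses barely 3% of it, which is why the noise floor starts to matter.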

Software Level

The recording is now in the DAW. All performance flaws and imperfections birth the soul of a recording.


Vocals

Processing Vocals

A rule of thumb when processing vocals is to focus, rather than cut. The human voice has a built in bandpass filter called "lips". Lips are a device our ancestors have used since the dawn of time to captivate and enthrall.

Now that we understand our instrument, the first steps should be a de-esser and a gate. If this is your first introduction to raw audio, the gate will kinda stick its hand on the forehead of background noise and tell it to keep swinging, because the gate will not let it into the mix. That simple. Your de-esser targets the frequencies where human voices generally produce the most unwanted noise. It is one of those things you don't know is on until you take it off. Invest in a good one.


In most modern music stylings, a reliable multiband compressor will do so much work. Your vocalist's microphone provides its own EQ. Multiband compression will bring out the $40,005 audio quality at no added cost. Try to keep it under 24% and add no more than you need. Even 9% can do it if the vox fit the mic.


After compression, you have the question of stereo processing.

don't.
problem solved.

Your lead vocals are now in mono and ready to receive some polish. You should add your first EQ before or after the multiband compressor. Feel it out. When using compressors, our EQ objective is to cut around the second octave to remove any background frequencies, and to use as few bell curves as possible to nudge any heady bass tones or flat highs in our favor.

You may now look into low-mixed reverb or my personal preference: delay. Reverb can get cakey like wet flour or drying paint. If you have lots of staccato notes, pick delay. It is much easier to control and provides the same richness as reverb. Read my delay topic to learn why.

Once you have chosen your preferred room sound via delay or reverb, the last step is gain staging. I use saturators because they provide more volume with less clipping. If you're uncomfortable with putting a distortion on your vocals, you may alternatively compress the vox behind your time FX and throw a soft clipper at the end.
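The soft-clipper option can be sketched as a tanh-style waveshaper. This is a toy under assumed -1..1 samples, not any particular plugin; `drive` is an invented parameter name.

```python
import math

# Toy soft clipper: a tanh curve rounds off peaks instead of chopping
# them flat, giving extra loudness with less harsh clipping.
def soft_clip(x, drive=2.0):
    # divide by tanh(drive) so a full-scale input still maps to 1.0
    return [math.tanh(drive * s) / math.tanh(drive) for s in x]

vox = [0.2, 0.6, 0.95]
clipped = soft_clip(vox)
```

Quiet samples come out louder and loud samples are squeezed toward the ceiling, which is the "more volume with less clipping" trade described above.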


Mixing Vocals

I sidechain my vocals into my mixdowns. Whether the receiver is a guitar, a pluck, percs, kick, or snare, we use our vocals as an input signal to make room.

In addition to creating room for the vocals, you want your stereo mix to come from comping and harmony. Take Billie Eilish for instance. Her harmonies will be hard panned like guitars while her lead vocal track will be hard center. The lead vocals...[to be continued]


Guitar

Please refer to:

  1. Distortion for achieving all guitar tones.
  2. Phase for microphone etiquette.
Analog Method

Recording a clean signal while monitoring through a real amp and speaker is a must. The trick is to mirror one's guitar input to a dedicated amp-out.

Digital Method

Bass down as volume increases


Bass

Bass is always recorded in mono. Standing bass does not require much besides a room with acoustic treatment and hard surfaces, with optional wall and ceiling diffusion. Bass guitar is more complicated and will be the only bass mentioned from here forward.


Bass guitar has become standardized by a direct input and a Fender Precision Bass, or P-Bass. Direct in is optimal as it removes the potential phase canceling from a recording source. Frequency splitting is the best way to achieve clarity and texture.

Frequency Splitting

Clone a recording of the bass track without any modifications. One copy is your dedicated mono sub. The other is your bass top. The top may be mic'd up to a live speaker cabinet, processed digitally in your guitar group, or treated strictly as a clean digital track for ease of EQing.

Filter the high end out of the sub channel at octave 3 with a slope of -6 db per octave. Optionally raise your cutoff frequency 7 semitones above the root to allow wiggle room on melody-forward songs. This layer provides both sub and bass presence. The only assistance the bass top will provide is in the midrange. Highpass filters will be applied on the master. Any cutting of the low end at this stage will negatively affect the bass response on premium listening setups such as cars or home entertainment centers. Only use a shelf to reduce bass distortion at the master level.


For the bass top, filter the low end out beginning at the 4th octave with a slope of -12 db per octave. I recommend adding lowpass filters only on the pre-EQ, to reduce mechanical noise. This top can now be treated like a guitar or like a synth. Place white noise generation behind the EQ to create a humming fuzz. Use distortion to add midrange. Do stereo processing for textural enhancements and improved dynamic presence in stereo settings. Add flangers to make the bass melt. The sky is the limit.
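The sub/top split can be sketched with a first-order (one-pole) lowpass, which rolls off at 6 db per octave, matching the sub slope above. This is a conceptual toy: `alpha` is a smoothing coefficient, not a musical cutoff, and a real split would use proper crossover filters.

```python
# Toy frequency split: a one-pole lowpass keeps the lows (the mono sub),
# and subtracting them from the input leaves the bass top.
# `alpha` is an illustrative smoothing coefficient, not a real cutoff.

def one_pole_lowpass(x, alpha=0.2):
    y, out = 0.0, []
    for s in x:
        y += alpha * (s - y)   # smooths the signal: keeps the lows
        out.append(y)
    return out

def split(x, alpha=0.2):
    lows = one_pole_lowpass(x, alpha)           # dedicated mono sub
    tops = [s - l for s, l in zip(x, lows)]     # bass top = input minus lows
    return lows, tops

signal = [0.0, 1.0, -1.0, 1.0, -1.0, 1.0]       # fast wiggle = mostly highs
sub, top = split(signal)
```

Because the top is literally the input minus the sub, the two layers sum back to the original, so you can process each half independently without losing anything.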


Wind & Brass


Drums

