Ypliet Denour

Wave Shapes


Sine Waves
have a single fundamental and no harmonics. On a 2D plane, they trace a smooth, repeating curve (a sinusoid); in the real world, a sine is the projection of a point moving around a perfect circle. Wave shapes may be tested physically using a speaker driver, a metal plate, and sand. For those on a budget, it is more practical to load a stereo imager and set two voices in unison with no random phase and low detune. You may utilize a sine in the same way you may use a phase.
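
A minimal Python/numpy sketch of the circle idea (sample rate and frequency are just for illustration): plotting the two projections against each other draws the circle, while plotting either one against time draws the sine.

    import numpy as np

    sr = 44100                      # sample rate in Hz (assumed)
    f = 220.0                       # fundamental frequency (assumed)
    t = np.arange(sr) / sr          # one second of time
    angle = 2 * np.pi * f * t       # position around the circle, in radians
    x = np.cos(angle)               # horizontal projection of the circle
    y = np.sin(angle)               # vertical projection: this is the sine you hear
    # x against y draws the circle; t against y draws the sinusoid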

Square Waves
only move up and down on a 2D plane, but they take the shape of a square when played in the real world. Their harmonic spectrum extends almost infinitely upward (odd harmonics that get quieter as they rise), which is best tamed with filters that cut the high end.
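
A minimal Python/numpy sketch of where that spectrum comes from (the harmonic count is arbitrary): summing odd harmonics at 1/n amplitude builds a square, and stopping the sum early behaves like the high-cut filter mentioned above.

    import numpy as np

    sr, f = 44100, 110.0
    t = np.arange(sr) / sr
    square = np.zeros_like(t)
    for n in range(1, 40, 2):                         # odd harmonics only: 1, 3, 5, ...
        square += np.sin(2 * np.pi * n * f * t) / n   # each one n times quieter
    # fewer harmonics = rounder corners, more harmonics = sharper square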

Saw Waves
lose high end as they climb. In other words, a saw wave's harmonics decrease in amplitude in proportion to their number: the nth harmonic is 1/n as loud, a slope of roughly -6 dB per octave. This rounding of the top end behaves much like a gentle low pass filter.
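
The arithmetic behind that slope, as a quick Python/numpy check: halving a harmonic's amplitude every time its number doubles is the same 6 dB drop per octave you would expect from a gentle low pass.

    import numpy as np

    # the nth saw harmonic has amplitude 1/n; print the level of harmonics one octave apart
    for n in (1, 2, 4, 8, 16):
        print(n, round(20 * np.log10(1.0 / n), 2))   # 0.0, -6.02, -12.04, -18.06, -24.08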

Triangle Waves
add subtle harmonic content due to the linear ramps of the waveform. You receive more highs than you would from a sine, but fewer than from a saw. Triangles were used in soundcards with low sample rates to mimic a sine, but they have an equally important place in analog history.

Pulse Wave
is a form of wave shape manipulation: a square whose duty cycle is pushed away from 50%, so the "on" part of each cycle becomes shorter than the "off" part. There are few sounds like a pulse wave, but it should be saved for deliberate situations. Where you gain high end, you will lose mids.
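
A small Python/numpy sketch of a pulse, with the duty cycle as the assumed control: at 0.5 it is a plain square, and narrowing it shifts energy from the mids into the highs.

    import numpy as np

    sr, f, duty = 44100, 110.0, 0.25        # a duty of 0.5 gives a plain square
    t = np.arange(sr) / sr
    phase = (f * t) % 1.0                   # position inside each cycle, 0..1
    pulse = np.where(phase < duty, 1.0, -1.0)
    # narrowing the duty cycle thins out the mids and pushes energy into the highs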



Phase


means many things. As an all-purpose definition, phase is position. Phase is your position in your stereo field, spectral field, and in time.

If your oscillator has a static phase, you may set it to any of 360 degrees from its starting position. Depending on where your waveform starts relative to everything else playing, it may increase or decrease the amplitude of your output signal.

Good phase relies heavily on you not canceling out other waveforms. Phase cancellation occurs whenever waveforms in your mix overlap while inverted relative to one another.
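
A minimal Python/numpy sketch of the two extremes (frequencies chosen only for illustration): the same sine added in phase doubles, and added inverted it vanishes.

    import numpy as np

    sr, f = 44100, 440.0
    t = np.arange(sr) / sr
    a = np.sin(2 * np.pi * f * t)
    b_inphase  = np.sin(2 * np.pi * f * t)            # 0 degrees: adds up
    b_inverted = np.sin(2 * np.pi * f * t + np.pi)    # 180 degrees: cancels
    print(np.max(np.abs(a + b_inphase)))    # ~2.0, louder
    print(np.max(np.abs(a + b_inverted)))   # ~0.0, silence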

Stereo phasing is a common issue because a pair of speakers will cancel each other when heard off axis or with a delay between them. Because a stereo pair of instruments plays at the same frequency, it is normal for the two signals to land exactly out of phase and reduce the amplitude of the instrument. To prevent this, we use a number of techniques. Unison on synths creates a carousel of waves starting at different points in a cycle. For analog, we record multiple unique takes of instruments or vocals and pan them hard left and right. More common techniques are desynced delay (the Haas effect), reverb, and manual panning.

The main way to prevent stereo phasing in a live setting is by controlling your mono compatibility. If your mix sounds as good in mono as it does in stereo, it will take a lot of effort to ruin your music in a live setting.

Lastly, a phase is a sine: all waveforms consist of innumerable sines. Equalizers use additive and subtractive phase shifts to change the spectral response of your signal. The same principle drives phasers, flangers, and other effects ranging from static filter sweeps to a plain bell curve. Once you understand this, audio processing becomes simpler.
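
Circling back to mono compatibility, here is a rough Python/numpy sketch (the function name is made up for illustration) that compares a stereo mix with its mono fold-down; a strongly negative result means cancellation in mono.

    import numpy as np

    def mono_compatibility(left, right):
        # compare the stereo mix's level with its mono fold-down, in dB
        stereo_rms = np.sqrt(np.mean((left ** 2 + right ** 2) / 2))
        mono = (left + right) / 2
        mono_rms = np.sqrt(np.mean(mono ** 2))
        return 20 * np.log10(mono_rms / stereo_rms)   # large negative = cancellation in mono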

There is a good chance that any sound you're looking for can be summed up as resonance: resonance in the bass, resonance on the top end, and so on. By biasing your resonant frequencies, you can control a lot of the sound with little effort. This is easiest in squares, which are harmonically rich. It can also be done with white noise, which carries energy at every frequency but has no specific pitch.
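
A rough Python sketch of biasing a resonant frequency in white noise, using numpy and scipy's lfilter with a simple two-pole resonator (the frequency and pole radius are illustrative): the pitchless noise comes out humming around 220 Hz.

    import numpy as np
    from scipy.signal import lfilter

    sr = 44100
    noise = np.random.randn(sr)                 # white noise: energy everywhere, no pitch
    f0, r = 220.0, 0.995                        # resonant frequency and pole radius (sharpness)
    w0 = 2 * np.pi * f0 / sr
    a = [1.0, -2 * r * np.cos(w0), r * r]       # two-pole resonator coefficients
    ringing = lfilter([1.0], a, noise)          # the noise now hums around 220 Hz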

Alternative uses for phases

Detuning two oscillators "centimeters" out of tune with each other creates a low-frequency oscillation in amplitude (beating). This can be used on two sub basses to create a rolling vibration or to replicate the classic guitar bend squeal.
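
A small Python/numpy sketch of that rolling vibration, with the detune amount chosen only for illustration: two subs half a hertz apart rise and fall together once every two seconds.

    import numpy as np

    sr = 44100
    t = np.arange(4 * sr) / sr                 # four seconds
    a = np.sin(2 * np.pi * 55.0 * t)           # sub bass
    b = np.sin(2 * np.pi * 55.5 * t)           # the same note, detuned "centimeters"
    roll = a + b                               # the sum beats at 0.5 Hz, a slow rolling vibration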

Increasing the resonance of filters (including flangers, chorus, and phasers) allows you to reharmonize or sweep existing harmonic content with a more aggressive tonality.



Harmonic Series


The Harmonic Series is made up of whole-number multiples of the fundamental (or, seen as wavelengths, fractions of it). As a rule of thumb, you may add +1 note for every octave from 0 up.
Octave 0 (sub) takes up so much amplitude that it only has room for 1 note. This is why sub bass is monophonic in European-derived music. If you do not stick to this guideline, your sub will oscillate as it phases with accompanying notes.

In octave 1 (bass), it is in your best interest to stick to 5ths (7 semitones apart): E and B, D and A, and so on. The circle of 5ths creates something similar to a double helix. An easier way to visualize this is to call back to our +1 rule: octave 1 now allows two notes to share the space, 1/2 each.
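
A quick check of that spacing in Python/numpy (the fundamental is an assumed A1): harmonics are whole-number multiples of the fundamental, and the interval between the 2nd and 3rd harmonics, a 3:2 ratio, is almost exactly the 7 semitones mentioned above.

    import numpy as np

    f0 = 55.0                              # A1, an assumed fundamental
    harmonics = f0 * np.arange(1, 5)       # 55, 110, 165, 220 Hz
    print(12 * np.log2(3 / 2))             # ~7.02 semitones: the perfect fifth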

Octave 2 (lower mid) now allows notes 1/3rd of the space each without clashing. Just as we had to keep the 12-tone scale to steps of 7 semitones, 1/3rd would allow us to safely use - I will be fact checking this later

For octave 3 (mids), the +1 rule begins to dissolve. Notes spaced 1 semitone apart now work in the context of larger chords. You can layer up from the sub/0 octave using the aforementioned tools on real instruments or in orchestrated pieces with no issues. The key takeaway for the 3rd octave is that most prevalent chords take place here, around middle C or A4 (440 Hz). Your dominants, flats, augmented, relative, diminished, minor, major, etc. will sound right.

In regard to synthesis,
a harmonic series will tell you about overtones, organic treble bleed, where your strongest harmonic lies (if not the fundamental), the peaks and troughs in your (in)audible spectrum, and how best to use any waveform. I will never encourage you to use a saw wave because it sounds like a saw wave, but rather because saws have a built-in cutoff and make your mids pop with less effort. The same goes for using a sine wave: if you need more bass, a sine wave will be loudest. A sine can also suggest the dominant note of your harmonies. You may place a single-note sine behind any chord where you want to suggest a moody E in place of the prominent F# of a B major. You may also use a sine to shape the timbre of your singer by backing their lead melody.
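
If you want to find where your strongest harmonic lies in practice, a quick Python/numpy FFT sketch does the job (the function name is made up; it assumes a numpy float array).

    import numpy as np

    def strongest_harmonic(signal, sr):
        # return the frequency of the loudest partial in the signal
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), 1 / sr)
        return freqs[np.argmax(spectrum)]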



Distortion


Distortion and saturation
are much like rectangles and squares: all saturation is distortion, but not all distortion is saturation. The catch - not all distortion will make a sound louder.

Distortion specifically comes from pushing a signal past its peak level, a.k.a. clipping.


Harmonic distortion
In treble, distortion creates pink-to-white noise depending on the intensity of the high end. In bass, it creates humming and eventually the classic squared-out sound.

When you combine sub distortion with high end, it will make humming. With mids, it creates gargling and cracking. You can bias the sub distortion to introduce a certain warmth that is difficult to create through any other means.

Most distortion created by bass will create a grinding sound, because bass oscillates roughly twice as fast as sub (one octave up).

Distortion is the product of pushing signal gain into a waveshaping unit. Waveshaping is a product of Phase. If you keep your wave shape simple, it will "square out" your input signal and, in essence, compress it. The output signal will carry more amplitude over time, creating a louder output. When used sparingly, this is called saturation.
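
A minimal Python/numpy sketch of that idea, using tanh as the simple wave shape (the drive amount and function name are just for illustration): the shaped sine keeps roughly the same peak but spends more time near it, so its crest factor (peak over RMS) drops and it reads louder.

    import numpy as np

    def saturate(signal, drive=4.0):
        # push gain into a simple waveshaper; tanh flattens the peaks smoothly
        return np.tanh(drive * signal) / np.tanh(drive)

    sr = 44100
    t = np.arange(sr) / sr
    x = 0.8 * np.sin(2 * np.pi * 110.0 * t)
    y = saturate(x)
    crest = lambda s: np.max(np.abs(s)) / np.sqrt(np.mean(s ** 2))
    print(crest(x))   # ~1.41 for the clean sine
    print(crest(y))   # lower: the shaped wave is "squared out" and carries more level over time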

Saturation functions much closer to fader gain/output gain than it does to distortion. While distortion will always happen once a signal passes 0 dB, saturation catches peaking harmonics on the way up and turns would-be clipped signal into something more musical.


Clipping
is a poorly defined term, even though the act of clipping itself is not up to interpretation. In the context of live audio, clipping is objectively bad. Analog clipping rips speaker cones and overloads hardware, which causes heat damage.

In digital audio, clipping commonly happens before a sound reaches your master limiter. You can prevent this in the mixer by turning the output of your channels down, limiting in groups, and mixing quiet. As you reach for louder volumes, you will have to get more creative. Understand that if the master out doesn't clip, your DAW isn't clipping and your exported master.wav will not clip either.
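
A tiny Python/numpy sketch of that check, assuming float audio normalized to +/-1.0 (the function name is made up for illustration):

    import numpy as np

    def count_clipped(samples, ceiling=1.0):
        # count samples that sit at or beyond full scale (0 dBFS)
        return int(np.sum(np.abs(samples) >= ceiling))
    # if the master out never reaches the ceiling, the exported file cannot clip either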

If you limit your project, would-be clipping is converted into distortion from the limiter instead.

The confusion then comes from a misunderstanding of distortion. Your signal can go red for a number of reasons. If you didn't mean to go red and it sounds bad, the distortion is bad; no argument there. If the signal has gone red and it sounds good, keep it, or figure out how to recreate it intentionally with waveshaping and bias it with EQ.


In regard to streaming services,
headroom may be lost when gain staging is too aggressive. This is not a real issue, since the codec does not actually "turn down" your audio; playback is simply normalized toward the loudest average. If you listen to songs on SoundCloud and again on YouTube, loud music is loud because the master is loud.

No amount of dynamic range can make your music compete with Noisia or Svdden Death. You have to come to the playing field prepared to make sacrifices, whether that means sidechaining your non-percussion elements into other basses, manually comb filtering your basses to make room for layered basses, or simply brick-walling groups with a saturator and limiter. If you prefer your mixdown to be more transient and cozy (dynamic), remove some compression and distortion. It will change the sound of your audio and take you further from that heavy metal sound. You may then juggle your sounds to see how hard you can push the output on your limiter.



Compression


Compression
is the act of normalizing a sound. Your noise floor/ceiling are set by the user or the compressor. Once that dynamic range is set, your signal is averaged by a multiple (the ratio): 10x leads to a very flat, quiet sound; 2x leads to a dynamic but volatile waveform.

It is best to use only as much compression as needed and saturate the rest. Multiple compressors in series will rarely benefit the end user unless the goal is to chain several back to back and create an infinite decay or an overly noisy sound.

Ratio
is the factor by which a compressor averages an audio signal. It is not uncommon for high-quality compressors to begin at 1.1:1, a gentle 1.1x reduction of whatever exceeds the threshold. As we reach 2:1, anything above the threshold is reduced to half its overshoot; with makeup gain, your quiet sounds then sit roughly twice as loud relative to your loudest sounds.


Threshold
is a goal post. Any input above the threshold is reduced by whatever your ratio is set to. Volume may still exceed the threshold, but only by a fraction of the overshoot: for example, 1 dB out for every 4 dB over at 4:1.
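
Threshold and ratio together boil down to a simple rule; a minimal Python sketch (values in dB, the function name and defaults are illustrative):

    def compress_db(level_db, threshold_db=-18.0, ratio=4.0):
        # static gain computer: levels past the threshold keep only 1/ratio of the overshoot
        if level_db <= threshold_db:
            return level_db                                  # below threshold: untouched
        return threshold_db + (level_db - threshold_db) / ratio

    print(compress_db(-6.0))   # -6 is 12 dB over the threshold; output is -18 + 12/4 = -15 dB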


Range
is vital to compression. It caps how much gain reduction the compressor is allowed to apply, letting the user decide which level changes are undesirable while preserving as much of the dynamics as possible.


Attack
is the amount of time a compressor needs to respond to the input signal. A short attack creates snappy compression, but may be prone to pops and clicks. A long attack will be more stable, but requires time to reach your compressor's full output.

Optical analog compressors used a small light source (a lamp or LED) paired with a photocell. The input signal's voltage drives the light's brightness, and the photocell's lagging response sets the gain reduction. This lag is also referred to as "bloom".


Release
like attack, functions on an envelope - refer to #adsr. Release works alongside attack by deciding how long the compressor holds its gain reduction after the input signal has dropped or faded.
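
A minimal Python/numpy sketch of how attack and release shape a compressor's detector (the coefficients and defaults are illustrative, not any particular unit; it assumes a numpy float array): the level estimate chases the input quickly on the way up and lets go slowly on the way down.

    import numpy as np

    def envelope(signal, sr, attack_ms=5.0, release_ms=120.0):
        # one-pole envelope follower: fast coefficient on the way up, slow on the way down
        atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.zeros_like(signal)
        level = 0.0
        for i, x in enumerate(np.abs(signal)):
            coeff = atk if x > level else rel
            level = coeff * level + (1.0 - coeff) * x
            env[i] = level
        return env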


Expansion/Upward
Provided this is a feature of your compressor, upward compression focuses on boosting quieter signal up toward (and past) your threshold.
Downward is the normal function of a compressor as explained above. It is not typically advertised as a "feature", because downward compression is the primary function.


Knee
is how gradually the compressor eases into its ratio. With a hard knee, nothing is touched until the signal reaches the threshold and full-ratio reduction begins immediately; with a soft knee, gentle compression starts slightly below the threshold and ramps up to the full ratio above it.

Imagine that the threshold is the rendezvous for the loudest and quietest sounds, while the knee determines how much give the compressor allows.
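
A sketch of the knee in Python, using a common textbook curve (the threshold, ratio, and knee width in dB are illustrative; real compressors differ): inside the knee the gain reduction blends smoothly from none to the full ratio.

    def compress_db_knee(x_db, threshold_db=-18.0, ratio=4.0, knee_db=6.0):
        # gain computer with a soft knee: compression fades in around the threshold
        over = x_db - threshold_db
        if 2 * over < -knee_db:
            return x_db                                       # well below: untouched
        if 2 * over <= knee_db:
            # inside the knee: a smooth blend between no reduction and the full ratio
            return x_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
        return threshold_db + over / ratio                    # well above: full ratio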


Lookahead
Looks ahead of your input signal. It will compensate for the bloom effect of your compression. By allowing the compressor to see what signal is coming before it arrives, you can reduce inconsistencies in the source.

Lookahead works both with and against attack depending on your usage.[...]


In defense of multiple compressors
A solid use case would be [eq>comp>vocoder>delay>comp>eq>saturator]. The reason is that vocoders tend to make sounds much darker than before and also make your signal quieter, if not inaudible. Compressing again averages out your high end and low end and balances the audible spectrum once more while retaining your mids, depending on how aggressive your compression ratio is.



Noise


Dynamic Range
consists of your noise floor and noise ceiling. The noise floor is often below -60 dB on analog devices, but may reach -inf on digital devices. Your absolute noise ceiling will always be 0 dB. If you wish to lower your noise ceiling, that setting is called the threshold.
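
For reference, those dB figures come straight from the sample values; a quick Python/numpy conversion (the function name is made up for illustration):

    import numpy as np

    def to_dbfs(sample):
        # convert a normalized sample value (0..1) to dB relative to full scale
        return 20 * np.log10(abs(sample))

    print(to_dbfs(1.0))     # 0 dB: the absolute ceiling
    print(to_dbfs(0.001))   # -60 dB: a typical analog noise floor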


Noise Floor
is a concept that applies to all generative sources, whether a microphone or a synth. The noise floor is variable and can include feedback from electrical interference, mechanical or digital artifacts, and other unintentional sources of noise.
In reference to gating or compression, the noise floor is a cutoff for the lowest source of input. This is vital in compression because compressors average the min/max of your signal.


Noise Ceiling
Similar to threshold, the noise ceiling is the absolute maximum volume analog devices can record at.[...]



Stereo & Width


There are a few methods to achieve stereo depth, and they are often some form of delay, panning, or unison (phase).

Haas: is a delay effect. The Haas effect is a separation of the left and right channels by a short delay of up to about 20 ms.
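
A rough Python/numpy sketch of the Haas trick, assuming a mono float signal (the function name and 15 ms default are just for illustration):

    import numpy as np

    def haas(mono, sr, delay_ms=15.0):
        # widen a mono signal by delaying one channel inside the ~20 ms Haas window
        delay = int(sr * delay_ms / 1000.0)
        left = mono
        right = np.concatenate([np.zeros(delay), mono[:len(mono) - delay]])
        return left, right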

Panning: is the most primitive method, but easily the most versatile. You may reduce sounds to mono to take clutter out of a mix without reducing bass, or increase the presence of high end without increasing the volume. You can turn the rate off on a phaser and pan each pole/phase individually. You can take two unique recordings and pan them to the left and right channels respectively to create an artificial unison.

Unison: is not exclusive to synthesizers, but is most commonly associated with them. Flangers and chorus create false unison using delay. Synths create true unison by playing multiple instances of an oscillator out of phase, often in stereo. A unison mix may then be used like a volume knob.
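
A rough Python/numpy sketch of synth-style unison under simple assumptions (sine voices; the function name, detune amount, and panning law are illustrative): each voice gets a slight detune and its own start phase, then the voices are spread across the stereo field.

    import numpy as np

    def unison(f, sr, seconds=1.0, voices=5, detune_cents=8.0):
        # stack detuned copies of one oscillator, each with its own start phase,
        # and spread them across the stereo field
        t = np.arange(int(sr * seconds)) / sr
        left = np.zeros_like(t)
        right = np.zeros_like(t)
        for v in range(voices):
            spread = (v / (voices - 1)) * 2 - 1               # -1 .. +1 across the voices
            freq = f * 2 ** (spread * detune_cents / 1200.0)  # slight detune per voice
            phase = np.random.uniform(0, 2 * np.pi)           # each voice starts at its own phase
            wave = np.sin(2 * np.pi * freq * t + phase) / voices
            left += wave * (1 - spread) / 2                   # pan the voice across the field
            right += wave * (1 + spread) / 2
        return left, right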

Stereo separation/width: depending on the function, width is nothing more than a saturation of the stereo field. The best rule of thumb is to increase stereo saturation as you increase in pitch.


