is how long until a sound reaches the peak of the envelope. This envelope may be used to shape pitch or volume, or to modulate other parameters set by the user. Attack is triggered every time a note is pressed, and may be bypassed with a legato mode.
is how long the envelope takes to travel from the peak down to the sustain level. Decay is optional, but it and attack are triggered with every note keyed.
is the destination. On an envelope, sustain represents the 3rd node.
is everything that happens after the note is released.
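The four stages above can be sketched as a tiny envelope generator. This is a hypothetical helper, not from any library; times are in samples for simplicity, and sustain is a level (0-1), not a time.

```python
def adsr(attack, decay, sustain, release, note_len):
    env = []
    for i in range(attack):                 # rise from 0 to the peak
        env.append(i / attack)
    for i in range(decay):                  # fall from peak to sustain level
        env.append(1.0 - (1.0 - sustain) * (i / decay))
    env += [sustain] * (note_len - attack - decay)  # hold while key is down
    for i in range(release):                # fade after the key is released
        env.append(sustain * (1.0 - i / release))
    return env

env = adsr(attack=4, decay=4, sustain=0.5, release=4, note_len=12)
```

Multiply this list against a signal to shape volume, or feed it to any other parameter to modulate it.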
The harmonic series is made up of whole-number fractions of the fundamental. As a rule, you may add +1 note for every octave
from 0 up.
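The series itself is just integer multiples of the fundamental, and each higher octave contains more harmonics than the one below it, which is the intuition behind giving higher octaves room for more notes. A small sketch (the fundamental is an arbitrary choice for the demo):

```python
import math

f0 = 55.0  # A1, an arbitrary fundamental for the demo
harmonics = [f0 * n for n in range(1, 9)]            # first 8 partials
# which octave above the fundamental each partial falls in
octaves = [int(math.log2(h / f0)) for h in harmonics]
# octave 0 holds 1 partial, octave 1 holds 2, octave 2 holds 4, ...
```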
Octave 0 (sub) takes up so much amplitude that it only has room for 1 note. This is why sub bass is monophonic in
European-derived music. If you do not stick to this guideline, your sub will oscillate as it phases with accompanying
notes.
In octave 1 (bass), it is in your best interest to stick to 5ths (7 semitones apart): E and B, D and A, and so on. The circle of 5ths creates something similar to a double helix. An easier way to visualize this is to call back to our +1 rule: octave 1 now allows notes to share 1/2 of the space.
Octave 2 (lower mid) now allows notes to occupy 1/3rd of the space without clashing. Just as we had to split our 12-tone scale into steps 7 apart, 1/3rd would allow us to safely use (I will be fact-checking this later).
For octave 3 (mids), the +1 rule begins to dissolve. Notes spaced 1 semitone apart now work in the context of larger chords. You can layer up from the sub/0 octave using the aforementioned tools on real instruments, or in orchestrated pieces, with no issues. The key takeaway for the 3rd octave is that most prevalent chords take place here, around middle C or A4/440 Hz. Your dominants, flats, augmented, relative, diminished, minor, major, etc. will sound right.
A harmonic series will tell you about overtones, organic treble bleed, where your strongest harmonic lies (if not the fundamental), the peaks and troughs in your (in)audible spectrum, and how best to use any waveform. I will never encourage you to use a saw wave because it sounds like a saw wave, but rather because saws have a built-in cutoff and make your mids pop with less effort. The same goes for using a sine wave: if you need more bass, a sine wave will be loudest. A sine can also suggest the dominant note of your harmonies. You may place a single-note sine behind any chord where you want to suggest a moody E in place of the prominent F# of a B major. You may also use a sine to shape the timbre of a singer by backing their lead melody.
Wideness is Loudness.
Width is nothing more than a saturation of the stereo field. There are a few methods to achieve stereo
depth, and they are often some form of delay, panning, or unison (phase). The best rule of thumb: increase
stereo saturation as you increase in pitch.
Sound Good In Mono
A good stereo field will have positive stereo correlation, i.e. it is mono-compatible. When a sound does not hold up in mono, it will sound bad on phones, quiet in clubs, fall apart on festival systems, and generally just sound wrong in the car.
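The correlation idea can be sketched numerically. This is a simplified correlation metric (an assumption, not any particular meter's algorithm): identical channels score +1, while a polarity-flipped channel scores -1 and vanishes when summed to mono.

```python
import math

N = 1000
left = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]
right_good = list(left)            # identical channels
right_bad = [-s for s in left]     # polarity-flipped channel

def correlation(l, r):
    num = sum(a * b for a, b in zip(l, r))
    den = math.sqrt(sum(a * a for a in l) * sum(b * b for b in r))
    return num / den

good = correlation(left, right_good)   # +1: fully mono-compatible
bad = correlation(left, right_bad)     # -1: cancels when summed to mono
mono_bad = [(a + b) / 2 for a, b in zip(left, right_bad)]  # silence
```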
is the most basic, but easily the most versatile, form of stereo control. You may reduce sounds to mono to take clutter out of a mix without reducing bass, or increase the presence of high end without increasing the volume. You can turn the rate off on a phaser and separate each pole/phase individually. You can take 2 unique recordings and pan them into the left and right channel respectively to create true stereo depth. This also applies to analog synths.
creates true stereo separation via analog or acoustic environments. The randomness of every recording provides a smoother texture to your stereo field and is most commonly practiced with vocal harmonies, string instruments, and [...]
is not exclusive to synthesizers, but is most commonly associated with them. Flangers and chorus create false unison using delay. Synths create true unison by playing multiple instances of an oscillator out of phase, often spread across the stereo field. A unison mix control may then be used like a volume knob.
The Haas effect is any separation of the left and right channels by up to 20 ms. This stereo delay creates an artificial sense of depth, though keep an eye on mono compatibility: the delayed copy can comb-filter when the channels are summed.
Chorus operates between 20-50 ms and modulates the pitch and delay time of many stereo copies of the input signal.
Flange begins at 50 ms and smears the delay along a modulation cycle. Some units offer a gooey positive mode or a metallic negative (inverted-polarity) mode to bias your signal.
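The Haas range described above can be sketched as a plain sample shift of one channel. The sample rate and the 15 ms figure are illustrative choices, not recommendations:

```python
import math

SR = 44100                       # sample rate (illustrative)
delay_ms = 15                    # inside the ~20 ms Haas window
delay = int(SR * delay_ms / 1000)

# a short mono tone, copied to both channels with one side shifted
mono = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR // 10)]
left = mono
right = [0.0] * delay + mono[:len(mono) - delay]  # same signal, delayed
```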
is a technology that uses an impulse response (IR) to create a reverb with no adjustable release; the tail is whatever the IR contains. Impulse responses are wave files that capture a snapshot of a room, along with its stereo data. IRs consequently have inconsistent mono compatibility.
Convolution and IRs are required for authentic guitar sounds because distortion doesn't make the guitar; guitars sound like the speaker they are run through. To emulate common guitar sounds, an SM57-inspired comb filter may be used alongside Haas delay to replicate an XY mic configuration. This is then paired with classic cabinets like the Mesa Boogie 4x12 with its four uniquely imperfect Celestion Vintage 30s, or a Marshall 1960, which also runs Celestion cones; these shipped stock with G12T-75s (12", 75 W per speaker) wired in series-parallel.
Will add more later.
means many things. As an all-purpose definition, phase is position: your position in the stereo field, the spectral field, and in time. Good phase heavily relies on you abstaining from canceling out other waveforms. Phase cancellation occurs when waveforms in your mix cross and invert.
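Cancellation is easy to demonstrate with two equal sines at different phase offsets (a minimal sketch, not tied to any tool): in phase they reinforce, at 180 degrees they cancel completely.

```python
import math

def summed_peak(phase_deg, N=2000):
    # peak amplitude of sin(x) + sin(x + phase) over one cycle
    ph = math.radians(phase_deg)
    return max(abs(math.sin(x) + math.sin(x + ph))
               for x in (2 * math.pi * n / N for n in range(N)))

in_phase = summed_peak(0)      # ~2.0: constructive, +6 dB
quadrature = summed_peak(90)   # ~1.41: partial reinforcement
inverted = summed_peak(180)    # ~0.0: full phase cancellation
```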
There is a good chance that any sound you're looking for can be summed up as resonance: resonance in the bass, resonance on the top end, and so on. By biasing your resonant frequencies, you can control a lot of the sound with little effort. This is easiest with squares, which are harmonically rich. It can also be done with white noise, which has similarly dense harmonic content but no specific frequency.
If your oscillator has a static phase, you may set it in any of 360 degrees from its starting position. Depending on where your waveform starts, it may increase or decrease the amplitude of your output signal.
Stereo phasing, and thus mono compatibility, is commonly addressed because a pair of speakers will mute each other when played off-axis or with a delay. It is normal for a stereo pair of speakers to play perfectly out of phase and reduce the amplitude of an instrument or the master output. To prevent phase cancellation, we use a specific set of mixing techniques. Unison on synths creates a carousel of duplicate waveforms starting at different points in the cycle. In the analog domain, we record multiple unique takes of an instrument or vocal and pan them hard left and right. More common techniques are desynced delay (Haas effect), reverb, and manual panning.
The main way to prevent stereo phasing in a live setting is to control your mono compatibility. If your mix sounds as good in mono as it does in stereo, it will take a lot of effort to ruin your music live. Lastly, a phase is a sine: all waveforms consist of innumerable sines. Equalizers use additive and subtractive phasing to change the spectral response of your signal. This in turn relates to phasers, flangers, and other effects ranging from static filter sweeps to a plain bell curve. Once you understand this, audio processing becomes simpler.
Taking two phases a few cents out of tune with each other creates low-frequency oscillation (beating). This can be used on two sub basses to create a rolling vibration, or to replicate the classic guitar bend squeal.
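The detune-beating idea above, sketched with two sines a couple of hertz apart (frequencies chosen only for the demo): the sum swells and dips at the difference frequency.

```python
import math

SR = 8000
f1, f2 = 55.0, 57.0
beat_hz = abs(f1 - f2)           # the level wobbles 2 times per second
sig = [math.sin(2 * math.pi * f1 * n / SR) +
       math.sin(2 * math.pi * f2 * n / SR) for n in range(SR)]  # 1 second

# Peaks occur where the two waves align (near amplitude 2); at t = 0.25 s
# their phases differ by half a cycle and the sum nulls out.
```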
Increasing the resonance of filters (including flangers, chorus, and phasers) allows you to reharmonize or sweep existing harmonic content with more aggressive tonality.
is the act of normalizing a sound. Your noise floor and ceiling are set by you or by the compressor. After your dynamic range is set, your signal is averaged using a multiple: a 10x (10:1) setting leads to a very flat, quiet sound, while 2x (2:1) leads to a dynamic but volatile waveform.
It is best to use only as much compression as needed and saturate the rest. Stacking multiple compressors will rarely benefit the end result unless the goal is to chain several back to back and create an infinite decay or an intentionally noisy sound.
is the factor by which a compressor averages an audio signal. It is not uncommon for high-quality compressors to begin at 1.1:1, or 1.1 units of input per 1 of output. As we reach 2:1, your quiet sounds will sit twice as loud as before while your loudest sounds will be half as loud, all relative to the average volume of your input signal.
is a goal post. Any input past the threshold is reduced by however much your ratio is set to. Volume may still exceed the threshold, but only after surpassing so many decibels of gain per magnitude of ratio.
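The threshold/ratio relationship above can be written as a tiny static gain computer (hard knee, decibels in and out; the threshold and ratio values are arbitrary):

```python
def compress_db(in_db, threshold_db=-18.0, ratio=4.0):
    if in_db <= threshold_db:
        return in_db              # below the threshold: untouched
    # above it, every `ratio` dB of input yields 1 dB of output
    return threshold_db + (in_db - threshold_db) / ratio

quiet = compress_db(-30.0)   # passes through unchanged
loud = compress_db(-6.0)     # 12 dB over becomes 3 dB over: -15.0
```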
is vital to compression. It allows the user to determine what noise is undesirable while also preserving the most dynamics.
is the amount of time a compressor needs to respond to the input signal. A short attack creates snappy compression, but may be prone to pops and clicks. A long attack will be more stable, but requires time to meet your compressor's output.
Optical analog compressors used tiny light sources (small bulbs or panels, later LEDs) paired with a light-sensitive cell. The input signal would create a voltage, the light would glow, and the cell would respond to its brightness with a slight lag. This phenomenon is also referred to as "bloom".
like attack, functions on an envelope (refer to #adsr). Release plays off attack by holding the compression intensity after the input signal has reduced or faded.
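The attack/release behavior described here can be sketched as a one-pole envelope follower, a common textbook form (the time constants and sample rate are illustrative, not from any specific unit):

```python
import math

SR = 1000.0  # detector sample rate (illustrative)

def coeff(time_ms):
    # one-pole smoothing coefficient for a given time constant
    return math.exp(-1.0 / (SR * time_ms / 1000.0))

def follow(sig, attack_ms=5.0, release_ms=50.0):
    a, r = coeff(attack_ms), coeff(release_ms)
    env, out = 0.0, []
    for s in sig:
        x = abs(s)
        k = a if x > env else r   # rise with attack, fall with release
        env = k * env + (1 - k) * x
        out.append(env)
    return out

burst = [1.0] * 50 + [0.0] * 200   # a short full-scale burst, then silence
env = follow(burst)
# env ramps up quickly during the burst, then decays slowly afterward
```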
Provided this is a feature of your compressor, expansion will focus on boosting your signal up to and
past your threshold.
Downward is the normal function of a compressor as explained above. It is not typically a "feature"
of compressors, as downward compression is the primary function.
As the input signal reaches the threshold ceiling, it is reduced along a logarithmic curve. Input signal below the threshold shoots upward in volume and then tapers off as it reaches the knee.
Imagine that the threshold is the rendezvous point for the loudest and quietest sounds, while the knee determines how much give the compressor allows.
Looks ahead of your input signal to compensate for the bloom effect of your compression. By allowing the compressor
to see what signal is coming before it arrives, you can reduce inconsistencies in the source.
Lookahead works both with and against attack depending on your usage.[...]
A solid use case would be [eq>comp>vocoder>delay>comp>eq>saturator]. The reason for this is that vocoders tend to make sounds much darker than before and can leave your signal quiet, if not inaudible. Compressing again averages out your high end and low end, balancing the audible spectrum once more while retaining your mids, depending on how aggressive your compression ratio is.
are much like rectangles and squares: all distortion is saturation, and all saturation
distorts. The catch: not all distortion will make a sound louder.
Saturation specifically comes from pushing harmonic content past its peak, a.k.a. clipping. It functions much closer to fader/output gain than to distortion. While distortion will always happen after the signal passes 0 dB, saturation catches peaked harmonics on the way up and turns clipped signal into something more musical.
Distortion is what happens when a waveform is altered to create harmonic content that did not exist before. Common methods of distortion are overdrive, downsampling, clipping, and waveshaping.
In treble, distortion creates pink-to-white noise depending on the intensity of the high end. In bass, it creates humming and eventually the classic squared-out sound.
When you combine sub distortion with high end, it makes humming. With mids, it creates gargling and cracking. You can bias the sub distortion to introduce a warmth that is difficult to create by any other means.
Most distortion created by sub bass is due to the sub being louder than everything else.
Distortion is the product of pushing signal gain into a waveshaping unit, and waveshaping is a product of phase. If you keep your wave shape simple, it will "square out" your input signal and, in essence, compress it. The output signal will have more average amplitude over time relative to its peak, creating a louder output. When used sparingly, this is called saturation.
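The "square out" claim can be checked with a tanh waveshaper, one common soft-clipping curve (an illustrative choice, not the only one). Driving a sine into it raises the average (RMS) level while the peak stays put:

```python
import math

N = 1000
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]

def shape(sig, drive):
    # tanh soft clipper, normalized so peaks stay at 1.0;
    # only the shape (and therefore the RMS) changes
    return [math.tanh(drive * s) / math.tanh(drive) for s in sig]

def rms(sig):
    return math.sqrt(sum(s * s for s in sig) / len(sig))

clean = rms(sine)               # ~0.707 for a pure sine
driven = rms(shape(sine, 5.0))  # higher: closer to a square's 1.0
```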
is a term that is poorly defined. The act of clipping itself is not up to interpretation, though: in the context of live audio, clipping is objectively bad. Analog clipping rips speaker cones and overloads hardware, which causes heat damage.
In digital audio, clipping commonly happens before a sound reaches your master limiter. You can prevent this in the mixer by turning the output of your channels down, limiting in groups, and mixing quiet. As you reach for louder volumes, you will have to get more creative. Understand that if the master out doesn't clip, your DAW doesn't clip, and your master.wav does not clip.
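Digital hard clipping itself is trivial to sketch: anything past full scale (1.0 here) is simply flattened, which is what "going red" does to the waveform (the drive amount is illustrative):

```python
import math

N = 1000
# a sine driven 1.5x past full scale (about +3.5 dB over)
hot = [1.5 * math.sin(2 * math.pi * n / N) for n in range(N)]

clipped = [max(-1.0, min(1.0, s)) for s in hot]
peak = max(abs(s) for s in clipped)   # pinned at full scale
```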
If you limit your project, the clipping converts to distortion.
The confusion then comes from a misunderstanding of distortion. Your signal can go red for a number of reasons. If you didn't mean to go red and it sounds bad, the distortion is bad; no argument there. If the signal has gone red and it sounds good, keep it, or figure out ways to recreate it intentionally with waveshaping and bias it with EQ.
this is too complex of a topic for me rn. check back later. thank you.
No amount of dynamic range can make your music compete with Noisia or Svdden Death. You have to come to the playing field prepared to make sacrifices, whether sidechaining your non-percussion elements into other basses, manually comb-filtering your basses to make room for layered basses, or simply brick-walling groups with a saturator and then a limiter. If you prefer your mixdown to have cozier transients (more dynamic range), remove some compression and distortion. It will change the sound of your mix, taking you as far from an aggressive sound as one can get.
Need dynamic range? Turn everything down and push the output on your limiter. Headroom may be lost when gain staging is this aggressive, but it's not a real issue, since audio codecs do not actually "turn down" your audio; playback is simply normalized to the loudest average. If you listen to songs on SoundCloud and again on YouTube, loud music is loud because the master is loud.
consists of your noise floor and noise ceiling. The noise floor is often below -60 dB on analog devices, but may reach -inf on digital devices. Your absolute noise ceiling will always be 0 dB (digital full scale). If you wish to lower your noise ceiling, that setting is called threshold.
is a concept that applies to all generative sources, whether a microphone or a synth. The noise floor is variable
and can consist of or include feedback from electrical interference, mechanical or digital artifacts, and other
unintentional sources of noise.
In reference to gating or compression, the noise floor is a cutoff for the lowest source of input. This is vital in
compression because compressors average the min/max of your signal.
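The floor-as-cutoff idea can be sketched as a crude per-sample gate (real gates use envelope detection and hysteresis; this is only the threshold logic, with an arbitrary -60 dB floor):

```python
floor_db = -60.0
floor_lin = 10 ** (floor_db / 20)   # -60 dB is a linear amplitude of 0.001

signal = [0.5, 0.0004, -0.3, 0.0009, 0.02, -0.00005]
# samples below the floor are treated as noise and muted
gated = [s if abs(s) >= floor_lin else 0.0 for s in signal]
```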
Similar to threshold, the noise ceiling is the absolute maximum volume analog devices can record at.[...]
External explanation (this site is not affiliated; the explanation is solid).