Filters, Reverb & Effects

Guitarist Gustavo Martins sits on a stool playing a gold-colored electric guitar. His right foot is operating one of sixteen pedal units, most of which are in an open case on the wooden floor.
Brazilian guitarist Gustavo Martins playing electric guitar in a studio. Pedal units like the ones in this photo can add effects to the guitar's tone, creating new timbres and other special effects.

Electric and electronic devices can be used to modify an audio signal, either in a live performance situation or as part of the process of editing and mastering a recording.

Filtering and Compression

Certain audio effects can be achieved by modifying only portions of the signal with little or no effect on the remaining portions.

Frequency Filters

Filters are audio components that affect only certain frequencies of sound. A filter can boost a range of frequencies by increasing their amplitude, attenuate them by decreasing their amplitude, or pass them through without modification.

Low-pass Filters

A low-pass filter reduces the volume of sounds above a specific frequency, allowing only lower frequencies to pass through. Low-pass filters are often used with woofers, speakers designed specifically for lower frequencies, and subwoofers, speakers designed for frequencies at the lower limits of hearing.
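The behavior of a low-pass filter can be sketched in a few lines of code. This is a minimal one-pole design for illustration only; real speakers, crossover networks, and audio plugins use more sophisticated filter designs:

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """Apply a simple one-pole low-pass filter.

    Frequencies above cutoff_hz are progressively attenuated rather
    than removed outright, just as the text describes.
    """
    # Smoothing coefficient derived from the cutoff frequency.
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)  # move a fraction of the way toward the input
        out.append(y)
    return out

# A rapidly alternating (high-frequency) signal is strongly attenuated...
hi = one_pole_lowpass([1.0, -1.0] * 100, cutoff_hz=200, sample_rate=44100)
# ...while a constant (0 Hz) signal passes through nearly unchanged.
lo = one_pole_lowpass([1.0] * 200, cutoff_hz=200, sample_rate=44100)
```

A high-pass filter can be sketched the same way by subtracting this low-pass output from the original signal, leaving only the higher frequencies.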

Figure 1: Hanahaki (Bloom), a 2019 song by American singer/songwriter Molly Ofgeography. The second half of the song's bridge features a syncopated synthesizer motive used throughout the rest of the song, but with a low-pass filter applied, creating a restrained effect that builds tension leading back to the chorus.

High-pass Filters

Similarly, a high-pass filter reduces the volume of sounds below a given frequency, allowing higher frequencies to pass unaffected. High-pass filters are used to drive tweeters, speakers designed specifically for higher frequencies.

Band-pass Filters

A band-pass filter combines the functionality of low-pass and high-pass filters, suppressing frequencies which fall above or below a specific range or band of frequencies. Some band-pass filters allow the user to specify high and low frequency limits; others involve setting a center frequency and a size for the band around that frequency.

Audio utilities like intercoms and telephones often use band-pass filtering. This is sometimes intentional, as a way of reducing the amount of digital information to be sent, or a natural result of using less-expensive, limited-frequency speakers. Because these systems are designed primarily for speech, users often do not notice the limitation in fidelity.

Band-stop Filters

In contrast, a band-stop filter — also sometimes called a band-rejection filter or notch filter — suppresses frequencies within a specific range. These filters are useful in removing hums, noises which have a specific, consistent frequency, which are sometimes generated by electrical interference or low-level feedback.

Figure 3: Applying filters to an audio signal. Like acoustic filters, these digital filters do not completely block filtered frequencies, but reduce their volume.
(Excerpted from Anthem of Rain | CC BY-4.0)

Filter Sweep

A filter sweep is an effect common in EDM and other popular styles in which the frequency setting of a filter is gradually changed. The effect most commonly involves using a low-pass or band-pass filter to gradually widen the sound's frequency spectrum as a way of building tension in anticipation of a drop.
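A sweep can be sketched by gradually raising a low-pass filter's cutoff as the audio plays. This minimal example (an illustration, not how any particular plugin implements it) uses a one-pole filter whose cutoff glides linearly from nearly closed to fully open:

```python
import math

def swept_lowpass(signal, start_hz, end_hz, sample_rate):
    """One-pole low-pass filter whose cutoff glides from start_hz to
    end_hz over the length of the signal: a minimal filter sweep."""
    out, y, n = [], 0.0, len(signal)
    for i, x in enumerate(signal):
        # Linear sweep of the cutoff frequency across the signal.
        cutoff = start_hz + (end_hz - start_hz) * i / (n - 1)
        alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff / sample_rate)
        y += alpha * (x - y)
        out.append(y)
    return out

# "Opening" the filter: a rapidly alternating (high-frequency) signal is
# nearly silenced at first, then passes with increasing strength.
out = swept_lowpass([1.0, -1.0] * 2000, start_hz=20, end_hz=20000,
                    sample_rate=44100)
```

In practice the sweep is usually synchronized to the music's meter, so the filter finishes opening exactly at the drop.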

Measures 98 through 114 of `Till The World Ends,` showing the vocal line, chords and the filter setting. The audio is unfiltered until measure 103, where a low-pass filter is applied to the point of silencing all the audio. From measures 104 to 112, a slow filter sweep is used to open the filter in preparation for the drop at measure 113.
Figure 4: Measures 98–114 of American singer/songwriter Britney Spears' 2011 song Till The World Ends, showing the use of a low-pass filter which is rapidly engaged and then slowly released over eight measures.

Graphic Equalizers

A graphic equalizer is a component which includes a series of faders, each one set to control the amplitude of a specific narrow frequency range. Equalization describes the process audio professionals use to counteract limitations or idiosyncrasies of specific speakers or sound systems independently of specific performances or recordings. Consumer audio equipment will sometimes include various built-in equalizer presets to cater to different listener preferences.

A Yamaha graphic equalizer sitting on a wooden table. The black metal unit is about 17 inches wide, four inches tall and nine inches deep, and the front panel features an amber LED display, eight buttons and vertical sliders: ten sliders for the left channel, ten for the right channel, and one master output.
Figure 5: A Yamaha EQ-500 graphic equalizer unit.
(Detail from touhotus | CC BY-2.0)

Compression

In contrast to filtering audio based on frequency, dynamic range compression modifies sounds of a specific amplitude. Upward compression increases the volume of quieter portions of an audio signal, and downward compression decreases the volume of louder elements.
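Downward compression can be sketched sample by sample: any portion of the signal louder than a threshold has its excess reduced by a ratio. This is a minimal illustration; real compressors also smooth the gain changes with attack and release times so the effect is not audible as a sudden jump:

```python
def compress(samples, threshold, ratio):
    """Downward compression: the portion of each sample's level above
    `threshold` is divided by `ratio`; quieter samples pass unchanged."""
    out = []
    for x in samples:
        level = abs(x)
        if level > threshold:
            # Keep the threshold amount, reduce only the overshoot.
            level = threshold + (level - threshold) / ratio
        out.append(level if x >= 0 else -level)
    return out

quiet_and_loud = [0.1, -0.1, 0.9, -0.9]
result = compress(quiet_and_loud, threshold=0.5, ratio=4.0)
# The quiet samples are unchanged; the loud 0.9 samples are reduced
# to 0.5 + (0.9 - 0.5) / 4 = 0.6.
```

After compression, the whole signal is often amplified ("make-up gain") so the loud portions return to their original level and the quiet portions become louder overall, as in Figure 6.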

Figure 6: Applying a compressor to an audio signal. The vertical slider controls the volume reduction performed on the louder portions of the audio, as determined by the threshold controlled by the horizontal slider. Changing the level of compression affects the difference in volume between the guitar and vocals. (In this example, the compressed signal is amplified to match the overall volume of the input.)
(Excerpted from Austin Moffa | CC BY-SA-3.0)

Expansion

Compression involves the suppression of dynamic extremes; expansion is the opposite effect. Upward expansion increases the volume of the loudest parts of the audio, and downward expansion lowers the volume of the quietest parts. An extreme example of downward expansion is a noise gate, which completely silences sounds below a certain volume.
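A noise gate is simple enough to sketch directly. This minimal version silences individual samples below the threshold; real gates track the signal's overall level over time and use attack and release settings to avoid chattering on and off:

```python
def noise_gate(samples, threshold):
    """Extreme downward expansion: silence anything quieter than threshold."""
    return [x if abs(x) >= threshold else 0.0 for x in samples]

# Low-level noise is removed entirely; louder material passes unchanged.
gated = noise_gate([0.02, 0.5, -0.01, -0.8], threshold=0.05)
# → [0.0, 0.5, 0.0, -0.8]
```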

Knees

Some compression units allow the user to specify how gradually compression is applied around the threshold volume. This setting is called the knee based on how the transition is portrayed in compression graphs; a hard knee represents a sudden change in how the compression is applied, while a soft knee involves a more gradual transition.
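The two knee shapes can be written as a single input-to-output curve, with the dynamics expressed in decibels. This sketch uses one common formulation (a quadratic blend across the knee region); it is an illustration, not the curve of any particular unit:

```python
def output_level_db(in_db, threshold_db, ratio, knee_db):
    """Static compression curve in dB. knee_db = 0 gives a hard knee
    (an abrupt slope change at the threshold); larger values blend
    the two slopes gradually across a region of width knee_db."""
    overshoot = in_db - threshold_db
    if knee_db > 0 and abs(overshoot) <= knee_db / 2:
        # Inside the knee: quadratic blend between the two slopes.
        return in_db + (1 / ratio - 1) * (overshoot + knee_db / 2) ** 2 / (2 * knee_db)
    if overshoot > 0:
        return threshold_db + overshoot / ratio  # above: compressed slope
    return in_db                                 # below: unchanged

# Hard knee: the slope changes abruptly at the -20 dB threshold.
hard = output_level_db(-10, threshold_db=-20, ratio=4, knee_db=0)  # -17.5 dB
# Soft knee: compression already eases in just below the threshold.
soft = output_level_db(-21, threshold_db=-20, ratio=4, knee_db=6)
```

With the soft knee, an input just below the threshold is already slightly reduced; with the hard knee it would pass completely unchanged, which is exactly the difference the two graphs in Figure 7 portray.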

Two compression graphs, both of which have `input volume` on the horizontal axis and `output volume` on the vertical axis. In both graphs, a red line starting in the lower left corner traces a one to one correlation until reaching a threshold line in the center, where it changes to a much lower slope for the right side of the graph. The left-hand graph shows a sudden change in slope at the threshold line, and is labeled as a `hard knee.` The right hand graph shows a gradual curve to change slope, and is labeled as a `soft knee.`
Figure 7: The abruptness of the transition into compression is known as the "knee," reflecting the imagery in compression diagrams.

Common Uses

In addition to its use in music performance and recording, compression has other common applications. Sound systems in public areas like restaurants and retailers often use compression to prevent quieter portions of recorded music from being masked by ambient noise in the environment. Hearing aids use compression to amplify quieter sounds for the user's benefit while keeping louder sounds at a comfortable level.

Reverb

Reverberation is a characteristic of all sound, and managing or controlling it can help make sound clearer and more satisfying.

Natural Reverberation

When a sound is created, we hear soundwaves that travel directly from the source to our ears. However, sound is emitted in all directions, so we may also be hearing sounds which reach our ears after reflecting off nearby or distant surfaces.

Within a small room, these reflected soundwaves will likely reach our ears at the same time as the directly transmitted soundwaves. In a larger space, however, the reflected sounds may reach our ears slightly later.

An echo occurs when soundwaves are reflected back to the listener all at once, perhaps from a single, large wall. In most large rooms, however, soundwaves reflect off surfaces at many different distances. The result is a gradual arrival of reflected soundwaves which is called reverberation.

Large spaces with many different, highly reflective surfaces — for example, stone cathedrals — will create significant amounts of reverberation, causing musical performances to sound muddy and imprecise. A space which has been designed specifically by an acoustician to manage reverberation in a way that benefits listeners and performers is called an auditorium.

A photograph of a large auditorium. The stage, filled with a symphony orchestra performing with a full choir behind them, is set in front of a large archway with windows looking into a dark environment. On either side of the stage are two levels of box seating; on one side, one of the segments of box seating is replaced with several ranks of organ pipes. Regular seating in the auditorium appears to be filled to capacity with audience members.
Figure 9: An evening concert in the Alfredo Kraus Auditorium in Las Palmas, Spain. The auditorium, located in the Canary Islands and designed by Spanish architect Óscar Tusquets, features box seating, a pipe organ, and a large bank of windows behind the stage that overlooks the Atlantic Ocean. Panels hung from the ceiling provide acoustic redirection and conceal the lighting situated above them.

Recording Studios

While auditoriums and other performing environments are designed around controlling and enhancing reverberation, modern recording studios are designed to suppress it with absorption panels and soundproofing. This is so recording engineers can begin with a flat, unaffected signal and add reverberation electronically.

Musicians who are new to performing in a recording studio often find themselves unconsciously playing or singing louder to account for the lack of reverberation. To counteract this, the recording engineer will often broadcast reverb-enhanced live audio into the musicians' headphones while they are performing.

A photograph of Maika in a vocal recording booth, wearing headphones and smiling, apparently ready to sing into a microphone hung in front of her.
Figure 10: Japanese singer/songwriter Maika recording a vocal performance in a recording studio in Nishihara, Japan. Studios often provide separate booths for recording vocalists, and provide the reverberation performers normally hear from a room through a set of headphones.

Artificial Reverb

Musicians can reproduce the aural effects of reverberation using electric and electronic components. When this effect is created artificially, performers and audio engineers often use the abbreviated term reverb.

Delay

Delay involves buffering audio and playing it slightly later than the source sound. The delayed audio can be added to the original audio to create a thicker, more resonant sound. This effect is commonly heard in 1950s rockabilly recordings, like those of American singer-songwriters Carl Perkins and Buddy Holly.
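The buffering described above can be sketched by adding a quieter, time-shifted copy of the signal back onto itself. This is a minimal illustration; real delay units also offer feedback, which repeats the echo at steadily decreasing volume:

```python
def add_delay(samples, delay_samples, mix=0.5):
    """Mix a delayed copy of the signal back into the original.

    delay_samples is the offset in samples; mix is the delayed
    copy's volume relative to the original.
    """
    out = list(samples)
    for i, x in enumerate(samples):
        j = i + delay_samples
        if j < len(out):
            out[j] += mix * x
    return out

# A single impulse produces the original plus one quieter, later copy.
echoed = add_delay([1.0, 0.0, 0.0, 0.0], delay_samples=2)
# → [1.0, 0.0, 0.5, 0.0]
```

At 44,100 samples per second, a delay of a few hundred samples blends into the original sound, while tens of thousands of samples of delay would be heard as a distinct echo.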

Echo

When an additional signal is delayed enough to become aurally separate from the original — often around 0.3 seconds or more — it is considered an echo. Echo can be used as a reverb effect to emulate the sound in a larger room, or it can be used as a more stylized musical technique. In rhythmic music, the delay length of an echo can either emphasize or conflict with the overall meter.

Multitap Delay

By creating multiple delayed copies of the signal, played at specific delays and with individual volumes, rhythmic effects can be created or reinforced. This technique, called multitap delay, is common in EDM and other dance-related styles.
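Multitap delay extends the single-delay idea to a list of "taps," each with its own offset and volume. In this minimal sketch (an illustration, not any particular unit's design), the offsets can be chosen to fall on rhythmically meaningful positions:

```python
def multitap_delay(samples, taps):
    """Mix several delayed copies into the signal. Each tap is an
    (offset_in_samples, volume) pair."""
    out = list(samples)
    for offset, volume in taps:
        for i, x in enumerate(samples):
            if i + offset < len(out):
                out[i + offset] += volume * x
    return out

# One impulse with taps two and four samples later, at decreasing volume,
# producing an evenly spaced rhythmic pattern of echoes.
tapped = multitap_delay([1.0] + [0.0] * 5, taps=[(2, 0.6), (4, 0.3)])
# → [1.0, 0.0, 0.6, 0.0, 0.3, 0.0]
```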

Effects

Many other types of effects can be found in audio recording and live performance, stemming from combining basic acoustic techniques or enlisting powerful digital algorithms.

Chorus

A chorus effect is created by combining an audio signal with numerous copies of itself with slight variations in timing and pitch. The resulting effect simulates a large group of performers.

LFO

Effects like chorus and delay are consistent over time. Other effects ebb and flow periodically, creating a more expressive result. In analog synthesizers, this cyclic change is driven by a second oscillator which is used not to create sound but to change a setting of another component. Because they oscillate much more slowly than the oscillators used to generate pitches, these secondary units are called low-frequency oscillators, or LFOs.

Common LFO-based effects include:

  • Vibrato: the use of an LFO to change the frequency of a sound.
  • Tremolo: the use of an LFO to change the amplitude of a sound.
  • Flanger: the use of a second copy of the audio, with an LFO controlling how much it is delayed from the original.
  • Phaser: the use of a second copy of the audio, with an LFO controlling the phase of the duplicate against the original.
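Tremolo, the simplest of these, can be sketched directly: a slow sine wave modulates the amplitude of the signal. This is a minimal illustration of the principle, with the LFO rate and depth chosen arbitrarily:

```python
import math

def tremolo(samples, sample_rate, lfo_hz=5.0, depth=0.5):
    """Tremolo: an LFO (a sine wave at lfo_hz, far below audible pitch)
    cyclically modulates the amplitude of the signal."""
    out = []
    for i, x in enumerate(samples):
        lfo = math.sin(2.0 * math.pi * lfo_hz * i / sample_rate)
        # Gain swings between 1.0 and (1.0 - depth) once per LFO cycle.
        gain = 1.0 - depth * (0.5 + 0.5 * lfo)
        out.append(gain * x)
    return out

# One second of a constant-level signal: the output level now rises
# and falls five times per second.
out = tremolo([1.0] * 44100, sample_rate=44100)
```

Vibrato follows the same pattern with the LFO applied to frequency instead of amplitude, and flangers and phasers apply the LFO to a delayed or phase-shifted copy of the signal.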

Pitch-Shifting

Analog tools can change pitch only by changing the playback speed of recordings. Using digital sampling, modern tools can alter pitch independently of speed, allowing the effect to be applied to live performance. Pitch-shifting is sometimes used as an effect, ranging from subtle changes to more intensive distortion of the audio.

Figure 11: Seven Nation Army, a 2003 song by American rock duo The White Stripes. As a means of maintaining a self-imposed limit of only three parts per song — in this case, vocals, electric guitar and drums — guitarist Jack White used a pitch-shift pedal to play the opening bass line on his electric guitar.

Autotune

Pitch correction can be applied to individual audio tracks, either selectively or automatically, using digital pitch-shifting tools. This effect is generally called autotune after the 1997 proprietary software Auto-Tune by Antares Audio Technologies.

Pitch correction is commonly used in the studio process to fix imperfections in recorded performances in a way that keeps the effect itself unobtrusive. Autotune can also be used as an audible pitch-shifting effect, creating an unnatural pitch precision commonly used with vocal tracks.

Measures 24 through 28 of `Believe,` showing the vocal line and chords. The melismas on the lyrics `sad` and `time` are autotuned.
Figure 12: Measures 24–28 of American singer Cher's 1998 song Believe. The track is commonly recognized as the first notable use of autotune as a deliberate vocal effect, rather than a transparently corrective tool.

Vocoder

A vocoder is a device that combines inputs from two sources: a tone-producing device like a synthesizer, and a microphone. The input from the microphone is analyzed by the vocoder and used as an envelope, replacing the fundamental sounds produced by the vocal cords with the sound of the tone producer.

A diagram showing the arrangement of sound equipment used by Imogen Heap in `Hide and Seek.` There are five components: a microphone, a synthesizer keyboard, a vocoder, a mixer, and a speaker representing a sound system. The synthesizer keyboard is connected to the signal input of the vocoder, the vocoder is connected to the mixer, and the mixer is connected to the sound system. The microphone is connected both to the envelope input of the vocoder and directly to the mixer.
Figure 13: A diagram showing the components used by British singer/songwriter Imogen Heap in her 2005 song Hide and Seek. The song, which is often performed as a solo using a keytar, allows Heap's natural voice to be accompanied by a virtual "choir" generated by the vocoder.

Filters, Reverb & Effects: Summary

  • Filters suppress, or attenuate, ranges of frequencies in an audio signal.
    • A low-pass filter attenuates sounds above a specified frequency.
    • A high-pass filter attenuates sounds below a specified frequency.
    • A band-pass filter attenuates sounds above and below a specified frequency range.
  • A band-stop filter attenuates sounds within a specified frequency range. These are also called band-reject filters or notch filters.
    • A filter sweep is the use of a filter while changing the frequency threshold.
  • A graphic equalizer is a component which allows the amplification or attenuation of specific frequency ranges, often used to account for idiosyncrasies of particular speakers or sound systems.
  • Compression is the reduction of an audio signal's dynamic range by reducing the volume of loud portions and/or increasing the volume of quiet portions.
    • Expansion is the opposite of compression, and involves increasing the dynamic range of an audio signal.
    • A compressor's knee refers to the transition from unaffected audio into the dynamic range affected by the compressor.
  • Reverberation occurs when sound is reflected from nearby surfaces. Larger spaces like auditoriums will often have more reverberation, adding a slight delay to the resulting sound.
  • When the effect of reverberation is created artificially, the effect is generally referred to as reverb.
    • Delay involves playing a copy of the audio signal slightly later than the original, creating a more resonant sound.
    • Echo involves a copy of the signal at a large enough delay to aurally separate the two sounds.
  • Multitap delay is the use of multiple, evenly-spaced echoes as a means of creating a rhythmic effect.
  • Chorus is the addition of multiple copies of the audio signal with slight variations in timing and pitch.
  • LFO stands for low-frequency oscillator, a second oscillator used to cyclically change the setting of another component.
    • Vibrato is created by using an LFO to change the frequency or pitch of a sound.
    • Tremolo is created by using an LFO to change the amplitude or volume of a sound.
    • A flanger is the use of an LFO to change the delay of a second copy of the original audio signal.
    • A phaser is the use of an LFO to change the phase of a second copy of the original audio signal.
  • Digital pitch-shifting can be used as an audio effect in recorded or live performances.
    • Selective pitch-shifting for the purpose of correcting imprecise pitches, especially in vocal tracks, is called autotune.
    • While originally intended to be a transparent, corrective tool, autotune is commonly used as a deliberate vocal effect.

Exercises

Exercise 1: Applying Effects to Audio