Finalizing audio as a mastering engineer
Here I will share what I know about mastering audio files, which may come in handy when mastering music.
A mastering engineer finalizes the mix by making subtle adjustments and enhancements to the overall sound of the mix to make it sound more polished, balanced, and cohesive. Here are some examples of what a mastering engineer might do to finalize the mix:
Equalization (EQ): The mastering engineer might use EQ to balance out the frequency spectrum of the mix, making sure that all the different instruments and sounds in the mix are sitting well together and not clashing with each other.
Typically, a mastering engineer will use EQ on the overall mix rather than individual channels. This is because the mastering engineer is working with a stereo mixdown rather than individual tracks. However, some mastering engineers may use EQ on individual elements within the mix, such as vocals or drums, if necessary.
When using EQ during mastering, a mastering engineer will often use a combination of broad and narrow EQ curves to adjust the frequency balance. Broad EQ curves are used to make larger adjustments to the overall balance of the mix, while narrow EQ curves are used to make more precise adjustments to specific frequencies.
For example, if the mix sounds overly bright, the mastering engineer may use a broad EQ curve to reduce the high frequencies. Alternatively, if the mix sounds muddy or lacks clarity, the mastering engineer may use a narrow EQ curve to boost the mid frequencies and bring out more detail in the mix.
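The broad-versus-narrow distinction above can be sketched with a standard peaking ("bell") filter, where the Q parameter sets the width of the bell. This is a minimal illustration using the well-known RBJ Audio EQ Cookbook peaking formula; the `peaking_eq` function name and the example frequencies are my own choices, not a reference to any particular plugin.

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(fs, f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking filter coefficients.
    A low Q gives a broad bell; a high Q gives a narrow, surgical one."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
# Broad cut to tame an overly bright top end (low Q = wide bell)
b_broad, a_broad = peaking_eq(fs, 8000, -2.0, 0.7)
# Narrow boost to bring out midrange detail (high Q = tight bell)
b_narrow, a_narrow = peaking_eq(fs, 2500, 1.5, 4.0)

# The gain at the centre frequency matches the requested gain exactly
w, h = freqz(b_narrow, a_narrow, worN=[2 * np.pi * 2500 / fs])
gain_at_f0 = 20 * np.log10(abs(h[0]))
```

Mastering moves are typically this small: a couple of dB at most, applied over the whole stereo mix.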
Compression: The mastering engineer might use compression to control the dynamic range of the mix, making sure that the loudest parts of the mix aren’t too loud and the quietest parts aren’t too quiet.
A mastering engineer can use compression to control the dynamic range of a mix, making it more consistent and balanced. This can be achieved by applying varying degrees of compression to different parts of the frequency spectrum. For example, a mastering engineer may apply light compression to the low-end frequencies to help tighten up the bass and give it more punch, while using heavier compression on the mid-range to help bring out the vocals or other important instruments.
The mastering engineer may also use a technique called multi-band compression, which involves splitting the audio signal into several frequency bands and applying different levels of compression to each band. This allows for more precise control over the dynamic range of the mix, as each frequency range can be treated individually.
It’s important to note that the goal of compression during mastering is not to make the mix as loud as possible, but rather to control the dynamic range and create a balanced and consistent sound. Overuse of compression can lead to a loss of dynamic range and can actually make the mix sound worse, so it’s important for the mastering engineer to use compression judiciously and with a careful ear.
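The threshold-and-ratio behaviour described above reduces to a simple static gain curve. The sketch below shows only that curve; a real compressor also envelope-follows the signal with attack and release smoothing, which is omitted here for clarity, and the function name is my own.

```python
import numpy as np

def compressor_gain_db(level_db, threshold_db=-18.0, ratio=4.0):
    """Static (hard-knee) downward compression curve.
    Levels above the threshold are scaled down by the ratio;
    levels below it pass through unchanged."""
    excess = np.maximum(0.0, level_db - threshold_db)
    return -excess * (1.0 - 1.0 / ratio)

# A peak hitting -6 dBFS is 12 dB over an -18 dB threshold;
# at 4:1 only 3 dB of that overshoot remains, so gain is -9 dB
gain = compressor_gain_db(-6.0)
```

Note how gentle the maths is: even 12 dB of overshoot at a 4:1 ratio only shaves 9 dB, and mastering compression typically works with far smaller amounts of gain reduction.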
Stereo imaging: The mastering engineer might use stereo imaging techniques to widen or narrow the stereo field of the mix, making it sound more spacious and immersive.
Stereo imaging is an important aspect of the mastering process, as it can affect the perceived width and depth of the mix. A mastering engineer may use various techniques to adjust the stereo image of a mix, such as panning, stereo widening or narrowing, and mid-side processing.
- Panning involves adjusting the balance of sound between the left and right channels. Because the mastering engineer is usually working with a stereo mixdown rather than individual tracks, panning at this stage is limited to subtle channel balancing, for example correcting an image that leans slightly to one side.
- Stereo widening and narrowing can be achieved using stereo imaging plugins, which adjust the phase relationship between the left and right channels. These plugins can make the mix sound wider or narrower by adjusting the timing and level of the left and right signals.
- Mid-side processing involves separating the mix into two components: the “mid” component (the sum of the left and right signals) and the “side” component (the difference between the left and right signals). The mastering engineer can then adjust the level and EQ of the mid and side components separately to create a more balanced and spacious stereo image.
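The mid-side decomposition described in the last bullet is just sums and differences, which makes it easy to sketch. Below is a minimal numpy version (the `ms_widen` name and the width parameter are my own); scaling the side signal before decoding is the basic mechanism behind many stereo-width controls.

```python
import numpy as np

def ms_widen(left, right, width=1.0):
    """Encode L/R to mid/side, scale the side signal, decode back.
    width > 1 widens the stereo image, width < 1 narrows it,
    and width = 0 collapses the mix to mono."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right) * width
    return mid + side, mid - side

rng = np.random.default_rng(0)
L = rng.standard_normal(1024)
R = rng.standard_normal(1024)

# width = 1.0 is a perfect round trip; width = 0 yields identical channels
L1, R1 = ms_widen(L, R, 1.0)
Lm, Rm = ms_widen(L, R, 0.0)
```

In practice the engineer would apply EQ or compression to `mid` and `side` separately before decoding, for instance brightening only the side signal to add width without touching a centred vocal.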
Volume balancing: The mastering engineer might adjust volume levels, both within a track and between the tracks of a release, to make sure everything sounds balanced and cohesive.
Volume balancing is a critical component of mastering, and a mastering engineer will use it to ensure that the overall volume of the track is consistent throughout, without any sudden drops or increases in loudness. The engineer will also balance the volume of individual instruments or elements of the mix to make sure that they all sit well together in the stereo image.
To achieve this, the mastering engineer will typically use a combination of volume automation, gain adjustment, and possibly some dynamic processing techniques such as compression or limiting. The goal is to achieve a balanced, cohesive mix that sounds great across a variety of playback systems, without any parts of the track overpowering others.
The mastering engineer will also ensure that the overall loudness of the track is appropriate for the intended medium or distribution platform, taking into account factors such as streaming loudness normalization or the limitations of physical media like vinyl or CD. This may involve using various metering tools and loudness measurement standards to ensure that the final mix is compliant with industry best practices and standards.
Harmonic enhancement: The mastering engineer might use harmonic enhancement techniques to add warmth and richness to the mix, making it sound more vibrant and dynamic.
Harmonic enhancement is a technique used by mastering engineers to add harmonics to a track and give it a fuller sound. This is achieved through the use of various types of processing, including saturation, harmonic exciters, and multiband distortion. The goal of harmonic enhancement is to create a more vibrant and exciting sound by adding subtle harmonics to the audio.
One way a mastering engineer might use harmonic enhancement is by applying a subtle amount of saturation to the mix. This can add warmth and richness to the sound without changing the overall tonality too much. Harmonic exciters, on the other hand, add higher-order harmonics to the sound, making it brighter and livelier. Multiband distortion can be used to selectively add harmonics to different frequency bands, allowing the engineer to fine-tune the sound of the mix.
It’s important to note that harmonic enhancement should be used sparingly, as too much processing can result in a distorted and unpleasant sound. The mastering engineer must use their experience and judgment to determine the appropriate level of harmonic enhancement for each mix.
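The "adding harmonics" idea above can be demonstrated concretely. A `tanh` curve is a classic soft-saturation shape (one of many; this is an illustration, not any particular plugin's algorithm): because it is an odd function, it adds only odd-order harmonics to a pure tone.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # clean 1 kHz sine

# tanh soft saturation: adds odd harmonics (3 kHz, 5 kHz, ...)
# but, being an odd function, no even ones
y = np.tanh(3.0 * x)

# One second at 48 kHz gives exact 1 Hz frequency bins
spectrum = np.abs(np.fft.rfft(y)) / len(y)
fundamental = spectrum[1000]
third_harmonic = spectrum[3000]
second_harmonic = spectrum[2000]
```

Asymmetric curves, by contrast, also generate even harmonics, which is part of why different saturation flavours sound different; the engineer's job is choosing how much of which flavour the mix can take.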
To adjust the loudness of a mix, a mastering engineer may use a loudness meter to measure the integrated loudness of the audio, often using the LUFS (Loudness Units Full Scale) scale. The engineer may aim for a specific loudness target based on the intended delivery format and the genre of music.
There are several ways to measure loudness in audio files. Some common methods used by mastering engineers include:
Peak Level: This measures the highest point of the audio signal, which can be a useful indicator of potential clipping or distortion. However, it does not provide an accurate representation of perceived loudness.
RMS Level: This measures the average loudness of the audio signal over time. It is a more accurate indicator of perceived loudness than peak level, as it takes into account the dynamics of the audio signal.
Loudness Units (LU): This is a standardized method of measuring loudness that takes into account human perception of sound. It measures loudness in units called “Loudness Units” (LU), based on the K-weighting curve defined in ITU-R BS.1770, which models how the human ear perceives sound at different frequencies and levels.
Integrated Loudness (LUFS): This measures the average loudness of the entire audio file, taking into account both the short-term and long-term dynamics of the audio signal. It is a widely accepted standard for measuring loudness and is often used for broadcast and streaming services.
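The difference between peak and RMS measurement is easy to show on a full-scale sine wave. This is a bare-bones sketch (the function names are mine); a true LUFS meter would additionally apply K-weighting and gating per ITU-R BS.1770, which is deliberately omitted here.

```python
import numpy as np

def peak_dbfs(x):
    """Highest instantaneous sample level, relative to full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    """Average (RMS) level, closer to perceived loudness than peak.
    Real LUFS metering also applies K-weighting and gating."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)   # full-scale 440 Hz sine

pk = peak_dbfs(sine)    # ~0 dBFS
rms = rms_dbfs(sine)    # ~-3.01 dBFS, since a sine's RMS is 1/sqrt(2)
```

The 3 dB gap on a pure sine is the smallest you will see; on real music the peak-to-RMS gap (the crest factor) is usually 8 to 20 dB, which is exactly why peak level alone says so little about loudness.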
A limiter can be used to bring the track’s loudness to a specific LUFS level while keeping peaks under control. A limiter is a type of compressor with an effectively infinite ratio, specifically designed to prevent the audio signal from exceeding a set ceiling.
First, analyze the audio file to determine its current LUFS level. Then raise the gain into the limiter until the integrated loudness of the track reaches the desired LUFS level; the limiter’s threshold acts as a ceiling, reducing only the peaks that would otherwise exceed it.
It is important to use a limiter carefully to avoid over-compression and loss of dynamic range, which can result in a flat and lifeless sound. Take into account the genre of music and the intended listening environment when setting the target LUFS level, as different genres and playback environments may require different levels of loudness.
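The gain-then-limit workflow can be sketched in a few lines. Two loud caveats: the RMS level here is a stand-in for true LUFS (no K-weighting or gating), and the hard clip is a stand-in for a real limiter, which uses look-ahead and smooth gain reduction instead. The function name and target are my own.

```python
import numpy as np

def normalize_to_target(x, target_db=-14.0, ceiling=0.98):
    """Gain the signal so its RMS level hits a target level,
    then hard-clip any peaks above the ceiling. A real mastering
    limiter would use look-ahead and smoothed gain reduction."""
    rms_db = 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    gain = 10 ** ((target_db - rms_db) / 20)
    return np.clip(x * gain, -ceiling, ceiling)

fs = 44100
t = np.arange(fs) / fs
quiet = 0.05 * np.sin(2 * np.pi * 440 * t)   # roughly -29 dB RMS

louder = normalize_to_target(quiet, target_db=-14.0)
new_rms_db = 20 * np.log10(np.sqrt(np.mean(louder ** 2)))
```

If the required gain drives many peaks into the ceiling, the limiter is doing too much work and the target is probably too hot for the material; that is the judgment call the paragraph above describes.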
When sequencing tracks, a mastering engineer must consider a range of technical and artistic factors to create a cohesive and engaging listening experience. This includes determining the appropriate order of tracks, adjusting levels and EQ, and adding fades or other effects as needed.
To start, the engineer will typically listen to each track individually to assess its strengths and weaknesses, as well as its overall sound and mood. They may then adjust levels and EQ as needed to create a more consistent sound across all tracks, ensuring that each track sounds balanced and cohesive when played back-to-back.
Once the engineer has fine-tuned each individual track, they will turn their attention to sequencing. This involves deciding on the order in which the tracks will appear on the album or EP, and making any necessary adjustments to ensure that the transition between tracks is smooth and seamless.
To achieve this, the engineer may use various techniques, such as adjusting the levels and EQ of adjacent tracks to create a more natural and fluid transition, or adding fades or crossfades between tracks to create a more subtle and gradual shift.
Other considerations when sequencing tracks include the overall pacing of the album, the mood and energy of each individual track, and the listener’s experience as a whole. A skilled mastering engineer will use their expertise and knowledge of the genre and the music industry to create a final sequence that not only sounds great, but also tells a compelling story and engages the listener from beginning to end.
Prepare for distribution
Once the mastering engineer has finished the final mastering process, the next step is to prepare the final master for distribution. This involves creating a master format that is suitable for the specific distribution channels that the music will be released on. For example, if the music will be distributed on vinyl, the final master may need to be prepared with a specific EQ curve and level for vinyl cutting. Similarly, if the music will be released on streaming platforms, the final master may need to be optimized for loudness normalization.
Streaming services: When preparing a finalized track for streaming services such as Spotify, a mastering engineer would need to ensure that the track meets the platform’s specific technical requirements. For example, Spotify uses loudness normalization to ensure consistent playback across different tracks, so the mastering engineer would need to use LUFS (Loudness Units Full Scale) metering to measure the loudness of the track and adjust the overall level accordingly.
The mastering engineer would also need to ensure that the track is delivered in the appropriate file format and quality. Spotify recommends using lossy compressed audio in the Ogg Vorbis format, with a bit rate of at least 96 kbps for mobile streaming and 160 kbps for desktop streaming. The engineer would need to export the final master in the appropriate format and ensure that any metadata, such as song title and artist information, is properly embedded in the file.
CD: When preparing a finalized track for CD, a mastering engineer needs to consider several factors to ensure the highest quality of the audio. Here are some of the activities that a mastering engineer might do:
Red Book Standard: The engineer must ensure that the track conforms to the Red Book standard for CD audio, which specifies a 16-bit word length, a sampling rate of 44.1 kHz, and two channels of audio.
PQ Codes: The engineer must create PQ codes (timing and control data carried in the CD’s P and Q subcode channels) that indicate the start and end times of each track, the gaps between tracks, and other information relevant to the CD.
Volume Balancing: The engineer must ensure that the volume of each track is balanced so that the listener does not have to adjust the volume between tracks.
Error Checking: The engineer must perform error checking on the CD to ensure that there are no errors or glitches in the audio.
ISRC Codes: The engineer must embed ISRC (International Standard Recording Code) codes into the CD so that the tracks can be properly identified for royalty and copyright purposes.
CD-Text: The engineer may add CD-Text, which is a format for storing text information about the CD, including track titles, artist names, and album titles.
Dithering: The engineer may apply dithering to the audio to reduce quantization distortion that can occur when the bit depth is reduced from 24-bit to 16-bit.
Final Check: The engineer will do a final check to ensure that the final master sounds as intended, with the appropriate level, EQ, compression, and other effects applied.
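The dithering step above is small enough to sketch directly. This is a minimal TPDF (triangular probability density function) dither, the textbook approach; the function name is my own, and real mastering tools often add noise shaping on top of this.

```python
import numpy as np

def dither_to_16bit(x, rng=None):
    """Quantize float audio (range -1..1) to 16-bit with TPDF dither.
    Summing two uniform noises gives triangular-PDF noise of +/-1 LSB,
    which decorrelates the quantization error from the signal."""
    rng = rng or np.random.default_rng()
    tpdf = rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))
    samples = np.round(x * 32767 + tpdf)
    return np.clip(samples, -32768, 32767).astype(np.int16)

fs = 44100
t = np.arange(fs) / fs
audio = 0.5 * np.sin(2 * np.pi * 440 * t)   # 24-bit-style float master
pcm16 = dither_to_16bit(audio, np.random.default_rng(0))
```

Without the dither term, low-level signals would correlate with their own quantization error and pick up audible distortion; with it, the error becomes benign, constant-level noise.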
Vinyl: When preparing a finalized track for vinyl, a mastering engineer needs to take into account the physical properties of the needle and how it interacts with the vinyl record. The needle, or stylus, is a tiny point, usually diamond-tipped, at the end of the tonearm that rides in the grooves of the record. As the stylus moves through the grooves, it vibrates, creating the sound that is then amplified and sent to the speakers.
One of the key considerations for vinyl mastering is the dynamic range of the recording. Vinyl has a limited dynamic range compared to digital formats, so the mastering engineer needs to ensure that the loudest parts of the recording do not cause the needle to skip or distort. This may involve using compression or limiting to reduce the dynamic range of the recording.
Another consideration is the frequency response of the vinyl format. Vinyl has limitations in the high and low frequency ranges, so the mastering engineer may need to adjust the EQ of the recording to ensure that the high frequencies do not cause distortion or excessive surface noise, and that the low frequencies are not too pronounced and cause the needle to skip.
In addition, the mastering engineer needs to ensure that the spacing between the grooves is sufficient to accommodate the audio, and that the grooves are properly cut to avoid issues such as inner groove distortion or crosstalk between adjacent tracks.
Once the audio has been optimized for vinyl, the mastering engineer may create a test pressing to listen for any potential issues and make any necessary adjustments before the final pressing. The final master for vinyl may also need to be cut in a specific format, such as half-speed mastering, to ensure the highest possible audio quality on the finished product.
Cassette: When preparing a finalized track for cassette, a mastering engineer needs to take into account the unique physical properties of tape, which are different from those of digital media.
Firstly, the mastering engineer needs to consider the frequency response of cassette tape, which is narrower than that of digital formats: the high end typically rolls off somewhere in the 12 to 16 kHz region, depending on the tape formulation, the deck, and the age and condition of the tape. Therefore, the engineer may need to adjust the EQ to compensate for the frequency limitations of the tape.
Secondly, tape saturation and compression are inherent to the cassette medium. Therefore, the engineer may use analog tape saturation or compression to enhance the sound and make it more pleasing to the listener. However, it is important to use these techniques judiciously, as too much saturation or compression can result in distortion or loss of fidelity.
Thirdly, the mastering engineer needs to consider the noise floor of the cassette tape, which can be relatively high compared to digital media. Therefore, it is important to minimize any noise introduced during the recording or mastering process, such as hum, hiss or other types of interference.
Finally, the mastering engineer may need to adjust the levels of the audio to ensure that it is optimally balanced for cassette playback. This can involve adjusting the overall loudness and volume levels, as well as the panning and stereo image to create the desired effect for the listener.
In short, when preparing a finalized track for cassette, the mastering engineer must account for the unique physical properties of tape (limited frequency response, inherent saturation and compression, and a high noise floor) and adjust the audio accordingly to optimize the listening experience.
Broadcast: When preparing a finalized track for broadcast, a mastering engineer needs to ensure that the audio meets certain technical requirements to comply with broadcasting standards. These requirements include specific levels of loudness, dynamic range, and frequency response.
Broadcast audio is often consumed on a wide range of listening devices, including radios, televisions, and streaming platforms. As a result, the mastering engineer needs to ensure that the audio is mixed and mastered in a way that will translate well across all of these devices.
To prepare a track for broadcast, the mastering engineer may need to use specialized tools and techniques such as multiband compression and limiting to control the dynamic range and ensure that the audio remains within certain loudness limits. They may also need to use EQ to adjust the frequency response of the audio so that it sounds consistent across a range of playback systems.
In addition, the mastering engineer may need to create multiple versions of the audio, each tailored to specific broadcasting formats, such as stereo or surround sound, and different bitrates or sampling rates. This can involve creating alternate versions of the mix that are specifically optimized for different playback systems, such as radios or televisions.
The goal of preparing a finalized track for broadcast is to ensure that the audio sounds consistent and clear across a wide range of playback systems, and complies with the technical requirements and standards of the broadcasting industry.
High-resolution formats: When preparing a finalized track for high-resolution formats, a mastering engineer will typically focus on maintaining the highest possible audio quality. This may involve using a higher sampling rate and bit depth than standard CD quality (44.1 kHz/16-bit), such as 96 kHz/24-bit or 192 kHz/24-bit.
Examples of high-resolution audio systems include SACD (Super Audio CD) and DVD-Audio, as well as digital audio formats like FLAC and WAV that support high sampling rates and bit depths.
In addition to ensuring that the audio meets the technical specifications of the high-resolution format, the mastering engineer may also take into account the playback system and environment. For example, they may apply additional processing or adjustments to account for the characteristics of high-end audio equipment or listening rooms.
A mastering engineer must ensure that the final master is compatible with the technical specifications of the target medium. This includes verifying the bit depth, sample rate, and file format, as well as any other technical requirements that may be specific to the medium.
To ensure compatibility, a mastering engineer may use software tools that allow them to verify the technical specifications of the final master. For example, they may use a digital audio workstation (DAW) to verify the bit depth and sample rate, and to convert the file format if necessary.
They may also use specialized software or hardware to analyze the frequency response and dynamic range of the final master, to ensure that it meets the technical specifications of the target medium. For example, they may use a spectrum analyzer to analyze the frequency response of the final master and make adjustments as needed to ensure that it sounds its best on the target medium.
To ensure compatibility with the target medium’s technical specifications, a mastering engineer needs to follow a few essential steps:
Check the technical requirements: The mastering engineer needs to determine the technical specifications of the target medium, such as the required bit depth, sample rate, file format, and other specific requirements.
Convert the file format: If the final mix is not in the required format, the mastering engineer needs to convert it to the appropriate format.
Set the bit depth and sample rate: The mastering engineer needs to ensure that the bit depth and sample rate of the final master match the technical specifications of the target medium.
Use appropriate dithering: If necessary, the mastering engineer may need to apply dithering to ensure the final master is compatible with the target medium.
Test the final master: The mastering engineer should thoroughly test the final master on various playback systems to ensure it is compatible with the target medium’s technical specifications.
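The first and third steps above, checking a delivered file against the target medium's specs, can be automated for WAV masters with Python's standard-library `wave` module. This sketch (the `check_wav_specs` function is my own) builds a silent CD-spec file in memory and verifies its header against Red Book values.

```python
import io
import wave

def check_wav_specs(wav_bytes, sample_rate=44100, bit_depth=16, channels=2):
    """Read a WAV header and compare it against the target medium's
    specs (Red Book CD values by default). Returns a dict of results."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        return {
            "sample_rate_ok": w.getframerate() == sample_rate,
            "bit_depth_ok": w.getsampwidth() * 8 == bit_depth,
            "channels_ok": w.getnchannels() == channels,
        }

# Build a one-second silent CD-spec file in memory to demonstrate
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)        # 2 bytes per sample = 16-bit
    w.setframerate(44100)
    w.writeframes(b"\x00\x00" * 2 * 44100)

report = check_wav_specs(buf.getvalue())
```

A header check like this only confirms the container's specs, of course; the listening test in the final step is still what confirms the master actually sounds right.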