Alan Gardina | Sound Engineer, Musician, Educator | alangardina.com

Music 185/265B Fall 2016 Syllabus (published Thu, 01 Sep 2016)
PDF version available HERE

Course: Music 185 & 265-B – Recording Arts Workshop

Location: LAVC Music Building, Room 112

Schedule: Thursdays, 3:30 PM – 6 PM, plus lab time 6 PM – 9 PM

E-Mail: AlanGardina@Gmail.com

Book: Digital Audio Recording 2: Pro Tools & Large Ensembles by Alan Gardina

Available at TheQuimbyHouse.com

Course Description: Produce, Record, Edit, Mix, and Master a multi-track music project with Pro Tools.

Learning Outcome: Students will know how to develop a music project for the recording studio, assist with & run a multi-track recording session, use advanced audio & MIDI editing techniques in Pro Tools, mix in various styles, and master a song for commercial distribution.

What You Need

  1. Access to a computer with Pro Tools software (version 8 or higher)
  • Standard, Student, or Academic versions of the full program are recommended.
  • The free version, Pro Tools First, is limited, but it can perform the basic functions.
  • For information on Pro Tools system requirements, features, and cost, visit the manufacturer’s website at Avid.com
  2. A portable Hard Drive – NOT a Flash Drive.
  • 7200 RPM standard drive speed or faster: 5400 RPM storage drives are too slow.
  • Firewire/Thunderbolt connectors are fast enough for recording: USB is too slow.
  • The school computers do not have USB 3, so USB connections will be too slow.
  • Format compatible with Mac OS X: any hard drive can be erased and reformatted if necessary.
  3. A pair of Headphones
  • You will need your own pair of headphones in order to use the computers at school.
  • Some interfaces will need a ¼” headphone adapter – bring your own.
  4. A Song for your final project.
  • Must be recorded, edited, mixed, and mastered in Pro Tools.
  • Your completed Pro Tools session is due by the last day of class: December 15, 2016
  • Songwriting, Arranging and Jazz Combos students can use their class projects.
  • Students can either record the lab bands or book time to record their own projects in the studio.

You can start working on your final projects on your own time and at your own pace, but you MUST be finished before the last day of class/finals. Do not wait until the last minute. You will have to turn in and present your own completed – edited and mixed – music project.

The music department’s computer lab is available throughout the semester. You can work on your projects in there so you do not have to fight against the rest of the class for a turn in the control rooms. To use Pro Tools in the lab, you will need to check out an iLok from one of the professors – see the schedule on the computer lab door for more details.

Music 265 B Syllabus Spring 2016 (published Thu, 11 Feb 2016)
Course: Music 265-B & Music 185 – Recording Arts Workshop

Location: LAVC Music Building, Room 112

Schedule: Thursdays, 3 PM – 7 PM, plus lab time (TBA)

E-Mail: AlanGardina@Gmail.com

Book: Digital Audio Recording 2: Pro Tools and Large Ensembles – Available at TheQuimbyHouse.com Bookstore

Course Description: Produce, Record, Edit, Mix, and Master a multi-track music project with Pro Tools.

Learning Outcome: Students will know how to develop a music project for the recording studio, assist with & run a multi-track recording session, use advanced audio & MIDI editing techniques in Pro Tools, mix in various styles, and master a song for commercial distribution.

What You Need

  1. Access to a computer with Pro Tools software (version 8 or higher)
  • Standard, Student, or Academic versions of the full program are recommended.
  • The free version, Pro Tools First, is limited, but it can perform the basic functions.
  • For information on Pro Tools system requirements, features, and cost, visit the manufacturer’s website at Avid.com
  2. A portable Hard Drive – NOT a Flash Drive, compatible with the school computers:
  • 7200 RPM standard drive speed or faster: 5400 RPM storage drives are too slow.
  • Firewire/Thunderbolt connectors are fast enough for recording: USB is too slow.
  • The school computers don’t have USB 3, so USB connections will be too slow.
  3. A pair of Headphones
  • You will need your own pair of headphones to use the computers at school.
  4. A Song for your final project.
  • Must be recorded, edited, mixed, and mastered in Pro Tools.
  • Your completed Pro Tools session is due by the last day of class.
  • Songwriting, Arranging and Jazz Combos students can use their class projects.
  • There will be a recording lab after class. Students can either record the lab bands or book time to record their own projects in the studio.

You can start working on your final projects on your own time and at your own pace, but you MUST be finished before the last day of class/finals. Do not put this off until the last minute. You will have to turn in and present your own completed music project.

The music department’s computer lab is available throughout the semester. You can work on your projects in there so you do not have to fight against the rest of the class for a turn in the control rooms. To use Pro Tools in the lab, you will need to check out an iLok from one of the professors – see the schedule on the computer lab door for more details.

We will be covering new material each week. In order to get the most out of our time together in class & in the lab, study the material before you come to class. As we discuss each new phase of the music production process, go home and apply these techniques to your own projects. Treat this class like a professional gig: we are the clients, and you are the producer. You are expected to turn in and present a completed project, not an excuse, at the end of the semester. You have a production schedule to follow, and a deadline to meet. If you want to do well, then this project will take a lot of time and effort on your part. Plan ahead and give yourself plenty of time to develop, record, edit, mix, and master your material. Do not wait until the last few weeks of class to start working on your projects. If you do, you will not have time to finish.

 

Weekly Schedule (Approximate)

February 11: First Day of class – Syllabus, Chapter 1 – Introduction

February 18: Chapter 2 – The Music

February 25: Chapter 3 – The Equipment

March 3: Chapter 4 – The Instruments

March 10: Chapter 5 – The Studio

March 17: Chapter 6 – The Control Room

March 24: Chapter 7 – Recording

March 31: Cesar Chavez Day – No Class: Keep reading and working on your projects.

April 7: Spring Break – No Class: Keep reading and working on your projects.

April 14: Chapter 8 – Editing, Part 1

April 21: Chapter 9 – Editing, Part 2

April 28: Chapter 10 – Editing, Part 3

May 5: Chapter 11 – Mixing, Part 1

May 12: Chapter 12 – Mixing, Part 2

May 19: Chapter 13 – Mixing, Part 3

May 26: Chapter 14 – Mastering

June 2: Finals: PROJECTS DUE – Playback at the start of class.

Digital Audio Recording 2: Pro Tools and Large Ensembles (published Sun, 07 Feb 2016)

Digital Audio Recording 2: Pro Tools and Large Ensembles by Alan Gardina

The pre-release of my new textbook Digital Audio Recording 2: Pro Tools and Large Ensembles is available through The Quimby House bookstore, HERE. Digital Audio Recording 2 is the third part in The Quimby House series on recording & music technology. The first book in the series, Basic Audio Recording Techniques by Michael Julian & Marco Monahan, introduces students to the theory & equipment used in the recording studio. The second book, Digital Audio Recording: Basic Pro Tools by Michael Julian & Marco Monahan, introduces students to the Pro Tools recording environment with a hands-on project. The third book, Digital Audio Recording 2: Pro Tools and Large Ensembles by Alan Gardina, expands on the previous two books. Students learn how to…

  • Collaborate with an artist and prepare their music for the recording studio.
  • Set up a recording studio for small bands and large orchestras, including proper microphone placement for a variety of instruments.
  • Use Pro Tools to record a multi-track music project.
  • Use advanced editing techniques, like Beat Detective, Elastic Audio, and pitch-correction.
  • Mix an album with effects & automation.
  • Prepare the finished Master for sale & distribution on CD, vinyl, and digital download.

This is the Preliminary Notes Edition of the book. It is currently text-only, available as a PDF. We are in the process of developing graphics, videos, and other material to supplement the book. As we create the content, it will be updated in The Quimby House online bookstore. If you purchase the PDF download, important revisions, updates, and inserts will be sent to you via e-mail.

Music 265B Week 13: Mastering (published Wed, 02 Dec 2015)
PDF version available HERE

Mastering is the final phase in the music production process. During this phase, we turn our finished mix into a Master audio file, from which all mass-produced copies (CDs, MP3s, vinyl pressings, and digital streaming files) will be made. In other words, this is the copy that we will sell, distribute, or otherwise present to the general public. Because of this, our master copy needs to meet a few “quality control” standards before the public consumes it. This step may be as simple as bouncing our mix down to a stereo audio file, but the process is often more complicated than that.

Reference Monitoring

People will listen to our song on a variety of devices: a pair of cheap ear-buds plugged into a cell phone, a car stereo, or a massive club sound system. This means our one file needs to sound fine across all of these devices. When you mixed, you may have done all of the work through a pair of headphones, or in a home studio on a pair of decent studio monitors. The mix may have sounded fine through those speakers, but if you play it back on your car’s stereo, or have your DJ friend play your track through the nightclub’s PA system, things may suddenly sound out of balance: the extreme low end of the kick drum & bass guitar overpowers the mix, the splashy high end of the vocals & cymbals turns piercing, and so on. Tiny earbuds, headphones, laptop speakers, and small studio monitors usually aren’t capable of accurately representing the full sound spectrum: they aren’t large enough to reproduce the extreme low-end frequencies. Beyond that, everything from the configuration of the speakers to the size, shape, and surfaces of the room you’re in will add its own unique “color” to what you’re hearing: some frequencies will be boosted, others will be cut.

To compensate for all of this, we use a trick called Reference Monitoring: we listen to our mix through all of the devices that our potential fans would use to hear our music, and in different environments as well. In other words, we actively listen to our mix through headphones, a boom box, a hi-fi system with a subwoofer, and so on. Ultimately, we want a mix that sounds fine on all of these devices. For example, we want our low-pitched instruments to be loud enough to sound clear through a pair of headphones, but not so loud that they blow up a subwoofer. Our highs need to be loud enough to sound clear, without being piercing, across all of these devices.

Most audio interfaces allow us to listen to our mix through multiple devices (or A/B our mix). Our “main” monitors may be a pair of studio monitors, and our “alternate” monitors may be a set of computer speakers. We probably have a headphone port as well. Some even let us listen to our mix in a mono format. In this case, we can do most of our mastering work right away. If we need to listen on our car stereo, or on another device, we need to Bounce our song down to a stereo audio file. We will cover Bouncing near the end of the lesson.

Less is More

We have already covered a lot of the same tools that we will use during the mastering phase: equalization, compression/limiting, automation, fades, and so on. When we experimented with EQ & compression, we usually made extreme changes: drastic cuts and boosts in narrow frequency ranges, heavy compression & limiting on certain instruments, exaggerated fades, and so on. When we master a track, we usually make broad, light changes to the overall mix. The idea is that our mix is “close enough” to what we want the public to hear, so it should just need some light polishing, if anything. If you find that the song needs some extreme changes during the mastering phase, then you might just have to go back and remix the tracks. Then again, if it’s not broken, don’t fix it.

Remix

More often than not, we need to prepare our mix for the mastering phase. If necessary, we might have to go back and alter our EQ, compression, panning, and other plug-ins to fix trouble spots in the overall mix. For example, in recent years vinyl records have been making a comeback. If we’re planning on cutting our music to vinyl, we need to make a few adjustments to account for the physical limitations of that technology. In fact, we may need to make a specialized mix just for the vinyl format. Since vinyl records make their sound when the needle moves along grooves on the record, sounds that make the needle move faster will compromise the sound quality: the record may distort, and the needle might skip. This includes louder volume, high frequencies (like those crisp sibilants in the vocal track), a wide stereo image (and phasing effects), deep bass, a wide dynamic range, and so on. On top of all this, as the record plays, the grooves get smaller, and the sound quality gets worse. If you mix for vinyl, keep those limitations in mind and tailor your tracks accordingly. When we mix for a digital format, we don’t have these limitations. All things equal, in the digital world what we hear is what the fans will hear.

The Mix Bus

Since mastering involves making changes to the overall mix, we can work in one of two ways. We can either Bounce our song down to a stereo audio file & master it by itself in a new Pro Tools session, or we can do our mastering work on our main mix track in the current session. If your system has trouble managing all of the plug-in processing, it will be easier to bounce the track down and master it in a new session (see the section on Bounce to Disk; just be sure to bounce the track at the same bit depth & sample rate as our session).

If we don’t already have a mix channel, then select Track > New and create 1 – Stereo – Aux Input track. This mix track should receive its Input from an empty stereo bus (like Bus 1 & 2, or whatever is available). This track should Output to the system’s main output (like the Master Fader, on Out 1 & 2, or something similar). Next, route the outputs of all of our other tracks into that mix channel’s stereo bus. If we routed all of our instrument groups, like our drums, to their own aux track, then just reassign the output of those aux tracks. The signal flow should look like:

Instruments (audio tracks) output to instrument groups (aux tracks) input.

Instrument groups (and reverb/other effects) output to the Mix track (aux track) input.

Mix track & Master Fader output to the main system output.

With this technique, we will use our mastering plug-ins on the Mix track, and keep our Master Fader set to zero. That way, the volume level on the Master Fader is the true level of our mix. In the end, we never want our Master fader to peak into the red: this will create digital distortion.

Equalization

By now, our mix (or remix) should be relatively complete. As we listen to our mix on different speaker systems, we might need to make a few small adjustments to the overall song’s balance. We might find that the overall low end is still too loud or too soft when we listen with a subwoofer, that the midrange is too muddy or tinny, or that the highs are too piercing or too dull. Instead of making the kinds of extreme adjustments we may have used to warp the tone of our instruments, use a multiband EQ plug-in (like EQ3 7-Band) on our mix track to gently boost or cut the troublesome frequency ranges as needed.

Dynamics

We can use compression, limiting, and other techniques to alter the overall dynamic level of the song. Compressors and limiters function just like they did during the mixing phase, but we have a few more tools we may use. A Multiband Compressor/Limiter can affect several different frequency ranges. For example, we may want to control some of the pumping low frequencies, without affecting the mids or highs. With a multiband compressor, we can select just the frequency range we want, while ignoring the others. Otherwise, we can use different compression settings on other frequency ranges to get a different effect.

Generally, we may need to control some of the loudest peaks in the song. We still want to preserve the band’s overall dynamic contrast, but all of the instruments combined may “pump” on the downbeat a little too much. To manage this, we can Normalize or Maximize the track. Normalization aims to set the track’s peaks to a uniform volume: all parts of the song, from the softest to the loudest, will be raised or lowered to match the same general level. A little bit of normalization can help a mix sound consistent, but too much will be boring: the loud parts sound more exciting when there is a soft part to create contrast. Maximizing tries to make the overall track as loud as possible without distorting. A maximizer works just like a compressor/limiter, with an added feature. The Maxim plug-in has a Ceiling setting, designed to prevent our levels from going past a chosen point. For example, we can set the Ceiling to -0.1 dB (just below the limit of distortion at 0 dB) to smash the volume down below that ceiling. This can be handy when we need to prevent our tracks from peaking into distortion, but too much maximization can cause ear fatigue. Again, we want to preserve a sense of dynamic contrast.
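The normalization and ceiling ideas above reduce to simple gain math. Here is a minimal Python sketch; the helper names `peak_normalize` and `hard_ceiling` are hypothetical, and a real maximizer like Maxim uses look-ahead limiting rather than the plain clipping shown here:

```python
def peak_normalize(samples, target_db=-1.0):
    """Scale samples so the loudest peak lands at target_db (dBFS).
    Samples are floats where +/-1.0 is full scale (0 dBFS)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]
    target_linear = 10 ** (target_db / 20.0)  # dBFS -> linear gain
    gain = target_linear / peak
    return [s * gain for s in samples]

def hard_ceiling(samples, ceiling_db=-0.1):
    """Crude brickwall ceiling: clip anything past the ceiling.
    (A real maximizer limits transparently instead of clipping.)"""
    ceiling = 10 ** (ceiling_db / 20.0)
    return [max(-ceiling, min(ceiling, s)) for s in samples]

mix = [0.2, -0.5, 0.9, -0.3]
normalized = peak_normalize(mix, target_db=-1.0)
print(round(max(abs(s) for s in normalized), 3))  # → 0.891 (-1 dBFS)
```

Note how normalization preserves the ratio between loud and soft samples; only the overall gain changes, which is why it keeps dynamic contrast intact.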

Other Effects

Some engineers use some kind of Enhancer to add high-frequency “sparkle” to the mix. Enhancers work in different ways. Some, like an Exciter, add phase-shifting to the higher frequencies. Others delay certain frequency ranges and blend them back into the original signal. Plug-ins like these can add some interesting effects to your mix, but they will create phasing issues if you don’t know how to use them properly.

We can also use EQ and Compression in extreme or unconventional ways. We can filter our entire mix through a very narrow EQ band to make the mix sound like it is playing through a telephone or an old record player. To do this, use a high-pass & low-pass filter to remove all frequencies except for the midrange. A Filter Sweep is a similar effect, but the EQ band moves over time: it sounds like a guitarist’s wah pedal. Again, we filter the mix down to a narrow bandwidth, but we automate the frequency knob to “sweep” from low to high, and back down again. When dealing with repetitive music, we can tastefully apply these special effects to a small section to create some contrast.

Automation

We can still use a bit of Automation if the track calls for it. If we plan on creating a long studio fadeout, it is best to do this in the mastering phase. Otherwise, we might want to use some fader automation to control some of the loudest peaks in the song, as needed.
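A studio fadeout is just volume automation ramping the gain down to zero. Here is a toy sketch of the idea, assuming the audio is a plain Python list of samples and a linear ramp (real fades are often curved, and the helper name is hypothetical):

```python
def apply_fadeout(samples, fade_len):
    """Linear fade-out over the last fade_len samples: gain ramps
    from 1.0 down to 0.0 across the fade region."""
    n = len(samples)
    out = samples[:]
    for i in range(max(0, n - fade_len), n):
        gain = (n - 1 - i) / (fade_len - 1) if fade_len > 1 else 0.0
        out[i] = samples[i] * gain
    return out

# Four full-scale samples, faded over the last three.
print(apply_fadeout([1.0, 1.0, 1.0, 1.0], 3))  # → [1.0, 1.0, 0.5, 0.0]
```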

Dither

Dithering can be a confusing technical process for most beginners. Just know that even if you don’t fully understand how it works, you still need to use dithering when you’re converting a file from a higher resolution to a lower one. When we first started this project, we probably created our session at a high bit depth, like 24 Bit or 32 Bit Float. This higher bit depth gave us a more detailed dynamic range, but our final product will most likely be at standard CD quality: 16 Bit. Remember, bit depth refers to the potential loudness of any given sample in our song. 32 Bit & 24 Bit aren’t necessarily louder than 16 Bit, they just have a higher resolution: more potential levels between silence and the same maximum volume. Think of it like money. Let’s say you have $10. You could have a single $10 bill, ten $1 bills, or $10 worth of change. They all add up to the same total value, but the finer the denominations, the more precisely you can “round” to a nearby amount; higher bit depths work the same way.

When audio gets converted from a higher resolution down to a lower one, the loudness of any given sample gets “rounded” to the nearest bit in the dynamic range. This can cause errors, or distortion, in the track. To compensate, we use a Dither plug-in to reduce this noise. A dither plug-in uses Noise Shaping, kind of like noise cancelling, to mask these “rounding” errors in the track. Even though we have a fancy Pro Tools rig at home, capable of recording and playing back extremely high-resolution files, our fans are still going to listen to our music on much older technology. The average CD player & other digital media devices process audio at 44.1 kHz, 16 Bit. We should always use the Dither plug-in as the final processor in our signal chain. Place the Dither plug-in on the Master Fader.
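The “rounding” described above can be sketched in a few lines. This toy example quantizes a float sample to a given bit depth, and adds TPDF dither (the sum of two uniform random values, spanning about one step either way) before rounding; the helper names are hypothetical, and real dither plug-ins add noise shaping on top of this:

```python
import random

def quantize(sample, bits):
    """Round a float sample in [-1.0, 1.0] to the nearest step
    at the given bit depth (e.g. bits=16 for CD quality)."""
    steps = 2 ** (bits - 1)
    return round(sample * steps) / steps

def quantize_with_dither(sample, bits, rng=random):
    """Add triangular (TPDF) noise before rounding, so quantization
    error becomes benign hiss instead of correlated distortion."""
    lsb = 1.0 / (2 ** (bits - 1))               # one step at this depth
    noise = (rng.random() - rng.random()) * lsb  # triangular PDF
    return quantize(sample + noise, bits)

# 0.5 falls exactly on a 16-bit step, so it survives unchanged;
# most values get nudged to the nearest step.
print(quantize(0.5, 16))  # → 0.5
```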

Bounce to Disk

Bouncing to Disk lets us convert our multi-track session into a single track that other devices can understand. Before we bounce our tracks down to a single stereo file, we need to make sure our Master fader is not overloading at any given point in the song. If the meter ever peaks into the red, the track will overload and cause some unwanted distortion. To fix this, either lower the overall volume, or use compression/limiting, EQ, automation, and other tricks to make sure we never overload the Master fader.
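Checking for overloads amounts to finding the highest peak and converting it to dBFS with 20·log10(peak). A small sketch, assuming floating-point samples where ±1.0 is full scale (the function names are hypothetical):

```python
import math

def peak_dbfs(samples):
    """Return the highest peak in dBFS (0 dB = full scale).
    Anything at or above 0 dBFS will clip when bounced."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")
    return 20 * math.log10(peak)

def will_clip(samples):
    return peak_dbfs(samples) >= 0.0

quiet_mix = [0.1, -0.45, 0.3]
hot_mix = [0.6, -1.02, 0.8]   # one sample past full scale
print(round(peak_dbfs(quiet_mix), 2), will_clip(hot_mix))  # → -6.94 True
```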

When we’re ready to bounce, select and highlight a section in the session’s timeline: this represents the section that will be rendered into a stereo file, from start to finish. Typically, this means selecting the first & last clips in the session. If we have reverb and other effects that “ring out” after the last clip has played back, then be sure to highlight this “ringing” space as well. Next, select File > Bounce To > Disk to bring up the Bounce menu. In the Bounce menu, make sure that the Bounce Source matches the Output on our Master fader: this should be the system’s main stereo output (e.g. Out 1-2, Analog 1-2, and so on: every system may be different).

Let’s make a bounced file in Standard CD Quality:

File Type: Wav or Aiff

Format: Interleaved

Bit Depth: 16 Bit

Sample Rate: 44.1 kHz

SPECIAL NOTE: If you plan on mastering your track as a stereo audio file, then bounce your track using the same sample rate, bit depth, and file type as your current session (e.g. Aiff, 48kHz, 32-Bit Float). You can go to Setup > Session to see this information.

File Name: Give the track a unique name, like the title of the song and today’s date. You will most likely bounce down multiple versions or mixes of these songs, so choose a name that will make this easy to find.

Directory: This is the location where the file will be saved. Press the Choose button to change this location. It is a good idea to save these bounced files in a separate folder inside our main session folder. In other words, press the Choose button, find the Pro Tools session file, create a New Folder called “Bounced Files” if one doesn’t already exist, and choose this Bounced Files folder as the directory destination. Don’t save into the Audio Files folder. This way, the session data, and our bounced tracks are all in the same place on our hard drive.

We can create an MP3 version of this bounced file if we check the Add MP3 box. MP3 files have significantly lower audio quality than standard CD quality Wav & Aiff files, but they are also much smaller files.

Before we press the Bounce button, we have the option to use Offline bounce mode. During a normal bounce, the session plays the track in real time. If the song is 5 minutes long, Pro Tools will play the track from start to finish as it counts down to the end of the song and renders it into a finished audio file. This is our last chance to catch any mistakes: we can stop the bounce, fix the problem, and start over. Otherwise, we can use Offline bounce to quickly bounce the track down to a finished stereo file. In Offline mode, we won’t hear the track play back, but it will bounce and render much faster than real time.

Now that our Master file is finished, we should play it back to make sure everything is ok. Load the file into a media player like iTunes, and press play. If everything sounds good, burn a CD, transfer it to your phone, or upload it to a streaming service: you’re done.

Music 265B Week 12: Mixing, Part 3 (published Thu, 19 Nov 2015)
PDF version available HERE

We’re almost done with the mixing phase. So far, we have used various plug-ins to shape the tone & texture of each individual instrument. For the rest of the mixing phase, we will focus on Blending & Balancing our tracks together in context. These can be vague terms with many different meanings, but for now, think of blending & balancing as an attempt to recreate the live performance of a band on stage. We will use the Pan knobs to spread the band around the stage within the left & right spectrum, and the Faders to control how in-your-face or far away the instruments may be. We will first set some general levels, and finally use Automation to program any changes over time.

When a band plays together, everyone is expected to play at a certain volume relative to one another. In general, a soloist or featured instrument may “sit on top of the mix” as the loudest thing in the session, and the other instruments will support that featured player. In other words, those other instruments will be somewhat quieter than the lead. Even those supporting instruments have to follow a certain order within each section. The section leader may be the loudest member within that group, with the other instruments backing that part. For example, the lead guitarist is usually louder than the rhythm guitarist, who might be louder than the bass player. First-chair violin may need to be louder than the second-chair violin, second-chair may need to be louder than the viola, which may need to be louder than the cello, etc. The melody and focus of the song may even jump around from one musician to another. On top of all this, the entire band may gradually get louder (Crescendo) or gradually get softer (Diminuendo) together as the song moves from one section to the next. If you have ever seen an orchestra play, the Conductor blends & balances the instruments & sections by gesturing toward the players: up is louder, down is softer. As the mixer, we blend & balance these tracks the way the live conductor would blend & balance the musicians.

Everything Louder than Everyone Else

At first, you may be tempted to just raise the volume on the tracks that “should be” louder in the mix. We may need to raise the volume at some point, but we will eventually run out of Headroom. The volume may get raised so much that it “crashes” into the ceiling at 0 dB, and causes the track or mix to peak into distortion – we don’t want that to happen. On top of this, the overall volume usually changes from one part of the song to the next. The chorus may be louder and more energetic than the verse. If we boosted all of our levels to the limit during the quieter verse, we have nowhere to go when the chorus needs to get louder. We lose the dynamic contrast in a song when everything is louder than everyone else. The song doesn’t have a chance to “breathe” when the volume stays constant. To save us the headache & ear fatigue from listening to an overdriven track, it is usually better to cut the volume on tracks than it is to boost them. We can set our featured track’s level during the loudest, most energetic part of the song, and then turn all of the other tracks down relative to that level. In other words, instead of making a few tracks louder, try turning the other tracks down first.
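“Turning the other tracks down” is easy to reason about in dB. A fader move of x dB corresponds to a linear gain of 10^(x/20), so cutting a supporting track by 6 dB roughly halves its level while leaving headroom untouched. A quick sketch with hypothetical helper names:

```python
import math

def db_to_gain(db):
    """Convert a fader change in dB to a linear gain multiplier."""
    return 10 ** (db / 20.0)

def gain_to_db(gain):
    """Convert a linear gain multiplier back to dB."""
    return 20 * math.log10(gain)

# Cutting a supporting track by 6 dB roughly halves its level,
# instead of pushing the featured track up toward the 0 dB ceiling.
print(round(db_to_gain(-6.0), 3))  # → 0.501
```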

Looking ahead, we have one final phase after we finish mixing: Mastering. We will cover the mastering process in the next lesson, but we want to keep one thing in mind as we finish our mix. Since we will do some additional processing and volume changing during the mastering phase, we want to give ourselves a little bit of Headroom in our mix. This means we don’t want the output volume on our Mix/Master fader to clip into the red past 0 dB on the meter. In fact, we want to leave a few dB of headroom between the level of our mix and the 0 dB limit.

Panning

Remember, the Pan knob controls our track’s relative position within the Stereo Image. Mono tracks have one pan knob, set to the center position by default. This means the track plays at equal strength out of the left & right speakers: it will sound like it is coming from the center area, between the speakers. If we pan a track to the left, it will play louder out of the left speaker, and softer out of the right. If we pan it all the way to one side, it will play at full strength out of that side, and not play out of the other. Stereo tracks have two pan knobs: one panned hard left for the left-field of the track, and the other panned hard right for the right-field.
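The pan knob’s behavior can be modeled with a pan law. The sketch below uses an equal-power law, where the center position sits about 3 dB down per side so the total loudness stays constant as the knob turns; treat this as an illustrative assumption, since real DAWs (including Pro Tools) offer different center attenuations:

```python
import math

def pan_gains(pan):
    """Equal-power pan law. pan runs from -1.0 (hard left) through
    0.0 (center) to +1.0 (hard right). Returns (left_gain, right_gain)."""
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    return (math.cos(angle), math.sin(angle))

print(pan_gains(-1.0))  # hard left: full strength left, silent right
l, r = pan_gains(0.0)
print(round(l, 3), round(r, 3))  # center: ~0.707 each (equal strength)
```

The cos/sin pair keeps left² + right² equal to 1 at every knob position, which is exactly the “no loudness jump while panning” property engineers want.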

We recorded our band in odd configurations in order to get better isolation between each group of instruments. More often than not, these aren’t the positions the band would normally use if they were playing onstage at a concert. When we use the pan knobs, we usually try to recreate this “live” performance position. This means our drums, bass, and featured instruments exist near the center of the mix, while our other tracks may be spread around to the left & right. In general, we will pan a stereo configuration, like our drum overheads, to the extreme left & right to accurately reflect our stereo recording.

Balancing Drums

Let’s focus first on balancing a single instrument recorded on multiple tracks: the drums. In a typical drum recording, we used at least one kick drum track, one snare drum track, and a stereo pair of overheads. We may or may not have used additional microphones to capture other parts of the kit, but we can start by focusing on these four channels. In most styles of music, the kick & snare drum are the primary focus of the drum kit, and they’re the heartbeat that drives our rhythm section. With that in mind, the most important thing in our drum tracks is the sound of the kick & snare in the mix, but not necessarily our kick & snare tracks.

Listen to the stereo overhead drum microphones. Since they were recorded in a stereo configuration, we should start by Panning these tracks to the hard left & right, respectively. If our microphone placement was correct, we should hear the kick & snare drum in the center of the overhead image. So how do the kick & snare sound in the overhead microphones? Are they well balanced relative to the rest of the drum set, or are they too quiet? Are they missing some of the drum’s characteristic attack sound? If so, we can set the overall level for our overhead tracks (we don’t want them to peak into the red on the meter), and then gradually raise the volume on our isolated kick & snare tracks until they’re properly blended in the drum mix.

If we used additional microphones on the hi-hats, toms, and cymbals, we should first Pan those tracks somewhere in the left/right field (ask yourself: where do they sit on the drum set, relative to the kick & snare: to the left or right, and how far away?). Next, gradually blend those other drum tracks underneath the overhead tracks. If the mix sounds fine without those additional tracks, we can always Mute them. Then again, if they have something that the overheads lack, like more attack or more low end, we can alter the EQ on those tracks to enhance the overheads.

For now, we want to find a nice overall balance for the drums. These levels may need to change as we add in more instruments. To make our job a little easier, we can route all of our drum tracks to a drum “sub mix” track. To do this, create a Stereo Aux Input track (remember to give it a name, like “Drums”). Route the Outputs of all of the drum tracks to an empty Stereo Bus. Next, assign the Input of our new drum mix track to that same stereo bus, and assign the drum mix track’s Output to the main mix. Now, we can raise or lower the fader on our drum mix track to raise or lower the overall level of the drums, while maintaining the same balance within the drum set. We can even place plug-ins on this drum mix channel in order to process the sound of the entire drum kit.
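
The sub mix idea is easy to see as arithmetic: the bus sums the drum tracks, and the group fader scales that sum without disturbing the internal balance. A minimal Python sketch with hypothetical sample values (this illustrates the signal math, not Pro Tools itself):

```python
# Conceptual sketch of a drum sub mix bus: individual tracks are summed
# sample-by-sample, then one "group fader" scales the whole kit at once.
# Track names and sample values are hypothetical, not Pro Tools API calls.

def sum_bus(tracks):
    """Sum equal-length tracks (lists of samples) into one bus signal."""
    return [sum(samples) for samples in zip(*tracks)]

def apply_fader(bus, gain):
    """Scale an entire bus by a single fader gain (1.0 = unity)."""
    return [s * gain for s in bus]

kick      = [0.50, 0.10, 0.00]
snare     = [0.00, 0.40, 0.05]
overheads = [0.10, 0.20, 0.15]

drum_bus = sum_bus([kick, snare, overheads])   # internal balance preserved
drum_mix = apply_fader(drum_bus, 0.5)          # one move lowers the whole kit
```

Lowering the group fader here scales every drum equally, which is exactly why the kick/snare/overhead balance survives the move.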

Other Instruments

Drums will be the most complex multi-tracked instrument to balance, but the same process works for other instruments. For our bass & guitars, we may have used a clean direct signal as well as several microphones on the amplifier. Using the same concept, we can blend our guitar tracks together to complement one another, and we can use the same bus routing trick to send every guitar track to a separate guitar sub mix. In that case, create a new stereo aux input track and follow the same steps, but be sure to use a different bus so we don’t mix the drums and guitars together. Keyboards can receive the same treatment, along with any other instrument that we recorded with multiple microphones.

Balancing Within A Section

We may be dealing with families of instruments organized into sections. In a standard jazz big band, there is a Saxophone section with various types of saxophones, a Trumpet section, a Trombone section, and a Rhythm section (piano, bass, drums, guitar, and others). In each of the horn sections, every member plays a different part. The section leader (or first chair) usually plays the melody, or top voice, with the other chairs harmonizing below; together, they play a chord. To balance within the section, we can place the highest-pitched part on top of the section’s mix, with the other instruments slightly softer in order from the second-highest part down to the lowest. For example, in a typical saxophone section (2 alto saxophones, 2 tenor saxophones, and a baritone saxophone), the lead alto usually has the highest part in the section, followed by the 2nd chair alto sax. Next, the tenor saxophones harmonize below them, with the baritone sax at the bottom of the section. Beyond this, the players are usually physically spread out across the stage, in different chairs. We should use our Pan knobs to emulate this: the lead player (like the lead alto sax) usually sits in the middle (or center) of the section, and the other members are spread out to the left & right. We should also route each section to its own sub mix track, just like we did with the drums.
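
As a rough illustration of what a pan knob does under the hood, here is the common constant-power pan law in Python. This is a generic DSP convention for turning a pan position into left/right gains, not necessarily the exact curve Pro Tools uses, and the player positions are hypothetical:

```python
import math

def constant_power_pan(position):
    """Return (left_gain, right_gain) for a pan position in [-1.0, 1.0],
    where -1.0 is hard left, 0.0 is center, and 1.0 is hard right.
    Constant-power law: left^2 + right^2 == 1 at every position, so the
    perceived loudness stays steady as the sound moves across the field."""
    angle = (position + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Hypothetical seating: lead alto centered, 2nd alto slightly right,
# bari sax further out to the left
for name, pos in [("lead alto", 0.0), ("2nd alto", 0.3), ("bari sax", -0.6)]:
    left, right = constant_power_pan(pos)
    print(f"{name:9s} L={left:.3f} R={right:.3f}")
```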

Section Against Section

Just as each instrument has a part to play in its section’s blend, each section must be balanced against the others in the overall mix. The higher-pitched instruments sat on top of the lower ones within a section; likewise, the violins may sit on top of the lower strings, or the trumpets may play over the trombones. If we used our sub mix track technique, we can simply adjust the volume on each section’s sub mix track. Again, this lets us retain the balance within each section while raising or lowering the section’s overall level against the others. Going back to our orchestra example, an entire string section may be grouped together on one part of the stage (slightly to the conductor’s left, for example), while the drums & percussion may be spread out across the back row, from left to right. To avoid potential phasing issues, we probably shouldn’t adjust the panning on these sub mix tracks; instead, we should adjust the pan on the individual tracks.

Soloists & Featured Instruments

At some point, someone might take a solo. This might involve the lead guitarist pressing a few effects pedals (completely changing the tone of the instrument) and playing louder than everyone else. If an ensemble player jumps out to take a solo, it may be a good idea to put the solo part on its own track and treat it like a separate instrument in the mix. That way, the solo part can be altered & blended to fit its featured role without altering the rest of the same player’s background parts. Otherwise, we may need to use Automation.

Automation

When we first start mixing, we begin by setting a general fader level and pan position for each track. We can move a fader or knob up or down, and that setting will be applied to the entire track. Sometimes, though, things need to change over time. The music might have a Crescendo or Diminuendo: it might need to get louder or softer over a short period of time. Then again, we may just need to boost the volume on one specific note to create a musical Accent. If the musicians didn’t play the part that way, or if we need to exaggerate the change, we can do this with Automation. Automation allows us to record and program position changes in the volume fader, pan knob, mute button, bus sends, and other controls in Pro Tools. For example, if we “ride the fader” as the song plays, Pro Tools can record our movements and follow those same motions on every playback. However, as soon as we record any automation data, the track will lock itself to that programming until we change it. Because of this, we should try to get our general settings for the entire track “close enough” to what we want before we start writing automation data.

Tracks have several Automation Modes to choose from, available under the AUTO section in the track’s Mix window, or below the track view selector in the Edit window. Read mode will follow the track’s current automation programming. If we haven’t written any yet, every knob & fader can be changed freely at any time. Write mode will record automation data in real time, much like how we recorded audio. Simply set a track’s Automation mode to Write mode, press play, and go through the motions of moving the faders up & down as the song plays. Even if we don’t touch anything while in Write mode, Pro Tools will record the current fader & knob positions. If we go back over a spot while in Write mode, it will erase any previous automation as it records the current positions. If we record in Write mode and then switch back to Read mode, Pro Tools will follow the written automation. Off mode will ignore any automation data: it will follow the current fader & knob positions.

There are two specialized Automation modes that work best with a control surface, but they are still useful. Touch mode works like Write mode: it writes automation data as long as you are touching the fader. As soon as you let go, the fader returns to the previously written automation. This is useful when you need to manually correct a short stretch of automation. The last one, Latch mode, works like Touch mode, but after you let go it keeps writing the last fader position until playback stops.

If you don’t have access to a control surface, or aren’t comfortable with flying faders, all automation data can be programmed manually with the mouse through each track’s Automation Lanes, available in the Edit window. To see the Automation Lanes, we can change the track’s Track View Selector from Waveform to Volume (fader), Mute (button), or Pan (knob). Alternatively, we can show these same lanes by clicking on the dropdown arrow in the bottom left corner of the track in the Edit window. If we have Sends enabled on a track, we can automate those as well.

To manually create automation, click on a track and select one of the Automation Lanes: Volume to control the fader, Pan to control the left/right position, and Mute to turn the track on or off in the mix. In any lane, we will see a line drawn across the lane: this Automation Line represents the fader, knob, or mute button’s current position. For volume settings, up is louder, down is softer. For pan settings, up is left, middle is centered, and down is right. Stereo tracks will have two pan automation lanes: one for the left pan knob, one for the right. For mute settings, up is unmuted, down is muted. There is no in-between setting for the mute button.

Studio Fadeout

Let’s look at a simple automation trick: the Studio Fadeout. We use a fadeout when the band didn’t write a proper ending for their song: they repeat the last section over and over while the volume gradually fades down to nothing. We can create this effect with some simple volume automation. Open up the volume automation lane on the main mix channel (like the Master Fader). First, set the mix track’s volume fader to its default position: 0 dB (hold down the Option key and Click on the fader to set it to 0). Typically, the Master Fader should always start at 0. Next, find the place where we want the fadeout to begin, and use the Grabber tool to click on the Automation Line. A Dot will appear, marking an automation point; this first dot anchors the fader in place. Next, go to the place where the song should finish fading out (like the end of the recorded audio), and make another dot. Click and Drag this second dot down to the bottom of the lane. If we did everything right, the first dot will remain in place at the 0 dB mark, and the line will slope down to the second dot. From now on, the mix fader will fade out along that line whenever we play this section.
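
The sloping automation line is just a gain ramp over time. Here is a small Python sketch of a linear-in-dB fadeout; the dB-to-gain conversion is standard audio math, while the step count and the −60 dB end point are arbitrary choices for illustration:

```python
def db_to_gain(db):
    """Convert a fader level in dB to a linear gain multiplier."""
    return 10.0 ** (db / 20.0)

def fadeout(num_steps, start_db=0.0, end_db=-60.0):
    """A linear-in-dB fadeout, like the sloping automation line:
    step 0 sits at start_db (unity gain for 0 dB), and the last
    step lands at end_db. Requires num_steps >= 2."""
    span = end_db - start_db
    return [db_to_gain(start_db + span * i / (num_steps - 1))
            for i in range(num_steps)]

curve = fadeout(5)
# The first gain is 1.0 (the 0 dB dot); each later step is quieter.
print([round(g, 4) for g in curve])
```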

Volume Automation

Throughout the song, we may want to raise or lower some of the faders from time to time, in order to create accents, crescendos, and diminuendos. We can automate the fader volume just as we did during the fadeout, but we will use more dots. To create a smooth transition, place two dots before the section that we want to raise or lower, and two dots after. The outer dots will preserve the track’s volume settings before and after the automation. We can raise or lower the two inner dots to change the volume. A steep slope in the automation line will create a more rapid change, while a more gradual slope will take more time.
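
Between any two dots, the automation line is a straight-line interpolation, which is why a steeper slope means a faster change. A small Python sketch of how a lane’s value could be read back at any time (the dot values are hypothetical; this is the concept, not Pro Tools’ internal code):

```python
def automation_value(points, t):
    """Linearly interpolate an automation lane at time t.
    `points` is a sorted list of (time, value) dots, like the dots we
    click into the lane. Before the first dot or after the last dot,
    the lane simply holds that dot's value."""
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

# Four dots: hold at -6 dB, ramp up to 0 dB for an accent, ramp back down
dots = [(0.0, -6.0), (4.0, -6.0), (5.0, 0.0), (6.0, -6.0)]
print(automation_value(dots, 4.5))   # halfway up the ramp: -3.0
```

The two outer dots play the same role as in the text: they pin the lane to its original value before and after the accent.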

Delay Automation

When we use a special effect like Delay, we might not want to leave it on all of the time. For example, using a regular delay on a singer’s vocal track may cause the lyrics to sound muddled. However, adding an echoing delay effect to the last word or syllable in a phrase might sound tasteful. To do this, we can automate the Mute button on one of our Sends. This method can be a little bit complicated at first.

Let’s assume we have a vocal track outputting to the main mix. We might already have a separate Stereo Aux track for reverb, with a Send going from the vocal to the Reverb channel’s input. We can use this same routing process for the delay: create a Stereo Aux Input track with a delay plug-in on one of its inserts. On the vocal track, activate another Send, routed to an empty stereo bus, and assign that same bus to the Delay track’s input. Next, raise the Send fader on our vocal track until we can hear the delay effect alongside the vocal. Now we can automate the Mute button on our delay Send. In the Edit window, go to the vocal track and select the delay Send’s Mute automation lane. This time, we should start in the Mute position (bottom of the lane). Since we want to add the delay effect to specific words & syllables, we can use the Smart Tool to highlight the lane around those syllables and drag the automation up into the Unmute position. Now the Send will activate at that syllable, and quickly deactivate so no other words make it through to the delay. After this, we can click and drag the dots around to correct any mistakes.

Recap

During the mixing phase, we shape the tone and texture of our tracks through Equalization (EQ), and other signal processors. We use the pan knobs and volume faders to replicate the sound & performance of the band onstage. We use automation to create changes over time. Once we have finished mixing our tracks, we prepare our songs for release through the Mastering process: our next lesson.

Music 265B Week 11 Mixing, Part 2 (Thu, 12 Nov 2015)
http://alangardina.com/music-265b-week-11-mixing-part-2/

PDF version available HERE

In the last lesson, we experimented with some basic Signal Processors (plug-ins) like Equalizers (EQ), Compressor/Limiters, Expander/Gates, De-Essers, and Reverb. We used them on our tracks in a few ways. AudioSuite plug-ins let us process the sound of individual clips in the Edit Window, and plug-ins on the Inserts let us affect the entire track. When navigating through the various plug-in menus, you may have noticed a few plug-in categories. Under the Setup > Preferences menu, navigate to the Display tab. In the “Organize Plug-In Menus By:” option, select Category & Manufacturer. This is fairly self-explanatory: the Manufacturer is the company that released the plug-in, like Avid or Digidesign. This can be a useful menu for users with many Third-Party plug-ins: plug-ins from different manufacturers, which do not come standard with Pro Tools. Category refers to the different types of plug-ins available to us. Each category processes our sound in different ways. Any plug-in from any manufacturer, no matter how simple or advanced, will fall into one of these categories. Let’s break them down.

Equalization (EQ)

An Equalizer (EQ) is a plug-in that raises or lowers the volume of Frequencies within a Bandwidth. In its simplest form, this may consist of some simple Treble (high-frequency) and Bass (low-frequency) gain (volume) knobs, preset to a specific frequency range. A Passive (subtractive) equalizer can only reduce the gain on a frequency band, while an Active EQ can boost and cut.

A Graphic EQ is more elaborate, but follows the same concept as our simple EQ. Graphic units have multiple (often several dozen) EQ bands, locked to specific frequency ranges. These multi-band graphic EQs are usually used with studio monitors and guitar amplifiers to balance or compensate for the “sound” of the room.

A Parametric EQ is more advanced. These contain multiple EQ bands with adjustable center frequencies and adjustable bandwidths (the width of each band is referred to as the Q). Parametric equalizers also include a High-Pass Filter (or low-frequency roll-off) and a Low-Pass Filter (a high-frequency roll-off). The stock EQ3 7-Band plug-in is a Parametric Equalizer.

Dynamics

Dynamics plug-ins affect the volume and overall dynamic range (the difference between quiet and loud) of our tracks. They include Compressors, Expanders, De-Essers, and others.

Compressors & Limiters restrict the dynamic range of our tracks. While they can raise the overall level of our tracks (making the quietest parts louder), they are primarily used to reduce the loudest peaks in volume (making the loudest parts quieter). When our track gets louder than the compressor/limiter’s Threshold, the plug-in will lower the output by a certain Ratio. For example, when a compressor is set to a 2:1 ratio, if the raw signal would normally peak 2 dB (Decibels) above the threshold, the compressor would only let the signal get 1 dB louder than the threshold, 4 dB would only get 2 dB louder, and so on. A compressor with a very high ratio (e.g. 20:1) is called a Limiter. A Multi-Band Compressor can compress several different frequency ranges in different ways. Heavy compression can make a track sound thicker and generally louder, but extreme compression can make a track sound saturated, or even distorted. Extreme compression can be fatiguing on our ears.
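
The ratio arithmetic above can be written out directly. A Python sketch of a static compressor curve (real compressors also have attack & release timing, which this ignores; the −10 dB threshold is an arbitrary example value):

```python
def compress_db(level_db, threshold_db, ratio):
    """Static compressor curve: levels above the threshold only rise
    by 1/ratio as fast as the input. Levels at or below the threshold
    pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# The 2:1 example from the text, with a hypothetical -10 dB threshold:
print(compress_db(-8.0, -10.0, 2.0))   # 2 dB over -> only 1 dB over: -9.0
print(compress_db(-6.0, -10.0, 2.0))   # 4 dB over -> only 2 dB over: -8.0
print(compress_db(-8.0, -10.0, 20.0))  # 20:1 "limiter": barely over at all
```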

We can use a compressor in several different ways. Normally, we place a compressor plug-in directly on one of our track’s Insert points, like we do with most other plug-ins. However, we can use a signal from a different track to trigger the compressor on our track: Side-Chain Compression. For example, we can place the compressor on our bass track, and create a Send on our kick track to Bus the signal over to our compressor’s Key-Input. Now, whenever the kick drum is hit, the compressor will activate and gently bring down the level of our bass: this lets the kick drum pop out in the mix a little bit more.
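
The kick-triggers-bass idea can be sketched numerically. This toy version turns the bass down whenever the kick signal crosses a threshold; a real side-chained compressor applies the reduction smoothly with attack & release, and all values here are hypothetical:

```python
def duck(main, trigger, threshold=0.5, reduction=0.5):
    """Side-chain style ducking sketch: whenever the trigger signal
    (the kick) exceeds the threshold, the main signal (the bass) is
    scaled down by `reduction`. Per-sample for clarity; a real
    compressor uses smooth attack/release envelopes instead."""
    return [m * reduction if abs(t) > threshold else m
            for m, t in zip(main, trigger)]

bass = [0.4, 0.4, 0.4, 0.4]
kick = [0.9, 0.1, 0.0, 0.8]   # kick hits on samples 0 and 3
print(duck(bass, kick))       # bass dips exactly where the kick hits
```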

Another technique is called Parallel Compression. In this case, we duplicate a track’s signal, heavily compress one of the signals, and gently balance the sound of this compressed track against the uncompressed version.

Expanders & Noise Gates work like compressors in reverse: they are used to exaggerate the dynamic range of a track. When a signal falls below the expander’s threshold, the expander reduces its gain even further, making the quietest parts of a track even quieter. A more extreme version of this is a Noise Gate. The gate suppresses the track’s sound entirely until it gets louder than the threshold. When the threshold is crossed, the gate “swings open,” allowing the sound to pass through until the signal drops and the gate closes again. In this way, gates are used to control some of the unwanted background noise in tracks.
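
A noise gate’s core decision is simple to sketch. This per-sample toy version silences anything under the threshold; a real gate adds attack, hold & release envelopes so it opens and closes smoothly instead of chattering:

```python
def gate(samples, threshold):
    """A crude noise gate: silence any sample whose magnitude is under
    the threshold, let everything louder pass through untouched."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Quiet background hiss between two drum hits gets suppressed
track = [0.8, 0.02, -0.01, 0.03, -0.9, 0.01]
print(gate(track, 0.1))   # only the two loud hits survive
```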

A De-Esser tames the Sibilant and Plosive sounds (sharp S, P, T, K and other consonant sounds) in our vocal tracks. The De-Esser combines elements of the EQ & compressor. When a certain Frequency range gets louder than the de-esser’s Threshold, the plug-in will temporarily lower the level of that range. This way, we can preserve the normal high-end brightness of our vocals while taming the unwanted loud parts.

A Channel Strip plug-in combines elements of the Equalization & Dynamics plug-ins into one. It may contain an Equalizer, a Compressor, a Noise Gate, and other filters, all in a single plug-in. Some versions, like Avid’s Channel Strip plug-in, allow us to change our FX Chain: the order in which the signal is processed. We may want to equalize the signal before we compress it, or vice versa.

Pitch Shift

Pitch Shift plug-ins, as the name implies, alter the pitch of a track. This can include plug-ins like Avid’s Pitch, which can Transpose a signal up or down by Semitones & Cents. Popular Third-Party plug-ins like AutoTune and Melodyne are used in Pitch Correction, which attempts to fix the Intonation of a track. We used many of these plug-ins during the end of the editing phase.

Reverb

Reverb attempts to recreate the way soundwaves get reflected and warped in a Space. When a soundwave radiates out from its source, the wave hits our ears or microphone directly: this is the “dry” sound of the instrument. Since the instrument is played in a room, the same soundwaves eventually bounce and reflect off of the walls. These Reflections continue to Diffuse & Decay in the room, until they eventually reach our ears at different times, depending on the Size and dimensions of the room. The “space” could be a room, hall, church, or chamber of various sizes (small, medium, or large). Reverb can also be created with a metallic device like a metal Spring or Plate. Different spaces and types of reverb have their own unique characteristics.

Rather than inefficiently placing a reverb plug-in on every track in our session, we typically set up a single reverb plug-in on its own channel, using a stereo Aux track. We can then use our Sends to bus the signal over to our reverb channel. This way, we can place our entire band back in the same “room” inside our mix.

Delay

Delay plug-ins are used to create an echoing effect. We set up Delay plug-ins using the same technique as our Reverb channel. A classic tape-delay unit used a set length of tape, looped together in one continuous band. The tape would record the incoming signal into a small loop, and play it back when the tape looped back around, with a slight delay. By adjusting the speed of our delay unit, we can cause the delayed sound to land on a steady rhythmic beat, like a quarter note or an 8th note.
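
Matching the delay time to the tempo is simple arithmetic: one beat (a quarter note in 4/4) lasts 60,000 / BPM milliseconds. A quick Python helper (the tempos below are arbitrary examples):

```python
def delay_ms(bpm, note_fraction):
    """Delay time in milliseconds for a rhythmic subdivision.
    note_fraction: 0.25 (1/4) for a quarter note, 0.125 (1/8) for an
    8th note, and so on. Assumes a quarter note gets one beat (4/4)."""
    quarter = 60000.0 / bpm
    return quarter * (note_fraction / 0.25)

print(delay_ms(120, 1 / 4))   # 500.0 ms quarter-note delay at 120 BPM
print(delay_ms(120, 1 / 8))   # 250.0 ms 8th-note delay
print(delay_ms(90, 1 / 4))    # ~666.7 ms at a slower tempo
```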

Modulation

Modulation plug-ins contain some of the effects & stomp-boxes found in a guitarist’s pedal board. Most of them operate in the same way: they duplicate our incoming signal, process one of the copies, and blend the two signals back together. A Phase Shifter, or Phaser, uses a sweeping EQ filter to remove some frequencies from our signal, like a guitarist’s Wah-Wah pedal. When the two signals are blended back together, some frequencies cancel each other out, creating a long, rippling effect in the signal. A Flanger uses a short, modulating delay instead of a sweeping EQ. A Chorus uses a longer delay: when the signals are recombined, the sound tries to mimic the vibrato sound of a group of singers (a chorus). A similar effect called a Doubler modulates the pitch & delay of a signal to imitate the minor differences we might find between two separate performances of the same piece. In the end, the doubler makes it sound as though the singer or instrumentalist doubled their performance, even though we only used one take.

Harmonic

Harmonic plug-ins include other guitar pedal effects, like distortion, fuzz, overdrive, and others. Overdrive mimics the sound of a tube guitar amp that has been turned up a little too far: eventually, the signal gets loud and distorted enough to produce some aesthetically pleasing noise. Fuzz was an early precursor to the distortion pedal: it creates a musically dirty, gravel-like tone. This is the iconic sound of guitarists like Jimi Hendrix. Distortion takes this tone to extremes. It typically compresses and drives the signal to the point where it makes a harsher, but still aesthetically pleasing, distortion.

Harmonic plug-ins also include complete guitar & bass amp simulators. This can include everything from pedals & stomp-boxes, to heads & amps, speaker cabinets, rack effects, and even a simulation of different microphones placed around the speaker cone. If we recorded a clean direct signal from the guitar or bass, we can process the signal through one of these simulators.

Noise Reduction

Noise Reduction plug-ins are some of the specialized tools used in audio restoration. We use these to try to clean up and salvage poorly recorded tracks. Some noise reduction plug-ins attempt to remove clipping & pops, or strip out unwanted buzz & background noise from our tracks.

Dither

We use Dither plug-ins during the mastering phase to go from higher bit-depths to lower ones. For example, we may have recorded our session in 32-Bit Float, or 24-Bit for higher audio quality. Eventually, we will mix down to a standard CD format, at 16-Bit. Dithering helps with this conversion process. We’ll cover this during the Mastering phase.
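
As a conceptual sketch (not Avid’s dither algorithm), reducing bit depth means rescaling each sample to the smaller integer range and rounding; dither adds a tiny amount of noise before the rounding step so that quantization error becomes benign hiss instead of correlated distortion:

```python
import random

def quantize_16bit(sample, dither=True):
    """Reduce a float sample in [-1.0, 1.0] to a 16-bit integer value.
    TPDF dither adds roughly one LSB of triangular noise before
    rounding: a little hiss in exchange for less quantization
    distortion. Conceptual illustration only."""
    scaled = sample * 32767.0
    if dither:
        # Triangular PDF noise: the sum of two uniform random values
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-32768, min(32767, round(scaled)))

print(quantize_16bit(0.5, dither=False))   # a half-scale sample as 16-bit
```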

Sound Field

Sound Field plug-ins can include phase scopes, spectrum analyzers, level meters, and other visual tools used to analyze our sound: those tools don’t do anything to change the sound, they just show us what is happening. However, the Sound Field category also includes a few processors that do alter our signal. Some plug-ins try to shift, alter, and widen a signal within the stereo image (the left & right spectrum). We normally use the Pan knobs to place a sound somewhere within the left or right sound field, but some of these plug-ins can make a track sound “wider” than it really is.

Instrument

Instrument plug-ins are the virtual synthesizers, drum machines, samplers, and other noise-making modules that we use with MIDI data & Instrument tracks. MIDI provides the notes, but these plug-ins make the actual sound. Since these Instrument plug-ins are the Sound Source for our MIDI tracks, they should always be placed on the Instrument channel’s first Insert. Other plug-ins, like EQ and compressors, should be placed underneath.

Other & Effect

Some categories, like Effect and Other, can be seen in the list. These are usually plug-ins that can serve multiple functions, or ones that weren’t assigned a specific label (like Equalizer) by the manufacturer. It is also possible for one plug-in to fall under multiple categories: for example, the Channel Strip is available under both the EQ and Dynamics categories.

AudioSuite Utility Plug-Ins

If we click on the AudioSuite menu, we can see an almost identical list of plug-ins. However, we also have a few utility plug-ins that we can use to fix some problematic clips, or create new effects. In the AudioSuite menu, navigate to the Other category. Remember, AudioSuite plug-ins will render audio into new clips.

DC Offset Removal will remove any DC offset from our clips: a constant 0 Hz shift of the waveform away from the center line, usually created by bad audio conversion.

Duplicate will make a new non-destructive copy of the audio clip in the clip list.

Gain will raise or lower the overall volume of the selected clip. This serves the same function as the regular Trim plug-in.

Invert will flip the polarity on a clip. If two tracks are out of phase, this inversion may correct some of the phasing issues.

Normalize will raise or lower a clip’s gain by one constant amount so that its loudest peak hits a target level. Unlike our dynamics plug-ins, this does not change the clip’s internal dynamic contrast, but be aware that it will also raise the level of any unwanted background noise.
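
Peak normalization boils down to a single constant gain chosen from the loudest peak. A Python sketch with hypothetical sample values:

```python
def normalize(samples, target_peak=1.0):
    """Peak-normalize a clip: scale every sample by one constant so the
    loudest peak hits target_peak. The clip's internal dynamics are
    unchanged -- but any background noise comes up by the same amount."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)   # silent clip: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

clip = [0.1, -0.5, 0.25]
print(normalize(clip))   # every sample doubled so the -0.5 peak hits -1.0
```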

Reverse will render the clip backwards in time. The file will still start at the same time as the original, but all content in the clip will be reversed. To create special effects like a reverse snare hit, simply highlight a snare drum’s hit, from the start of the transient to the end of the wave, and render the clip in reverse. After that, line up the new peak with the original peak to keep the snare in time.

Signal Generator can be used to create different types of sounds, like Sine, Square, Triangle, and Sawtooth waves at any frequency, or White Noise & Pink Noise.
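
A sine wave, the simplest Signal Generator output, can be sketched in a few lines of Python (the 1 kHz tone and 44.1 kHz rate are just common example values):

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100, amplitude=1.0):
    """Generate a sine wave as a list of float samples in [-1, 1]."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2.0 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# A 1 kHz test tone, a common reference signal
tone = sine_wave(1000, 0.01)
print(len(tone))   # number of samples for 10 ms at 44.1 kHz
```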

Time Compression Expansion is identical to the TCE time stretching we used earlier in the editing phase. It can, to a certain extent, speed up or slow down clips while maintaining their original intonation.

Backwards Reverb

AudioSuite lets us create a few cool effects like backwards reverb: hearing the sound of the reverb unit swell in before the singer or instrument makes a sound. With a normal signal flow there is no way to do this in real time, so it’s impossible to perform live. Instead, we can use AudioSuite to perform a few editing functions. Let’s say we wanted to add some backwards reverb to the first word of a phrase for dramatic effect. First, we need to Select the first word or syllable in the phrase and Separate it. Next, we need to highlight that clip, along with a few seconds of “dead air” in front of the clip (the space where we want the reverb to come in). Next, we Reverse the clip & the dead air, so they play backwards in time. In AudioSuite, we need to find a Reverb plug-in, dial in the right settings, and Render the backwards clip/dead air into one continuous clip. We should now hear the backwards vocal with a long reverb tail after it. If we Reverse it again, we should have a long reverb tail with a vocal at the end, playing normally. Thankfully, AudioSuite makes this task easier on us: the AudioSuite reverb plug-ins have a Reverse button that will perform those functions for us, reversing, rendering, and reversing again.

Continuing to Mix

Where we use plug-ins, and in what order, will change the way our tracks sound. AudioSuite plug-ins affect the sound at the Clip level. The sound of these clips is then processed through the Inserts, in order from the first insert to the last. We might use an EQ on the first insert to clean up noise or carve out frequencies we don’t like. Next, we may use a compressor on the second insert: the EQ’d sound gets processed through the compressor. We may decide to use other effects to alter the tone on the next few inserts. Eventually, we may blend this processed signal alongside a reverb or delay unit on a separate track, or process a family of instruments together on a sub mix.

In our last lesson, we began to alter the EQ & dynamics of our tracks. We learned how to create a reverb bus to efficiently give our tracks the same “room” sound. Now that we have more plug-ins at our disposal, we can shape our tone & texture in new ways. We can add effects to guitars, experiment with delay, and others. For now, we just want to find a nice overall texture for our tracks. In our next lesson, we will focus on blending & balancing our tracks within each section and within the ensemble.

Music 265B Week 10 Mixing, Part 1 (Thu, 05 Nov 2015)
http://alangardina.com/music-265b-mixing-part-1/

PDF version available HERE

We may have prepared a rough mix for the band during the recording & editing phase, but we don’t start the actual mixing phase until all of the song’s elements have been recorded and edited into the right place. When we mix a song, we try to accomplish a few goals:

  1. Bring out the ideal timbre in each instrument, with respect to the song’s style.
  2. Blend & balance the individual instruments into an ensemble performance.
  3. Prepare the song for the mastering process.

Mixing With A Purpose

Mixing can be extremely subjective, and most mixing decisions come down to a matter of personal preference. There is no one true way to mix any given song, but there are some things to consider. Different instruments and many styles of music have their own “classic” stylistic sounds. A guitarist may want to emulate someone else’s iconic guitar sound, or a band may bring in their three favorite albums from similar artists and say, “Make us sound like those guys.” Your client may not be as good as their musical heroes, but having some points of reference will guide the mix in the right direction.

Use Your Ears, But Don’t Trust Them

Your ears will play tricks on you. Everything from the room’s acoustics to the speakers’ frequency response will “color” the sound that you’re hearing: some frequencies will get boosted, others will get cut, and the way that sound travels through the room will affect how the mix sounds to us. These changes will also affect how the mix “translates” to other systems. In the end, what sounds like an awesome mix in your home studio may sound terrible in your car. To help prevent this, most engineers will listen and A/B the mix through several pairs of speakers, like a relatively “accurate” pair of studio monitors with a subwoofer, a pair of consumer-grade computer speakers, headphones, or an old boom box. In the end, the best mix should sound good on any of these devices.

Another important thing to consider is Ear Fatigue. As you sit in the same room listening to the same tracks on the same pair of speakers for hours on end, your ears will gradually get desensitized. To keep our ears fresh, it is important to take a break every so often to rest our ears. We can even try to “reset” our hearing by taking some time to listen to something that sounds completely different for a few minutes. A change of sound and a change of scenery can help. If you’re mixing for long periods of time, take a break and come back a day later with a pair of fresh ears. What sounded great at the end of the day yesterday might sound awful today, now that our ears have had some time to rest.

Preparing a Session for Mixing

Our computer has a limited amount of processing power to spare. When we recorded musicians playing along to our tracks in Pro Tools, we needed our system to have minimal latency: a long delay in their headphones would have hindered the musicians’ performance. Since we’re done recording, we can dedicate more of our computer’s power toward processing the plug-ins in our mix, without worrying about a lag in the playback sound. Go to the Setup > Playback Engine menu. These settings will vary from system to system. Find the H/W Buffer Size setting and raise it to its highest value. Under Host Processors, select the 2nd highest option available: for example, if you are using a quad-core laptop, allow Pro Tools to use 3 of the 4 cores. This dedicates the majority of your computer’s CPU cores to Pro Tools’ engine, while leaving one available to process the computer’s background tasks. Set the CPU Usage Limit to roughly 80% to start, and set the Delay Compensation Engine to maximum. These settings should strike a nice balance between raw processing power and the ability to handle other system tasks. If Pro Tools frequently throws error messages during playback, change these settings around until you find something that works for your system. To see how hard your system is working, select Window > System Usage.
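
The reason buffer size matters is simple arithmetic: the hardware buffer adds roughly buffer-size divided by sample-rate of latency. A quick Python illustration (one-way buffer latency only; real round-trip latency through converters and plug-ins is higher):

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """Approximate one-way latency added by the hardware buffer.
    This is why we keep the buffer small while tracking (musicians
    hear themselves sooner) and raise it while mixing (more CPU
    headroom, and playback lag no longer matters)."""
    return 1000.0 * buffer_samples / sample_rate

for size in (64, 256, 1024):
    print(f"{size:5d} samples -> {buffer_latency_ms(size):6.2f} ms")
```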

Starting to Mix

Before we recorded our tracks, we stressed the importance of microphone choice and proper placement in order to capture the best possible sound from our instruments. During the setup for our recording session, we used proper isolation to minimize the bleed & leakage from other instruments, and we used special settings like high pass filters on our microphones. Some microphones were better suited for certain sound sources, and we even used multiple microphones on the same instrument in some cases. During the editing phase, we chose the best takes, in terms of performance & sound quality. We also used editing tricks like Strip Silence to clean up the “dead air” in our tracks. Hopefully, after all that work, our tracks sound great. However, the sound of these tracks might not be the exact sound we’re looking for in our mix. Whether we need to gently sweeten our tracks, or completely warp the tone of each instrument, we use some of the same basic tools across all of our tracks: Equalization, Compression, and Reverb. These aren’t the only tools we will use, but they’re often the first plug-ins we turn to.

Edit & Mix Window Signal Flow

Just so we have a clear understanding of how our sound gets processed through Pro Tools, let’s review our signal flow.

Signals get into our tracks through the Track Inputs (the Interface, or an internal Bus from another track). They are recorded as Audio or MIDI Clips, depending on the kind of data we are working with. These raw clips are edited, which usually involves trimming, moving, quantizing, and warping with functions like Strip Silence, Beat Detective, and Elastic Audio. They may be processed and rendered into new clips with AudioSuite plug-ins. The clip’s playback volume may be altered with Clip Gain, as well as Fades at the start & end of each clip.

After all of those functions in the Edit Window, the sound of our clips travels through the track’s 10 Inserts, in order from the first insert at the top of the channel to the last. The inserts are where we stick the majority of our plug-ins, including software instruments (for MIDI clips), Equalization (EQ), Compression, and other effects, which we will discuss throughout the mixing phase. A portion of the track’s signal can be split off and routed somewhere else through any one of the track’s 10 independent Sends, located below the inserts. Sends can be used to send this secondary signal through one of Pro Tools’ internal Busses, or through another path. We may have used these in the past to create a separate headphone mix while we were recording, but during the mix, we typically use these for special effects like Reverb & Delay on a separate Aux track, among other things, which we will discuss later.

The track and each individual send have a Pan knob, which can be used to shift the signal between the left & right sides of the stereo image. Tracks & sends also have a Fader, which controls the output level. The Mute button will silence the track or send’s output. Sends can be set to Pre-Fader mode, meaning the send’s output level will be independent of the track’s fader/output volume. Lastly, the entire signal exits the track through the Output section of the I/O. We can use the track’s output to route the entire signal to another track, like a submix on an Aux track, or out of the system to our ears through the Master Fader.

Plug-Ins

Standard Plug-Ins are signal processors that affect the sound of an entire track. When we activate a plug-in on a track’s Insert, the sound of the clip flows through the plug-in, in order from the first insert to the last. Processing a signal with regular plug-ins is non-destructive: we can add, remove, or make changes to a plug-in’s settings at any time without changing the original sound of the clips in our edit window. In other words, inserted plug-ins only affect what we hear during playback. Since this applies the same setting across the entire track, we use these inserted plug-ins for the majority of our mixing work. On the other hand, AudioSuite plug-ins (available through the AudioSuite dropdown menu) are used to make changes to individual Clips in the edit window. We often use AudioSuite plug-ins to create special effects or “fix” problematic audio clips in specific sections, rather than processing the entire track. When we select a clip and process it with an AudioSuite plug-in, our changes get rendered into a new clip, which replaces the old one in the edit window.

In other words, to process the entire track’s signal, activate and use plug-ins on the track’s Inserts. To make changes to individual sections and clips, use AudioSuite plug-ins.

Delay Compensation

Imagine two runners on a track. One lane is clear, but the other is full of hurdles. If the two runners race at the same speed, the runner who has to jump over the hurdles will eventually lag behind the runner in the empty lane. To stay in sync, the runner in the empty lane may need to slow down. When we use powerful plug-ins or add more to certain tracks, the system has to work harder in order to keep up with all of the processing. Normally, the sound of tracks with lots of plug-ins would eventually lag behind the others, like the runner jumping over hurdles. Pro Tools tries to mitigate this with Delay Compensation. When a track’s playback is lagging behind the others (delay), Pro Tools will delay all of the other tracks by that same amount (compensation) in order to make sure the tracks are heard in sync with one another.
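The runner analogy can be sketched in a few lines: every track gets padded up to the latency of the slowest one. This is an illustrative model only, not Pro Tools’ actual engine; the track names and sample counts are made up:

```python
def delay_compensation(plugin_delays):
    """Extra delay (in samples) to add to each track so that every track
    lines up with the one whose plug-in chain lags the most."""
    longest = max(plugin_delays.values())
    return {track: longest - delay for track, delay in plugin_delays.items()}

# The vocal's heavy plug-in chain lags 512 samples; the other tracks wait for it.
print(delay_compensation({"drums": 0, "bass": 128, "vocal": 512}))
# → {'drums': 512, 'bass': 384, 'vocal': 0}
```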

Frequencies

The sound spectrum is arranged from low to high-pitched frequencies, measured in Hertz (abbreviated Hz: 1000Hz is abbreviated to 1kHz, or kilo-Hertz). In digital audio, the audible range is typically 20Hz to 20kHz. We can divide this spectrum into several distinct areas.

The Low frequencies (~20-60Hz) are the chest pounding bottom end of the frequency spectrum. They include the fundamental pitches of most kick drums, and the lowest notes on the bass guitar. These frequencies are usually played back through a subwoofer: a standard pair of studio monitors may not be able to accurately play these frequencies. Too much of these extreme lows can sound muddy.

The Low-Mid frequencies (~60-250Hz) include the low fundamental pitches of most instruments. This is the warm, fat, and punchy low end of most instruments. Too much low-mids can sound boomy, and too little can sound thin.

The Mid frequencies (~200Hz-2kHz) include the high fundamental pitches and lower overtones of most instruments. It is one of the widest ranges in the spectrum, covering 3 octaves on the piano starting at middle C. These ranges cover the attack sound of most string instruments. The midrange has a lot of troublesome areas. ~200-500Hz can provide a nice fullness, but too much will make our drums sound like cardboard boxes. ~500Hz-1kHz has a honking hornlike quality. ~1-2kHz can have a tinny, nasal quality.

The High-Mid frequencies (~2-6kHz) include the upper overtones of most instruments. This range covers the attack sound of drums & cymbals, and part of the Sibilance range in vocals (hard consonant sounds around 4-8kHz). Boosting the high-mids can help instruments pop out & cut through the mix. Too much can be fatiguing.

The High frequencies (~6-20kHz) include the highest overtones, and the upper sibilance range. Boosting this range can add brightness, brilliance & sparkle. Too much can sound brittle, too little can sound dull. As we get older, and lose our hearing, these higher frequencies are typically the first ones lost. Many people can experience a significant loss in frequencies above 15kHz.

Equalization (EQ)

Equalization (EQ) plug-ins let us boost or cut back specific frequencies within the audible sound spectrum (20Hz-20kHz). Pro Tools comes standard with a powerful Equalizer plug-in called EQ3 7-Band. As the name implies, we can use this EQ plug-in to boost or cut back up to 7 different bands (frequency ranges) within the sound spectrum. The plug-in displays our current settings on a color-coded graph: each color represents a different band on the EQ. The vertical numbers (measured in Decibels, dB) represent a change in the EQ curve; positive numbers represent a boost, 0 is no change, and negative numbers represent a cut in the curve.

In the upper left corner of the plug-in window, the Input knob can boost or cut the incoming volume (Pre-EQ). The Phase Reversal button (a zero with a slash) will flip the incoming signal 180 degrees out of phase when pressed. The Output knob controls the outgoing volume from the plug-in (Post-EQ). These knobs default to 0 dB (no change).
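Both of those controls have simple arithmetic behind them. A hedged sketch (the function names are mine, not Avid’s): a change in decibels maps to a linear gain of 10^(dB/20), and a 180-degree polarity flip simply negates every sample:

```python
def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20.0)

def invert_polarity(samples):
    """The 'phase reversal' button: multiply every sample by -1."""
    return [-s for s in samples]

print(round(db_to_gain(6.0), 2))      # → 2.0: +6 dB roughly doubles amplitude
print(round(db_to_gain(-6.0), 2))     # → 0.5: -6 dB roughly halves it
print(invert_polarity([0.5, -0.25]))  # → [-0.5, 0.25]
```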

Beneath the Input & Output controls are the filters. The High-Pass Filter (HPF), or low-cut, can be used to roll-off the Low frequencies, just like the roll-off switch on our microphones. Activate it by pressing the IN button next to the HPF section. We can choose the shape of the HPF curve: either a Low-Shelf (default), which will affect all frequencies below the Frequency knob’s setting, or a Bell Curve which will affect a narrow bandwidth. Use the Q knob to alter the bandwidth and intensity of the filter.

The Low-Pass Filter (LPF) works just like the high-pass filter, except it rolls off the high frequencies, allowing the lower ones to pass through.

The bottom section of the EQ3 plug-in contains 5 individual EQ bands, corresponding to the Low Frequencies (LF), Low-Mid Frequencies (LMF), Midrange Frequencies (MF), High-Mid Frequencies (HMF), and the High Frequencies (HF). Each section contains 3 knobs: a Frequency knob to select the center of the curve, a Q knob to adjust the width, and a Gain knob to boost or cut back this frequency range. These bands don’t need to be used within those specific ranges. When combined together, the bands may overlap and create different EQ curve shapes.

EQing a Track

Back in our discussion of microphone choice & placement, we described the frequency ranges of many different instruments, and some of their sonic characteristics. Every note has a fundamental pitch (its lowest frequency) followed by a series of overtones. If we know what an instrument’s lowest note is, or how it is tuned, then we can start by eliminating all of the frequencies below that lowest fundamental pitch. Let’s imagine we are trying to EQ a violin in standard tuning. When the strings are in tune, the lowest string is tuned to the note G3 (196Hz). In this case, anything below 196Hz is not coming from the violin: we can consider it to be noise, since it probably contains leakage from other instruments. We can activate our High-Pass Filter (HPF), set it just below 196Hz, and chop off every frequency beneath that. From there, we can listen to the sound of the violin, and adjust frequencies to our preference. The lows may sound thin, so we boost them. The mids might be ok, so we leave them alone. The high-mids may sound scratchy, so we cut them. The high overtones may sound dull, so we boost them to add some sparkle.
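Under the common concert-pitch assumption (A4 = 440Hz), each semitone multiplies frequency by the twelfth root of 2, which is what puts the violin’s open G string near 196Hz. A quick sketch for finding any note’s fundamental (the function name is illustrative):

```python
A4 = 440.0  # concert-pitch assumption

def note_freq(semitones_from_a4):
    """Equal-temperament frequency of a note, by its distance from A4 in semitones."""
    return A4 * 2 ** (semitones_from_a4 / 12.0)

# The violin's lowest open string, G3, sits 14 semitones below A4.
print(round(note_freq(-14), 1))  # → 196.0 Hz
```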

In some cases, we don’t know where the sounds we hear are coming from. In that case, we have to use our ears & EQ bands to figure out where the problem areas are. Most drums aren’t tuned to a specific pitch. Instead, the head is tuned to the shell’s lowest possible pitch. To figure out where this is, we can solo the snare drum, activate the EQ plug-in, and enable the High-Pass Filter. While the snare drum plays, we can gradually raise the frequency knob until we hear the lows of the snare drum disappear. Then, we can gradually turn the frequency knob back down until we find the exact spot where the snare drum’s lows come in clearly, while removing the low kick drum leakage from the track. The frequencies immediately above this area should be the low frequencies of the snare drum. We can center our Low Frequency band over this area, and either boost or cut back this section to our liking. We may have to figure out where the attack sound of the snare drum is coming from. Since this is buried somewhere in the middle of the spectrum, we’ll need to use a hidden feature in the EQ3 plug-in: Band Pass Mode.

Band Pass Mode lets us effectively solo an individual EQ band inside the plug-in. To use it, hold down the Control & Shift keys, and click on one of the EQ bands. This will bypass the Gain knob, but it will let us hear just the frequencies inside of our selected EQ band. While the track plays, we can still click and drag on the Frequency & Q knobs, and scan back & forth through the spectrum until we find the sound we’re looking for. Once we’ve found it, we can let go and adjust the Gain on this band to boost or cut back these frequencies. This trick is great for finding trouble spots, like the unwanted ring in a snare drum, the nasal sound of a vocalist, and so on.

Boost or Cut?

Equalization will raise or lower the volume of specific frequencies within a track. In the end, this will usually make the overall track louder or softer in the mix. There are a few ways to approach this. With Additive EQing, we boost the frequencies we want to hear (the good parts), and then turn down the overall volume to match the track’s level. In Subtractive EQing, we cut the parts we don’t want (the bad parts), and raise the overall volume to match the track’s level. Ultimately, if we’re boosting, cutting, and then adjusting the output gain to match, there is no difference. However, since most people don’t bother to match the overall volume when they EQ, their tracks often wind up getting louder and louder until they’re in danger of clipping. Because of this, as a general rule, it’s usually better to cut when we EQ. If you’re not sure where to start, the EQ3 7-Band plug-in comes with a wide variety of Presets available in the Preset dropdown menu. Look for the words “<factory default>” near the top of the plug-in window. These Presets are good starting points for finding an ideal sound for any given instrument. They may not be exactly what we want, but with some fine-tuning, they can be a timesaver. However, you should always trust your ears, not the preset: its idea of how an instrument should be equalized may not work for your recording.

One Instrument, Multiple Microphones

Having multiple microphones on the same instrument can be tricky. We may have used two microphones each on the kick & snare drum. In this situation, we might have each track focus on what the others lack. If we used a sub microphone and a regular dynamic on the kick drum, the sub might have great low end, and terrible highs. The dynamic might have decent lows, and strong mids & highs. In this case, we can focus our sub track on the fat bottom end of the kick: use a low-pass filter to remove the highs (and bleed from the rest of the kit) on the sub, and blend these lows into the mix. The dynamic microphone’s track may be EQed to focus on the kick drum’s attack and punch in the mix. Individually, these tracks may not sound good at all. When blended together, these two signals should produce a massive, full-range kick sound. For the rest of the drum set, our overhead microphones usually provide most of the overall drum sound in our recordings. With this in mind, we can use our individual snare, tom, and cymbal microphones to focus on each drum’s attack sound, which may be lacking in the overheads. We can then gently blend these other microphones under the main overhead signal. One signal always supports what the other one lacks.

Dynamics: De-Essing

Vocals can be problematic. In general, we want to preserve a singer’s bright overtones, but the Sibilance can be too harsh. Any time the singer uses a sharp, hard consonant sound like “S” as in the word Sibilance, “T” as in Time, “K” or “C” as in Car, and so on, these frequencies might be too loud. To counteract this, we can use a De-Esser plug-in. Whenever one of our troublesome sibilant frequencies gets past a certain loudness (or Threshold), the de-esser will reduce the signal’s level by an adjustable amount. Pro Tools comes standard with a plug-in called Dyn3 De-Esser, available under the Dynamics plug-in category. Near the bottom of the plug-in, the Frequency knob controls the frequency we want to target. The Range knob determines how much the signal will be reduced. Under the Options section, HF Only will suppress just the high frequencies (anything above the Frequency knob) instead of the overall signal. The Listen option lets us hear just the frequencies that are currently being affected. Use this to fine-tune the de-esser’s settings.

De-essers are great on vocals. Since we use them to correct a problematic signal, they usually work best when placed in front of an EQ plug-in (one of the inserts above the EQ) in the signal chain. This lets us focus on enhancing the sound of a smoother vocal with our EQ. Later on, we may even decide to place a de-esser in front of our reverb track in order to control some unwanted sibilance.

Dynamics: Compressors & Limiters

A Compressor plug-in is used to reduce (compress) the dynamic range (the contrast between soft and loud volumes) of a signal. Overall, it can make the quietest parts of a track louder, and the loudest parts quieter. Pro Tools comes with the Dyn3 Compressor/Limiter plug-in, among others. Compressors have a Gain knob that controls the level of the incoming signal. This part of the compressor is used to raise the overall volume of a track. In other words, this gain function makes the quiet parts louder. The rest of the compressor is designed to suppress the louder parts of the track. Whenever a track gets louder than the compressor’s Threshold setting (measured in Decibels, dB), the compressor kicks in, pushing back against the signal. How hard the compressor pushes back depends on the Ratio we use. For example, with a 2:1 ratio, when the signal peaks at 2 dB louder than the threshold, the compressor will only allow the signal to go 1 dB louder. If the signal goes 4 dB over, the compressor only allows it to get 2 dB louder. A Gain Reduction (GR) meter shows how much signal is being reduced during this process. A Limiter does the exact same thing as a compressor, but limiters typically operate on a much larger ratio: 10:1, 20:1, or ∞:1. While a compressor will still allow a signal to peak above the threshold, a limiter with a high compression ratio will usually prevent the signal from going too far overboard. We typically use this during the mastering phase to prevent any potential clipping.
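The ratio arithmetic from the paragraph above can be written out directly. This is an idealized hard-knee model (no attack, release, or knee smoothing), and the names and levels are illustrative:

```python
def compress_db(level_db, threshold_db, ratio):
    """Output level of an ideal hard-knee compressor, all values in dB."""
    if level_db <= threshold_db:
        return level_db              # below the threshold: untouched
    over = level_db - threshold_db   # how far the peak exceeds the threshold
    return threshold_db + over / ratio

# 2:1 ratio, -10 dB threshold: a peak 4 dB over the line comes out only 2 dB over.
print(compress_db(-6.0, -10.0, 2.0))   # → -8.0
print(compress_db(-6.0, -10.0, 20.0))  # ≈ -9.8 (limiter-style ratio)
print(compress_db(-14.0, -10.0, 2.0))  # → -14.0 (below threshold, unchanged)
```

Note how the 20:1 "limiter" barely lets the peak past the threshold at all, which is exactly the behavior described above.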

The Dyn3 Compressor/Limiter has a graphic display. The orange vertical line represents the Threshold. The white line to the left represents the level of the incoming (uncompressed) signal, and the line to the right represents the compressed signal. The Ratio determines the slope of the compressed line. The point where the threshold and ratio intersect (the orange & white lines) is called the Knee, because it looks like a bent knee. This Knee curve lets us choose between having a hard transition from the uncompressed to the compressed signal (a “Hard Knee”), a gradual transition (“Soft Knee”), or somewhere in between.

We can adjust how quickly our compressor reacts to a loud signal. A compressor’s Attack time affects how quickly the compressor will activate whenever a signal peaks above the threshold. The Release time determines when the compressor stops pushing back. If these settings aren’t adjusted properly, we may hear the compressor “pump” when it turns on and off. Normally, we don’t want that to happen. A typical compressor should make the track sound smooth and relatively even. The compressor may make the track sound thicker and fuller when the signal is compressed. However, too much compression will lead to Saturation and distortion. Extreme compression is fatiguing to our ears, and it will ruin the dynamic contrast in a song. The loud parts won’t sound loud and exciting when there aren’t any softer parts to create a contrast.

Special Compression Techniques

Side-Chain Compression allows us to use a signal from one track to compress the signal on another track. For example, if the bass guitar is drowning out the kick drum, we can use the kick drum to trigger a compressor on our bass track. The compressor will cause the bass guitar to get softer for a moment whenever the kick drum is struck. To do this, activate a Send on the kick track (or whatever track we want to trigger the compressor), assign it to any unused Bus (e.g. Bus 1), and raise the send fader to 0. Activate the compressor on the bass track, and look for the skeleton Key icon with the “no key input” dropdown menu: this is the key input selector. In this menu, choose the same Bus we selected earlier for our kick drum signal. In the compressor’s Side-Chain controls, select the small Key button. The compressor will now use the incoming kick drum signal to compress the bass guitar. This technique is commonly used in dance music: a kick drum will compress a synthesizer track, but the compressor’s exaggerated settings will cause the synth track to “pump” in time with the beat.

Parallel Compression blends a heavily compressed signal with a lightly compressed (or uncompressed) signal. This can have the effect of making a track sound thicker without adding excessive volume, or ruining the dynamic range of the track. To do this, we can duplicate the track, or split our track’s signal onto a separate Aux track. We can then heavily compress the second track, and gradually blend it alongside our original.
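As a toy model, parallel compression is just a weighted sum of the two signals. The function, sample values, and blend amount below are illustrative only; a real mix does this on a fader, not in code:

```python
def parallel_blend(dry, squashed, amount):
    """Sum the untouched signal with a scaled-down, heavily compressed copy."""
    return [d + amount * s for d, s in zip(dry, squashed)]

# Blend the crushed copy in at half level under the original.
thick = parallel_blend([0.5, -0.2], [0.5, -0.25], 0.5)
print(thick)
```

Because the dry signal passes through untouched, the track keeps its transients and dynamics while the compressed copy fills in underneath.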

Dynamics: Expanders & Noise Gates

Compressors restrict the dynamic range of a track; Expanders do the opposite. An Expander has controls that look and function just like a compressor. However, instead of limiting signals that peak above the threshold, the expander will raise them. We can use an expander to exaggerate the dynamic contrast in a track. A more extreme form of an expander is called a Noise Gate. A noise gate will suppress a signal until it gets louder than the threshold. When the signal is loud enough, the gate swings open for a set amount of time before it closes again. We can use noise gates to suppress the background noise in a track, similar to how we used Strip Silence in the editing phase. Imagine a drum set. The snare microphone may capture a lot of leakage from the kick drum, or the rest of the kit. If the snare was recorded properly, the snare’s signal will be strong, and the leakage from the kick drum will be softer. To remove the kick drum’s leakage, we could try to EQ the low frequencies away, but we could also use a noise gate, with the threshold set higher than the kick drum’s leakage, but softer than the snare drum’s level. Whenever the snare is struck, the gate will open, and we will hear the snare sound with minimal bleed from the kick drum. We can adjust the Attack, Release, and Hold settings to keep the gate open until the snare is done ringing. We could use it to cut off the snare’s ring too.
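The gate’s core decision is a simple threshold test on each sample. The sketch below ignores the Attack, Hold, and Release settings a real gate needs to avoid chattering open and closed; the names and numbers are made up:

```python
def noise_gate(samples, threshold):
    """Hard gate: pass samples at or above the threshold amplitude, mute the rest."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

# Loud snare hits pass through; quiet kick-drum leakage is silenced.
print(noise_gate([0.9, 0.05, -0.8, 0.02], 0.1))  # → [0.9, 0.0, -0.8, 0.0]
```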

The Channel Strip plug-in combines an Equalizer, Compressor/Limiter, and Expander/Gate into one plug-in. Each module within the Channel Strip functions just like their stand-alone versions.

Reverb

When we hear someone speak or play an instrument in a room, the sound radiates out from the source. This means the sound travels straight to our ears (or microphone) as the original “dry” sound of the instrument. That same soundwave also hits the walls, and bounces back around the room: Reverberation. Eventually, this “wet” reverberation reaches our ears as well. The size, shape, and surfaces in the room will affect the sound of this reverberation. When we record an instrument with closely positioned microphones, we capture this “dry” signal, while minimizing the “wet” reverberation. A Reverb plug-in attempts to add the sound of a room back into the mix. Because we record with closely positioned microphones in isolation, we can blend in as much or as little reverb as we want. We can even change the type of room reflections we use, from a small room, to a long hall, to a massive church.

Mixing With Reverb

We usually use individual EQ & compressor plug-ins on each track, since every track requires a unique setting. We may be tempted to add a reverb plug-in onto each track, but this can be inefficient. Whether we’re mixing a dozen tracks or several hundred, adding extra plug-ins may take up more of our computer’s processing power, and some fancier reverb plug-ins can require a lot of power. Ultimately, we want the band to sound like they’re playing in the same room, and we can do this with one reverb plug-in.

First, create one Stereo Aux Input track, and call it “Reverb.” Next, activate one of the track’s Inserts, and select a Reverb plug-in, like D-Verb. Assign the reverb track’s Input to any unused Stereo Bus (e.g. Bus 1 & 2: any busses that aren’t currently being used). The reverb track can send its output to the main mix. Next, activate a Send on every track in the session (except for the reverb track), and route this send to the same stereo Bus we used earlier. DO NOT send the reverb plug-in back to itself: this can create a Feedback Loop. To blend some reverb into the mix, simply raise the Send Faders on the tracks that require some reverb.

Get To Work

These aren’t the only plug-ins we will use, but they’re enough to get us started. We will cover more specialized plug-ins, and some advanced techniques in the next lesson. At this point, our focus is on shaping the tone of our tracks. For now, equalize, compress, expand, or gate the tracks that need it. Mixing can be mostly subjective. What sounds like a good tone for one style of music may not be suited for a different one. Individual instruments may sound great when they’re soloed, but they could sound terrible in the mix. Try to find a nice balance that complements the other tracks. Remember to take a break, rest your ears, and check your work the next day. Just like editing, mixing can and will take some time. Get to work.

Music 265B Week 09 Editing, Part 3 (Tue, 27 Oct 2015)

PDF version available HERE

So far, we have used the Smart Tool to cut, drag, splice, and assemble a good master performance. We used Strip Silence to cut down on unwanted leakage, and Beat Detective to clean up our performance’s rhythmic accuracy. Elastic Audio allowed us to stretch and warp our audio to fix up any additional mistakes. But what about tuning and pitch? A musician could have accidentally played the wrong note on the right rhythm, or the singer could be out of tune. We can attempt to fix some of these mistakes through an editing process known as Pitch Correction: altering the intonation on any given track.

Pro Tools has some built-in functions that let us alter a track’s pitch, but for inexperienced editors and non-musicians, identifying a “bad” note can be fairly difficult and subjective. Because of this, we will also look at a few Third-Party Plug-Ins (plug-ins that do not come standard with Pro Tools: they must be purchased separately) that make this task easier. That being said, this is where a musician’s theory, harmony, and ear-training skills come in handy.

Bad Notes

Just as we had to prepare our clips for Beat Detective, we have to do a bit of preparation for pitch correction. First, we need to identify the “bad” notes. They may be rhythmically on the grid, but the note may sound dissonant or out of place against the other members of the band. We can say the note is wrong if the player accidentally played a different pitch – they struck the wrong note, fretted the wrong string, or sang the wrong pitch in the musical scale even though the part was rhythmically accurate. For example, if a piece is written in the key of C Major, the note C# is probably a wrong note. The interval distance between the notes C & C# is called a Semitone, or half-step. In pitch correction, Coarse adjustments are usually measured in semitones.

Alternatively, the musician may have played the right note on the instrument, but the pitch may have been slightly too high (sharp) or slightly too low (flat). In pitch correction, these smaller Fine adjustments are called Cents. In western music, there are 100 cents in a semitone, and 12 semitones in an octave.
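These units convert to frequency ratios with simple formulas: a shift of n semitones multiplies frequency by 2^(n/12), and a cent is 1/100 of a semitone. A small sketch (the function names are mine, not from any plug-in):

```python
import math

def semitone_ratio(semitones):
    """Frequency ratio for a pitch shift given in semitones (100 cents each)."""
    return 2 ** (semitones / 12.0)

def cents_between(f1_hz, f2_hz):
    """Fine pitch distance between two frequencies, in cents."""
    return 1200.0 * math.log2(f2_hz / f1_hz)

print(round(semitone_ratio(12), 3))  # → 2.0: an octave doubles the frequency
print(round(cents_between(440.0, 440.0 * semitone_ratio(1))))  # → 100
```

This is why a note that is only a few cents sharp is hard to hear, while a full semitone is unmistakable: 100 cents is a roughly 6% change in frequency.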

In either case, we need to identify where the trouble spots are, and determine if these bad notes can be fixed. Vocals can be especially problematic. Keyboards, guitars, brass, and woodwind instruments may have poor intonation from time to time, but they usually have a specific key, valve, fret, or string assigned to specific pitches. Because of this, they are more likely to stay on a consistent pitch. Vocalists do not have that, so they are prone to drifting around, and sliding from note to note – if they even land on the right note in the first place.

We may run into a few problems. Pitch correction software can easily identify and edit Monophonic material; that is, any instrument or sound source that produces one note at a time, like a saxophone or a single human voice. However, most pitch correction software can’t understand Polyphonic material; that is, any sound containing more than one pitch or noise. This includes everything from an instrument playing chords, to another instrument’s Leakage into our track. If we don’t have proper isolation, we may not be able to correct a track’s pitch without causing other problems. Let’s say we have two recorded instruments, with one leaking into the other’s track. If one of the instruments is in tune, while the other is a half-step out of tune, altering the pitch on the out of tune track will also alter the leakage on the same track. Because of this, we may have to search through our other takes in order to find a clip of that instrument playing the correct pitch. In that case, we can just copy and paste over the bad notes. Otherwise, we may just have to find a clip that is “close enough” to the right pitch. If we have an isolated monophonic track (no chords and no leakage), we can easily correct the pitch, or alter the track to create new harmonies.

Editing With Elastic Pitch

Just as Elastic Audio can warp and stretch the rhythmic timing of an audio clip, Elastic Pitch can non-destructively alter the intonation of an audio clip. To activate it, set the track’s Elastic Audio Plug-In Selector to Polyphonic mode – it’s on the edit window, beneath the track’s name. Next, highlight and separate the note or notes we want to correct. Right click on the note, and select Elastic Properties. When our clip is selected, we can alter the intonation with the Pitch Shift settings in this Elastic Properties window – we can ignore the other settings for now. Under Pitch Shift, we can raise or lower the pitch in Semitones or Cents. Enter a value, and the clip will change pitch. How much should we adjust it? That depends. We don’t have any visual representation of the pitch, so you’ll have to rely on your ears for this method.

Harmonizing

Let’s assume our track is in tune, but we want to alter the melody or create a new harmony. For example, imagine an instrument playing a Major scale: “Do Re Mi Fa So La Ti Do.” With Elastic Pitch or any other pitch correction software, we could change this from Major to a Minor scale fairly easily. Separate and select all of the “Mi” notes, and lower them by 1 Semitone with Elastic Pitch. The scale is now in the Melodic Minor form. Next, lower the “La” notes by 1 semitone – we are now in Harmonic Minor. Lastly, lower the “Ti” by 1 semitone. We are now in Natural Minor. Like Elastic Audio, Elastic Pitch and other forms of pitch correction sound best when they are used in small increments. Extreme edits can have unnatural sounding artifacts.
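The major-to-minor walkthrough above can be checked numerically by treating the scale as semitone offsets from the root. A sketch with illustrative names:

```python
# "Do Re Mi Fa So La Ti Do" as semitone offsets from the root note.
MAJOR = [0, 2, 4, 5, 7, 9, 11, 12]

def lower_degree(scale, index, semitones=1):
    """Return a copy of the scale with one degree lowered."""
    out = list(scale)
    out[index] -= semitones
    return out

melodic  = lower_degree(MAJOR, 2)     # flatten Mi (ascending melodic minor)
harmonic = lower_degree(melodic, 5)   # ...then La (harmonic minor)
natural  = lower_degree(harmonic, 6)  # ...then Ti (natural minor)
print(natural)  # → [0, 2, 3, 5, 7, 8, 10, 12], the natural minor scale
```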

Audio-Suite: Pitch

We can alter pitches in a few other ways. Elastic Audio & Elastic Pitch are non-destructive editing tools: they process our audio clips in real time, leaving the original clip unchanged, so if we make a mistake, we can undo it. Audio-Suite plug-ins, on the other hand, let us edit a clip and render the result into a new audio file. Let’s go back to our original clip of the Major scale. Separate the notes into individual clips. Now, instead of editing the notes with Elastic Pitch, select the note and choose Audio-Suite > Pitch Shift > Pitch Shift (or Pitch II) from the dropdown menus. Once again, make our adjustments: select the “Mi” note, and lower the Coarse adjustment by 1 Semitone in the Audio-Suite plug-in window. We can preview this change by clicking on the Speaker icon (called Preview Processing) at the bottom of the Audio-Suite window. If we like the change, press the Render button to turn the highlighted section into a new Audio File.

Mix Window: Plug-Ins

Audio-Suite Plug-Ins and changes made in the Edit Window will only affect specific sections of the track: we may only want to change a few notes, or change the tone of a specific clip without affecting the entire track. However, we may need to affect the entire track from time to time. In that case, we can use pitch correction plug-ins on the track’s Inserts in the Mix Window. Switch over to the Mix Window (shortcut ⌘=). Click on one of the track’s Inserts, and use the dropdown menu to locate our pitch shift plug-ins. We can use the same Pitch Shift plug-ins here, but shifting the adjustments up or down by a few semitones or cents will Transpose the entire track’s pitch up or down.

Third-Party Plug-Ins: AutoTune

We can easily re-harmonize a track with the standard plug-ins and Elastic Pitch functions in Pro Tools, but fine-tuning our out-of-tune tracks can be a challenge. Because of this, we often turn to Third-Party Plug-Ins (plug-ins that do not come standard with Pro Tools) for some of our detailed work. Antares AutoTune is a common one, known in the industry for its “T-Pain” vocal sound. This plug-in analyzes the incoming signal, and retunes the pitch to the nearest note in the musical scale. Because of this, once we dial in the right settings, we can usually let the plug-in do the rest of the work for us. Let’s take a look at some of these settings.

Input Type refers to the kind of sound source we’re dealing with: voices come in different registers from the high-pitched female Soprano & Alto to the lower male Tenor & Bass voices. There are options for standard Instruments & Bass Instruments as well. Select the appropriate Input Type for each track.

The next set of options requires a bit of music theory. We need to adjust the Key and Scale options to fit our song. For example, a piece could be written in C Major, E Harmonic Minor, or a more exotic scale. If the piece is not strictly Diatonic (only using notes in the same key/scale), then it may be easier to set our scale to Chromatic, which uses all 12 notes. Alternatively, we could program our AutoTune plug-in to Remove or Bypass specific notes in the scale.
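Conceptually, the Key/Scale (and Remove/Bypass) settings just define which notes are legal targets: the plug-in then retunes each incoming pitch to the nearest allowed note. A rough model (not Antares’ actual algorithm; the MIDI-note conversion here is standard math) might look like this:

```python
import math

A4 = 440.0  # reference pitch

def freq_to_midi(f):
    return 69 + 12 * math.log2(f / A4)

def midi_to_freq(m):
    return A4 * 2 ** ((m - 69) / 12)

def snap_to_scale(freq, allowed_pitch_classes):
    """Retune a frequency to the nearest note whose pitch class is allowed."""
    m = freq_to_midi(freq)
    candidates = [n for n in range(int(m) - 6, int(m) + 7)
                  if n % 12 in allowed_pitch_classes]
    target = min(candidates, key=lambda n: abs(n - m))
    return midi_to_freq(target)

C_MAJOR   = {0, 2, 4, 5, 7, 9, 11}   # C D E F G A B
CHROMATIC = set(range(12))           # all 12 notes are legal targets

# A slightly flat E4 (~327 Hz) snaps up to E4 (~329.6 Hz) in C Major:
print(round(snap_to_scale(327.0, C_MAJOR), 1))  # 329.6
```

With CHROMATIC, every note is a legal target, which is why the Chromatic setting is safer for non-diatonic material: the plug-in will never drag a deliberate accidental to a different scale degree.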

We have a few pitch correction controls that will drastically affect our processed audio. Retune Speed adjusts the time the plug-in takes to correct the pitch. A fast speed provides more of the plug-in’s characteristic robotic and hard pitch correction, while a slower speed sounds more natural. Vibrato is the natural rapid pitch variation that singers use. We can recreate a vibrato effect in AutoTune, use the singer’s vibrato, or ignore it entirely.
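One way to picture Retune Speed is as a smoothing coefficient: each analysis frame, the corrected pitch moves some fraction of the way toward the target note. This is only a conceptual model (Antares does not publish its algorithm), but it shows why fast settings sound hard and robotic while slow ones preserve a natural drift:

```python
def retune_path(start_hz, target_hz, speed, frames):
    """Trace a corrected pitch over time for a given retune speed (0-1)."""
    path, p = [], start_hz
    for _ in range(frames):
        p += (target_hz - p) * speed  # move a fraction toward the target
        path.append(round(p, 1))
    return path

# Fast retune jumps straight to the note (the hard "T-Pain" sound):
print(retune_path(430.0, 440.0, speed=1.0, frames=3))  # [440.0, 440.0, 440.0]
# Slow retune glides toward it, keeping some of the original pitch motion:
print(retune_path(430.0, 440.0, speed=0.3, frames=3))  # [433.0, 435.1, 436.6]
```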

This “set and forget” method is ideal when our tracks are already “close enough” to the right pitch. For sloppy parts and inaccurate singers, we need to use more extreme methods.

Third-Party Plug-Ins: Melodyne

Celemony’s Melodyne is one of the best pitch correction programs available today. Unlike most other pitch correction software, it is capable of analyzing and editing Polyphonic material. Before we use Melodyne, finish all of your edits in the Edit Window: we need to finish chopping up, moving, and fading our clips before we start using Melodyne. To use it, activate the Melodyne plug-in on the track’s first Insert. In the Edit Window, highlight the entire track, from the first clip to the last. Next, open the Melodyne plug-in and adjust our settings. Just like Elastic Audio, Melodyne uses a few different algorithms, depending on what kind of editing we plan on doing. In the Melodyne plug-in, click the Algorithm menu option, and select Melodic. Next, pick the Key & Scale type by right clicking on the Piano-Roll: the vertical pitches on the side of the editor window. Melodyne may try to analyze this by itself during the next step, but it may be incorrect depending on how badly out of tune our track is: always choose the key & scale type ahead of time when you can.

Next, press the Transfer button. When we press play, our track will now be recorded into Melodyne. This means the track will now play back from Melodyne as well: if we make any more edits in the Edit Window, they won’t be heard until we transfer them into Melodyne again. Once the track has finished playing, Melodyne will analyze the recorded audio, and organize the different pitches onto the lanes in the piano-roll. When we’re done transferring, we should see the Pitch Centers (the colorful waveforms centered around a specific note), and the Pitch Drift (the wavy line drawn through the notes).

This next task requires some music theory skills. Melodyne will analyze and display what it thinks it heard. We need to play back the track in Melodyne and scan through to check for any analysis mistakes. Let’s say our piece is written in the key of C Major again. If the sloppy singer sang Do & Re (the notes C & D) as two slurred 8th notes, Melodyne might incorrectly analyze this as one quarter note centered on the note C# or Db: a semitone between C & D. We will need to use Melodyne’s Note Separation Tool (the line with two arrows) to break up this note. How do we know when we have to do this? You’re a musician; use your eyes & ears.

As far as pitch is concerned, Melodyne will display the note’s current pitch center as the colored waveform. Melodyne will display where it thinks the correct pitch should be as a shaded box. We can alter the pitch in a few ways. First, click and drag to select a musical phrase: work in small sections by selecting a few notes at a time. Next, click the Correct Pitch button. A menu will appear with two sliders. Correct Pitch Center will drag the waveforms toward the shaded areas: 0% will not change anything, while 100% will center the pitch on the “correct” lane. Correct Pitch Drift will center the note’s vibrato (the wavy line) on the note. Experiment with the different settings until you find an option that works. We can undo these edits by pressing the Undo button in Melodyne (the counterclockwise arrow icon).

Alternatively, we can edit these pitches manually with the Pitch Tool (next to the main pointer tool – right click on the Pitch Tool for more options). With the Pitch Tool, simply Double Click to center the pitch on the nearest lane, or Click & Drag to move the centered pitch to a different note. Holding down the Option button while dragging will let you adjust the pitch in cents instead of semitones.

Right clicking on the Pitch Tool reveals a few more options. The Modulation Tool alters the vibrato on any given note. Double clicking on a note with the Modulation Tool will eliminate the vibrato. Clicking and dragging up will widen the vibrato, while dragging down will reduce it.

The Pitch Drift tool corrects bending pitches: if the note starts flat and ends sharp, the Pitch Drift tool will center and even out the pitch’s drift without affecting the vibrato.

If some of the notes end up sounding too processed or fake when we repitch them, the Formant Tool might be able to help correct some of this. If the note is pitched up too high, click and drag the formant bar down. If the note is pitched too low, drag the bar up.

The Amplitude Tool will raise or lower the volume of individual notes. Double click to mute, drag up to raise the volume, and drag down to lower it.

The Timing Tool works like Elastic Audio. Double click to quantize the note to the nearest grid marker, or drag it forward or backward in time to stretch & warp the note’s timing.

Rendering

When Melodyne transfers and edits our audio clips, it creates a new temporary file somewhere on the system’s hard drive. If we take our session home or to a different studio, this file may not come along for the ride. To make a permanent copy of this newly tuned track, we can simply record the output back into Pro Tools on a new track. If we are tuning a mono track, create a new mono audio track. Disable any extra plug-ins and sends on the Melodyne track: leave Melodyne active. Next, assign our Melodyne track’s output to an empty bus (one bus for mono tracks, two for stereo). Assign our new track’s input to that same bus, and arm the new track. Arm the transport and record until everything has been recorded onto the new track. After that, we can deactivate & hide the old Melodyne track. This way, we have our original content, and our newly tuned track safely archived in our session.

Recap

That brings us to the end of the editing phase. So far, we learned how to…

  1. Select, separate, slide, copy, cut, paste, and create fades.
  2. Make a composite “master” performance out of all of our recorded material.
  3. Use Strip Silence to clean up our noisy tracks.
  4. Use Beat Detective to quantize our audio.
  5. Use Elastic Audio to warp our audio.
  6. Use Elastic Pitch & Audio-Suite plug-ins to harmonize our tracks.
  7. Use AutoTune & Melodyne to fix our tracks’ intonation.
  8. Remove the unused material from our sessions.

When all of these editing tasks are done, we can move on to mixing.

Music 265B Week 08 Editing, Part 2 (Tue, 20 Oct 2015)

PDF version available HERE

Remember, during the editing phase, we try to accomplish three goals.

  1. Assemble the best performance out of all the recorded material.
  2. Fix any mistakes: correct pitch and adjust rhythmic accuracy as needed.
  3. Clean up, polish, and prepare our tracks for the mixing phase.

By now, we should be able to accomplish that first goal using the skills we developed in the previous lesson. Those are:

  1. Analyzing, selecting, and separating clips.
  2. Cutting, copying, and pasting clips into different Playlists.
  3. Moving, sliding, and nudging clips forward or backward in time.
  4. Creating fades: fade in, fade out, and crossfades.

This part of the process can be long and tedious, but it needs to be done manually. Pro Tools has some semi-automated editing functions that can do a lot of the tedious jobs for us, but we need to make sure our tracks are free of any obvious mistakes first. The computer doesn’t know if a musician played a note on the wrong beat: it can only determine that there is a transient or peak in the audio waveform near one of the beat markers on the grid. We have to decide if that part is right, wrong, or “close enough for punk rock.” Ultimately, we have to analyze, separate, move, and fade the clip into place. We may need to do this hundreds or thousands of times throughout the session. Let’s look at some of the tools that make this part of the job a little easier.

Strip Silence

Strip Silence is a non-destructive editing command in Pro Tools that functions like a noise gate. It analyzes audio clips, finds the loud peaks in the waveform, and deletes the quieter spaces (silence) in between each peak. Strip Silence is ideal for cleaning up some of the leakage in our noisy drum tracks. For example, in a typical drum recording, we have individual microphones on every part of the kit. Even with ideal microphone placement, our kick track will have a strong signal from the kick drum, with some minor leakage from the snare & toms. Our snare track will have a loud snare signal, with leakage from the rest of the kit, and so on. That leakage can either add unwanted noise to the overall drum sound, or it could be a necessary characteristic part of the drum set’s sound. If we want to remove some of it, Strip Silence is our best tool for the job.

To use Strip Silence, highlight a small section from one of our drum tracks and select Edit > Strip Silence (shortcut ⌘U). When the Strip Silence Window appears, little white boxes might form around the peaks in our audio clip. These boxes represent the parts of the clip we want to keep: we use the sliders on the Strip Silence Window to adjust the boxes. Strip Threshold determines how quiet a sound has to be in order to be considered Silence: the space outside the white box. Going back to our kick drum example, we want to adjust the Strip Threshold until the box includes only the kick drum’s peaks: we want the leakage to stay outside of the box. This can be tricky if the drummer plays with lots of dynamic contrasts: we don’t want the instrument’s softer hits to be considered part of the silence. If this is a problem, separate the softer parts of the drum tracks into smaller sections and start over.

When peaks appear close together, Strip Silence may include two or more peaks in each box. In the end, we want to have one peak per box. To fix this, move the Minimum Strip Duration slider. Sliding it all the way to zero milliseconds may create hundreds of individual boxes inside the “Silence” area, while sliding it all the way up will usually select the entire audio file. Adjust it until we see one box around each of the peaks we want to keep. In the end, we don’t want to chop off the beginning of the drum’s attack sound, and we don’t want to cut away too much of the sound’s decay at the end of the waveform. If needed, we can adjust the Clip Start Pad to move the start of the box, and the Clip End Pad to move the end of the box. Adjust all of the sliders to include the parts of the waveform we want to keep.
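The logic behind those sliders can be sketched as a simple gate over sample amplitudes. This hypothetical Python version keeps regions above the Strip Threshold and absorbs quiet gaps shorter than a minimum duration, mimicking the Minimum Strip Duration slider (Pro Tools’ real detector is more sophisticated):

```python
def strip_silence(samples, threshold, min_gap):
    """Return (start, end) index pairs for regions louder than threshold.

    Quiet gaps shorter than min_gap samples are absorbed into the region,
    like raising the Minimum Strip Duration slider.
    """
    regions, start, quiet = [], None, 0
    for i, s in enumerate(samples):
        if abs(s) >= threshold:
            if start is None:
                start = i
            quiet = 0
        elif start is not None:
            quiet += 1
            if quiet >= min_gap:
                regions.append((start, i - quiet + 1))
                start, quiet = None, 0
    if start is not None:
        regions.append((start, len(samples) - quiet))
    return regions

# Two "drum hits" separated by three quiet samples:
audio = [0.0, 0.9, 0.8, 0.0, 0.0, 0.0, 0.7, 0.6, 0.0]
print(strip_silence(audio, threshold=0.5, min_gap=2))  # [(1, 3), (6, 8)]
print(strip_silence(audio, threshold=0.5, min_gap=5))  # [(1, 8)] -- hits merged
```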

When we’re ready, we have a few options available at the bottom of the Strip Silence Window. The Rename button will let us rename our new clips. Extract will delete the parts of the clip within the white boxes (our peaks). We can use this to hear the “Silence” that we want to delete. If you hit extract, you can undo this by pressing ⌘Z. Separate will break the file up into individual clips: peaks within the white boxes, and silence outside. However, when we use the Strip Silence command, we usually want to use the Strip command: this will turn our peaks into individual clips, while deleting the silence. All of these functions are non-destructive, so we can undo or alter the edits at any time.

After we apply Strip Silence, play the clips back and confirm that we like those edits. There may be some unnatural entrances & cutoffs at the start & end of each clip, but we may be able to correct this with some fades. Rather than manually fade in & out of each clip, we can affect all of these clips at once. Highlight all of the clips, and bring up the Batch Fades menu with ⌘F. Batch Fades use the same custom fade menu found under Edit > Fades > Create. Unlike individual fades, Batch Fades create multiple fades across several clips. In this menu, we can adjust the Shape, Placement, and Operation of each fade, and, most importantly, the fade length, measured in milliseconds. Once you have picked your settings, press OK to create the new batch fades. We now have individual, isolated drum hits that can be dragged around and aligned to the grid, or lined up with another player’s performance.
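The reason a crossfade hides the splice is in the gain math. One common shape is the equal-power crossfade, where the two gain curves always sum to constant power; this sketch shows the idea (generic DSP math, not Pro Tools’ exact fade shapes):

```python
import math

def equal_power_crossfade(n):
    """Gain curves for an n-sample equal-power crossfade.

    At every point fade_out**2 + fade_in**2 == 1, so the overall level
    stays steady across the splice between the two clips.
    """
    fade_out = [math.cos(math.pi / 2 * i / (n - 1)) for i in range(n)]
    fade_in  = [math.sin(math.pi / 2 * i / (n - 1)) for i in range(n)]
    return fade_out, fade_in

out, fin = equal_power_crossfade(5)
print(all(abs(o * o + f * f - 1.0) < 1e-9 for o, f in zip(out, fin)))  # True
```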

Beat Detective

Beat Detective is a powerful editing tool that can quantize our audio & MIDI drum tracks. Because of this, it works best when editing tracks that were recorded to a click. It can line up every peak to the nearest point on our grid. This can either tighten up the drummer’s performance, or make the player’s natural feel sound robotic depending on how you look at it. Remember, Pro Tools can’t tell if the drummer made an obvious mistake. If a drum hit was supposed to be played on the downbeat or on an odd 16th note, we need to manually fix those mistakes first until the part is at least “close enough.” With that in mind, we also need to determine what the drummer is playing: what is the smallest rhythmic value being played? Is the drummer swinging? Is the drummer playing triplets? But how do we figure all of this out? You’re a musician; use your ears.

To start using Beat Detective, highlight a small section across all of our drum tracks. Bring up the Beat Detective window by selecting Event > Beat Detective (shortcut ⌘8 on the number pad). Select the Clip Separation tab under the Operation section of the Beat Detective window. Under Selection, set the Contains dropdown menu to the smallest rhythmic value used in this performance. Does the drummer play 16th notes? If so, then choose 16th notes. What about Triplets? If yes, select the 3 option. If not, leave it unchecked. Once we have our settings, press the Capture Selection button. This will set the Start Bar/Beat & End Bar/Beat values to the current highlighted section. Next, move over to the Detection section (in Clip Separation mode). Click the Analyze button, and select Sub-Beats under the Resolution settings. Gradually raise the Sensitivity slider. As the slider moves toward 100%, lines marking what Pro Tools thinks are our drum hits will appear. A drum hit in one track will draw the line through all of the highlighted tracks. This is where Beat Detective will separate our clips in the next step. If we raise the slider too far, Pro Tools will display some false positives: marking peaks that are not actually drum hits. If the slider is set too low, it won’t select all of the hits. Adjust the slider until only our drum hits are selected. When we’re ready, press the Separate button at the bottom of the window (in Clip Separation mode).

With our clips separated, we can move on to the Clip Conform tab under the Operation section of the Beat Detective window. This Conform option will slide our clips around on the grid. The Strength slider determines how accurate the quantization will be. 100% will lock the clips to the grid, like a drum machine: the drummer’s original feel (or sloppy playing, depending on how you look at it) will be lost, and the tracks may sound like a drum machine played them. Exclude Within will preserve some of the drummer’s original hits if they were “close enough” – adjust the slider to determine how close that should be. The Swing slider will swing the performance using an 8th Note or 16th Note swing. Adjust these settings to your liking, and click the Conform button to move the clips into place. Play the section to confirm that the clips are rhythmically correct. If they are not, Undo this Clip Conform, change the settings, and try again. Even if the new clips are on time, there may still be various clicks, pops, and gaps in between these new edits. To correct this, we use the Edit Smoothing function in the Beat Detective window. It has two options: Fill Gaps will, as the name states, close the gaps between each clip, just like we would with the Trimmer tool. Fill And Crossfade will take this one step further by adding fades & crossfades between each clip. We can adjust the Crossfade Length to our liking as well.
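The Strength and Exclude Within sliders boil down to simple arithmetic on each hit’s position. As a sketch (the millisecond grid here is hypothetical; Beat Detective itself works in bars and beats):

```python
def conform(hit_ms, grid_ms, strength=1.0, exclude_within_ms=0.0):
    """Move a hit toward the nearest grid point.

    strength: 0.0 leaves the hit alone, 1.0 locks it to the grid.
    exclude_within_ms: hits already this close to the grid are kept as-is.
    """
    nearest = round(hit_ms / grid_ms) * grid_ms
    if abs(hit_ms - nearest) <= exclude_within_ms:
        return hit_ms
    return hit_ms + (nearest - hit_ms) * strength

# A hit 30 ms late against a 500 ms (120 BPM quarter-note) grid:
print(conform(1030.0, 500.0, strength=0.5))  # 1015.0 -- half corrected
print(conform(1030.0, 500.0, strength=1.0))  # 1000.0 -- locked to the grid
```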

Once again, listen back to the edited section to make sure everything is ok. At this point, we may still have to make some manual adjustments. Go through and check each crossfade to make sure the hits we wanted to keep weren’t accidentally chopped off or hidden within a fade. We may have to use the Smart Tool to adjust the size and placement of some of the new crossfades. To avoid potential phasing issues, move the crossfades across all of the edited drum tracks together in time.

Tips and Tricks with Beat Detective

The method we just used will affect all of our drum tracks; every hit from every drum will be quantized. On the other hand, let’s assume we want to keep the drummer’s overall groove, but we want to tighten up the performance around the kick & snare hits. Instead of analyzing all of our drum tracks, we can start with just our kick & snare tracks (or any other track for that matter). Following the same procedures, we should select and analyze only those tracks. Highlight, capture, analyze, and adjust the sensitivity until only our kick & snare hits are selected. Next, extend the selection across all of our other drum tracks by Shift-Clicking on them. DO NOT hit Analyze again. Our current kick & snare analysis markers should appear across our other drum tracks. From here, we can continue editing normally: separate, conform, and smooth out the clips. Be sure to double-check the crossfades for any unintentional cuts. If everything worked properly, we should have quantized our kick & snare, and synced the rest of our drum tracks to those edits.

Beat Detective serves some other functions as well. When we Quantize MIDI information, we have the option of locking our notes to an absolute time reference (bars, beats, 8th notes, etc), or quantizing to a set of Groove Templates – timing references found in programs like Logic, Cubase, and others. We can use these templates to quantize our notes to one of the preset grooves, or we can use our drum tracks to create a template of our own. In Beat Detective, select the Groove Template Extraction option. Follow the normal Capture & Analysis methods described above, but instead of separating and conforming our drum tracks, use the Extract option to create a new template. In the Extract Groove Template window, we can leave a comment describing this new template. We can save it in one of two ways: we can save this template permanently with the Save to Disk option, or make a temporary version with Save to Groove Clipboard. With this feature, we can quantize any MIDI data in our session to this new customized drum groove. To use this template, highlight some MIDI notes and bring up the Event Operations > Quantize window with the shortcut Option 0. Under the Quantize Grid option, we can find our new template in the dropdown menu.

Beat Detective can perform one last function with the Bar/Beat Marker Generation functions. To make our lives easier when it comes to editing, we usually have the musicians record with a click track. This ensures that all of our tracks will be consistent from take to take, and they will easily sync to our tempo and grid settings. However, we still run into situations where the band did not or could not record to a click, but we are expected to fix everything as though they did, or record MIDI data on top of these inconsistent tracks. With the Bar/Beat Marker Generation functions in the Beat Detective window, we can make our session’s Tempo ruler conform to our audio tracks. Open up Beat Detective, select Bar/Beat Marker Generation, Capture the Selection, Analyze, adjust the Sensitivity, and click Generate. A Realign Session warning will appear, giving us an option to move any tick based tracks. We will discuss the difference in a moment. In this case, we will use the Preserve Sample Position (Don’t Move) option. With this selected, our Tempo Ruler will conform to match the tempo shifts associated with this section. It may change from note to note, adjusting to fit the players’ inconsistencies.

Ticks & Samples

Tracks in Pro Tools have two kinds of Timebases: Sample based, and Tick based. Sample based positions are absolute time references, associated with the minutes & seconds ruler. For example, one minute into the session will always occur 60 seconds away from the start of the session. Tick based positions are relative positions, associated with the bars, beats & tempo rulers. For example, the downbeat of bar 3 can occur 4 seconds into the session, or 14, depending on the tempo. Audio tracks are usually Sample based, whereas MIDI & Instrument tracks are usually Tick based by default. We can switch between these timebases with the Timebase Selector, located beneath the track name on the edit window: Sample based tracks have a little blue clock icon, and Tick based tracks have a little green metronome icon. Click on the Timebase Selector and pick a timebase. We will use this function extensively in the next steps.
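The bar-3 example above is easy to verify. Pro Tools uses 960 ticks per quarter note; converting a tick-based position to absolute (sample-style) time just requires the tempo:

```python
TICKS_PER_QUARTER = 960  # Pro Tools' MIDI tick resolution

def ticks_to_seconds(ticks, bpm):
    """Convert a tick-based position to absolute time in seconds."""
    beats = ticks / TICKS_PER_QUARTER
    return beats * 60.0 / bpm

# The downbeat of bar 3 in 4/4 sits 8 beats (7680 ticks) from the start:
print(ticks_to_seconds(8 * 960, 120))  # 4.0 seconds at 120 BPM
print(ticks_to_seconds(8 * 960, 60))   # 8.0 seconds at 60 BPM
```

Same bar and beat, different clock time: that is the whole distinction between the two timebases.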

Elastic Audio

Elastic Audio is a powerful editing function that lets us speed up, slow down, stretch, and warp our audio tracks in real-time without changing pitch. We saw some similar effects with our Trimmer Tool’s TCE (Time Compression Expansion) mode, but Elastic Audio lets us do more. We can easily speed up or slow down the entire session by changing the manual tempo. First, select all of our tracks, and change their Timebase to Ticks: click on the Timebase Selector icon, and select the green metronome icon. Next to the Timebase Selector is the Elastic Audio Plug-In Selector. Click on it and select the setting that is appropriate for the current track: Rhythmic for drum tracks, Monophonic for instruments that only produce one note at a time (vocals, woodwinds, and so on), Polyphonic for instruments that can produce more than one note at a time (keyboards, guitars, and so on). The Varispeed option will behave like a tape machine: when the clip gets stretched to play slower, the pitch will go down. When it is shrunk to play faster, the pitch will get higher. We won’t use Varispeed for any of these edits. For now, stick to Rhythmic, Monophonic, or Polyphonic, depending on the type of instrument we’re editing.

Locate the MIDI Controls on the edit window, next to the Transport. If they aren’t visible, select View > Transport > MIDI Controls, and make sure MIDI controls is checked. Next, disable the Conductor Track – the button under the MIDI controls that looks like a conductor with raised arms. The Tempo Ruler should now say “Manual Tempo.” We can manually set a new tempo for the session: change the Tempo in the MIDI Controls section. Since our tracks were converted to a Tick based reference, they will stretch to match the new tempo. This is a quick way to change the overall tempo, but this can create some problems. If our tracks have a lot of leakage, then this stretching could cause some serious phasing issues between tracks. It may be necessary to use Strip Silence and other methods to mitigate some of these problems.
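The amount of stretching involved is just the ratio of the two tempos. A quick sketch of what Elastic Audio has to do to each tick-based clip when the manual tempo changes:

```python
def stretch_factor(old_bpm, new_bpm):
    """Length multiplier applied to a tick-based clip when the manual
    tempo changes (Elastic Audio preserves pitch while stretching)."""
    return old_bpm / new_bpm

# Slowing a session from 120 to 100 BPM stretches every clip by 20%:
factor = stretch_factor(120, 100)
print(factor)         # 1.2
print(10.0 * factor)  # a 10-second clip becomes 12.0 seconds
```

The further this factor drifts from 1.0, the harder the algorithm has to work, which is when the warped clips start turning reddish-orange and artifacts creep in.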

Elastic Audio is ideal for small adjustments. If Pro Tools thinks the audio clips are being warped too much, the clips will turn reddish-orange. However, we still need to use our ears. If we hear noticeable artifacts and distortion from this processing, undo it and try a different approach. Elastic Audio’s ideal use is for Warping audio: stretching and sliding individual transients around inside a clip. This is an excellent tool for cleaning up sloppy performances. To warp a clip, change the Track View Selector from Waveform to Warp in the dropdown menu. The clip will turn grey, and small black lines will appear on each transient in the waveform. These are Warp Markers. If needed, we can add additional warp markers by Right Clicking a point with our Selector Tool and selecting Add Warp Marker.

At this point, simply clicking and dragging a warp marker will stretch the whole clip. If we only want to affect one transient/marker, we need to Double-Click on some of the warp markers in order to Lock them in place. Lock down the warp marker in front of and behind the marker that we want to edit. This will ensure that we only warp the one marker in the middle. After that, we can Click & Drag the warp marker we want to alter until it is in line with the beat. Alternatively, we can simply Shift Click on the marker we want to edit: this will lock down the marker before & after the one we want to edit.

For a practical application, let’s assume we’re done editing our drums: after using Beat Detective, the drums are set to the grid, and we’re happy with the drummer’s groove. After all of those edits, our bass track might be a little out of sync with the drums from time to time. Some notes may be on point, but others may be too far ahead of or behind the beat. We can warp the bad notes into place by locking down the good notes and sliding the bad ones to where they belong. We can use a few tricks too. Elastic Audio can be Quantized, just like MIDI. Select that bass track, and activate the Warp track view. Next, bring up the Quantize window with the shortcut Option 0. From here, we can adjust our settings as needed. However, we did use Beat Detective to make that Groove Template earlier in this lesson. Go to the Quantize Grid dropdown menu, and find our groove template near the bottom of the list. Once we are done adjusting our settings, hit the Apply button to quantize the clip. Once again, listen back to the edited track, and make any adjustments as needed.

Finishing the Editing Phase

Using the skills we developed in the last lesson, we assembled the best composite performance out of all of our recorded material. Using the skills we discussed in this lesson, we can fix a lot of the minor mistakes, and adjust the rhythmic accuracy of any given track. To clean up and polish our tracks, we just need to trim away the noise & dead air, and use fades to avoid any unwanted clicks and pops as we transition from clip to clip. Once that is done, we can move on to mixing.

Clearing Clips

When we are completely satisfied with our edits, we can remove the unused material from our session. However, before we do this, be warned: if we made a mistake and need to go back and fix something, we won’t be able to retrieve a file that has been deleted. More often than not, it’s better in the long run to save everything on a hard drive until the entire project is complete. Even then, it’s still safer to keep everything archived on a hard drive. Do not clear or delete your clips unless you know what you’re doing.

Before you do anything, Save this current session. Look for the small dropdown arrow next to the Clip List. If you don’t see the Clip List, select View > Other Displays > Clip List. In the Clip List dropdown menu, choose Select > Unused (or use the shortcut Shift ⌘U) to select all of the clips that aren’t being used in the Edit Window. They will be highlighted in blue in the Clip List. Next, click on the dropdown arrow again and choose Clear (shortcut Shift ⌘B). A menu will pop up, giving us several options. Remove will unlink the clips from this session file: the files will still exist on the hard drive, but they will no longer be associated with this particular session file. Move to Trash will try to move the files into the computer’s Trash folder: you will have to manually empty the trash to delete these. Delete will permanently remove these selected clips from the hard drive: they will be destroyed.

So which one do we pick? Only Delete clips when you are absolutely certain that you will not need to recover any of them. This is a good way to clear up wasted space, but use it carefully. If we plan on sending this session off to someone else for mixing or additional recording, we may want to Clear the unused clips, and then create a new copy of the current session using the File > Save Copy In command. With this option, we can save a copy of this session along with all of the relevant audio & MIDI clips into a new folder. We can then send this copy off to someone else, or delete the original session. Whatever you do, choose carefully: you can’t go back.

]]>
http://alangardina.com/music-265b-week-08-editing-part-2/feed/ 0
Music 265B Week 07 Editing, Part 1 http://alangardina.com/music-265b-week-07/ http://alangardina.com/music-265b-week-07/#respond Tue, 13 Oct 2015 08:44:42 +0000 http://alangardina.com/?p=392 Read more

PDF version available HERE

During the Editing phase, we try to accomplish three goals.

  1. Assemble the best performance out of all of the recorded material.
  2. Fix any mistakes: correct pitch and adjust rhythmic accuracy as needed.
  3. Clean up, polish, and prepare our tracks for the mixing phase.

Let’s take a look at some of the tools & functions we will be using for the next few weeks.

The Tools

The Trim Tool is the first of our three primary tools. It has three separate functions, which can be changed by right clicking on the icon. In Standard mode, we can trim down the ends of our clips, or extend them back out if we accidentally trimmed off too much. This is useful for trimming the “dead air” out of the start and end of each take. TCE (Time Compression Expansion) mode lets us shrink or stretch time by speeding up or slowing down the clip without changing the pitch. This can be useful for making small speed adjustments, but larger adjustments will leave behind Artifacts – audible glitches and distortions caused by the processing. Loop mode lets us copy and repeat a clip. To create a loop, first trim the clip down to an exact length – for example, four bars. Next, select loop mode, then click & drag out the end of the clip. Pro Tools will repeatedly copy and paste that clip in a continuous loop.
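
Conceptually, Loop mode is just repeated copy & paste. A tiny Python sketch (purely illustrative – none of this is Pro Tools code, and the names are made up) shows the idea of tiling a trimmed clip out to a target length:

```python
def loop_clip(clip, target_length):
    """Tile `clip` (a list of samples) until it spans target_length samples."""
    looped = []
    while len(looped) < target_length:
        looped.extend(clip)          # paste another copy of the clip
    return looped[:target_length]    # trim off the final partial repeat

# A 4-sample "bar" looped out to 10 samples:
print(loop_clip([1, 2, 3, 4], 10))  # [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
```

This is also why trimming the clip to an exact musical length first matters: any extra samples at the end get repeated on every pass of the loop.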

The Selector Tool allows us to select a specific point in time, or highlight large sections. We can make a break or separation in the clips by clicking at the desired point and using the shortcuts B or ⌘E. To highlight, we can either click and drag, or click on one point, and shift-click somewhere else to highlight everything in between.

The Grabber Tool has several modes. In its default Time mode, we can select a clip to drag it forward or backward in time, and even drag it from one track to another. In Separation mode, we can highlight a region with the Selector tool, then cut & paste it somewhere else by clicking & dragging with the Separation grabber tool. Alternatively, we could just select, cut, and paste with the selector tool. In Object mode, we can select specific clips while ignoring others by shift-clicking on them. For example, imagine we have three clips on a track. Normally, if we click on the first, then shift-click on the third, Pro Tools will highlight all three clips. In Object mode, we can click the first and shift-click the third clip, ignoring the second clip. With the first & third clips selected, we can drag those together, leaving the second clip in place.

The Smart Tool, sometimes called the Multi-Tool, is a combination of the Trimmer, Selector, and Grabber tools. We can activate it by clicking on the bar above the three primary tools. The Smart Tool changes functions when we move our mouse to different areas in a clip. If we move the cursor to the ends of the clip, the tool functions like the Trimmer tool. If we move to the bottom half of the clip, the tool functions like the Grabber tool. If we move to the top half of the clip, the tool functions like the Selector tool. The Smart Tool has an extra function: Fades. If we move to the upper left corner, we can click and drag to create a Fade In. The upper right corner allows us to create a Fade Out. If the clip is next to another clip, we can click and drag in either of the bottom corners to create a Cross-Fade. Fades adjust the clip’s playback volume over time. Crossfades are used to hide transitions in between edits. Always fade in, out, and between clips.

The Zoomer Tool looks like a magnifying glass. In its default mode, Normal Zoom, we can click on a clip to zoom in on that area. We can right click on the zoomer tool icon to change to Single Zoom mode. In this mode, we can click to zoom in, but Pro Tools will immediately switch back to whatever tool we had previously selected. This comes in handy for quicker edits.

The Scrubber Tool lets us Scrub a section: we can click and drag to play it back, forward or backward, at different speeds. Drag slowly for slow playback, and quickly for normal speed. This can be useful for zeroing in on specific noises, like pops and attacks.

The Pencil Tool serves a few different purposes. Right click on the pencil icon to see its various modes. The Pencil tool can draw automation, or redraw the audio waveform itself using those shapes. For example, the Pencil tool is frequently used to remove pops and peaks in the waveform. Select the Free-Hand Pencil mode, find a pop or click on one of our audio tracks, and zoom in on that area. It might look like a sharp peak that extends beyond the range of the clip. With the pencil tool, we can redraw this peak by clicking and dragging to connect one wave cycle to the next. This may eliminate the popping sound, but listen for any audible artifacts.

Editing Modes

In the upper left corner of the Edit Window, we can choose between several different editing modes: Slip, Grid, Shuffle, and Spot.

Slip mode allows us to click, drag, and slide our clips around freely within the session. In slip mode, nothing will jump around and lock to a set time.

Grid mode will move things to the nearest time reference on a grid. For example, if we set our session to a 1 bar grid, our clips will jump to the nearest bar when we try to slide them around. We can change the grid’s resolution and time reference with the Grid Value dropdown, next to the transport. Grids can be set to Bars & Beats (note values), Timecode (frames), minutes & seconds, and so on. This normal grid function is sometimes called an Absolute Grid, because it will lock to the nearest absolute grid reference in the session. Clicking the Grid button a second time will activate Relative Grid mode. In Relative Grid, clips can be moved to the nearest grid reference, but they will keep their relative distance from the grid. For example, imagine that a bass player played slightly behind the beat (several milliseconds behind the grid marker). If we chopped up the bass track, but wanted to keep the player’s musical timing, we could move the bass clips in Relative Grid, and still keep that several-millisecond difference.
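
The arithmetic behind the two grid modes is simple enough to sketch in Python (an illustration of the concept only, not Pro Tools code; times here are in arbitrary ticks, with a 480-tick grid standing in for one beat):

```python
def snap_absolute(position, grid):
    """Absolute Grid: jump straight to the nearest grid line."""
    return round(position / grid) * grid

def snap_relative(position, new_position, grid):
    """Relative Grid: move by whole grid units, preserving the clip's
    original offset from the grid (the player's "feel")."""
    offset = position % grid  # e.g. the bassist's 30 ticks behind the beat
    return snap_absolute(new_position - offset, grid) + offset

# A bass note sitting 30 ticks behind the beat at tick 480, dragged near tick 950:
print(snap_absolute(950, 480))       # 960 -- snapped to the bar line; the feel is lost
print(snap_relative(510, 950, 480))  # 990 -- nearest line (960) plus the 30-tick offset
```

In both cases the clip lands on a musically sensible position; the difference is whether the original offset from the grid survives the move.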

Shuffle mode will cause clips to snap to the end of the previous clip. For example, when we highlight and delete a section in slip mode, there will be a gap left between the end of one clip and the start of the next. If we do the same thing in Shuffle mode, the start of the next clip will immediately jump to the end of the previous clip, filling in the gap. This function can be useful for editing dialogue, since we can use it to eliminate long pauses or unwanted words, but use caution when editing music with this mode. When one clip shuffles to the end of another, the result will usually be musically out of sync.
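
The difference is easy to see in a sketch (illustrative Python, not Pro Tools internals), where each clip on a track is a (start, length) pair:

```python
def delete_slip(clips, cut_start, cut_len):
    """Slip: remove any clip starting inside the cut span, leaving a gap."""
    return [c for c in clips if not (cut_start <= c[0] < cut_start + cut_len)]

def delete_shuffle(clips, cut_start, cut_len):
    """Shuffle: remove the span, then slide later clips left to close the gap."""
    out = []
    for start, length in clips:
        if cut_start <= start < cut_start + cut_len:
            continue                  # this clip falls inside the deleted span
        if start >= cut_start + cut_len:
            start -= cut_len          # later clips snap back by the cut length
        out.append((start, length))
    return out

clips = [(0, 4), (4, 4), (8, 4)]
print(delete_slip(clips, 4, 4))     # [(0, 4), (8, 4)] -- gap remains
print(delete_shuffle(clips, 4, 4))  # [(0, 4), (4, 4)] -- gap closed, clip shifted
```

That shifted clip is exactly why Shuffle is risky for music: unless the deleted span was an exact musical length, everything downstream lands off the beat.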

Spot mode allows us to move a clip to a specific time in the session. When we select a clip in Spot mode, the Spot Dialog window appears. We can sync this clip by spotting the start of the clip, the end, or by identifying a Sync Point. For example, if we want to sync a clip from our bass track to the downbeat of measure 3 in a session, we need to find where that note occurs in the clip. Look for the peak in the waveform on the bass track. Use the selector tool and click on that peak. Next, select Clip > Identify Sync Point (shortcut ⌘,). A little green triangle with a line will appear in the clip – this is our sync point. Next, find the downbeat of measure 3, or whatever time we wish to sync to. It may not be exactly on the grid line – we may want the track to be slightly ahead or behind the beat. Write down the desired time. Click on the bass clip again to bring up the Spot Dialog. In the Sync Point field, enter that time, and click OK. The clip will now move itself, aligning the sync point to that time.
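
Behind the Spot Dialog, syncing by a sync point is just subtraction: Pro Tools places the clip so that the sync point lands on the time we entered. A tiny sketch (hypothetical function names, times in seconds):

```python
def spot_by_sync_point(sync_offset, target_time):
    """Return the clip start time that places the sync point at target_time.
    sync_offset is the sync point's distance from the start of the clip."""
    return target_time - sync_offset

# A bass note 0.25 s into the clip, spotted to a downbeat at 8.0 s:
print(spot_by_sync_point(0.25, 8.0))  # 7.75 -- the clip must start here
```

Spotting by the clip's start is the special case where the sync offset is zero.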

Zoom Functions

The zoom functions are located between the tool selectors and the edit mode selectors. Horizontal Zoom lets us zoom in or out in time on the edit window. Press the left arrow to zoom out, and the right arrow to zoom in. Alternatively, we can use the shortcuts R to zoom out, and T to zoom in. The Audio & MIDI Zoom functions control the vertical zoom within our clips. Zooming in will exaggerate the peaks within a waveform, expanding them closer to the top & bottom. Zooming in too far will make the clip look like a solid block of color. Zooming out will shrink the peaks closer toward the center of the clip. These have no effect on the clip’s volume. Adjusting the actual track’s height is done elsewhere. To change track heights in the edit window, right click on the silver bar that separates the track name & meter bridge from the clips & waveforms. The numbered buttons beneath the zoom functions will adjust the edit window to various zoom presets.

Additional Editing Functions

There are several additional editing functions located beneath the tool selectors. Activate them by clicking the icon (blue is active, grey is inactive).

Zoom Toggle (shortcut E) lets us quickly zoom in on a selection with the push of a button.

Tab to Transients lets us use the Tab button to quickly jump from one peak (transient) to another within a track.

Mirrored MIDI Editing lets us change multiple identical MIDI clips (clips with the same name & ID) at the same time. For example, a song’s drum groove may use a single MIDI clip, which gets looped throughout the entire session. With Mirrored MIDI Editing active, any changes made to one clip will affect all of the other identical clips. Enable or disable this option as needed.

Automation Follows Edit will copy any written automation data attached to the clip. We will cover automation in detail during the mixing phase.

Link Timeline and Edit Selection lets us play back and forth between two separate sections. This option can be confusing for most beginners. To understand it, let’s break it down into its two components. Timeline refers to the time rulers along the top of the screen. Edit Selection refers to a selected or highlighted clip, or section in the bottom portion of the edit window. Most of the time, we want to keep these two items linked for playback purposes, but we have the option of playing back one or the other. By default, the Play button will play back the timeline selection. When these two are unlinked, we can press Option [ (left bracket) to play the Edit Selection, or Option ] (right bracket) to play the Timeline Selection. More often than not, we leave this option active.

Link Track and Edit Selection will select and highlight the track names that are currently being edited. For example, if you highlight clips across the drum tracks, Pro Tools will select those tracks as well.

When Insertion Follows Playback is selected, Pro Tools will play like a tape machine. If you start playback from the start of the session and stop one minute into the song, Pro Tools will resume playback at the one minute mark (where it left off) and so on. When Insertion Follows Playback is deactivated, Pro Tools will always play from the selected point. Stopping and replaying will start again from the same first selected point.

Scrolling

Edit Window Scrolling can be found under Options > Edit Window Scrolling. These different modes affect how the Edit Window appears during playback. In Page mode, the Edit Window will scroll over when the playback head reaches the end of the screen. Page is our default view. In Continuous mode, the playback head will stay locked in the center of the screen, and the edit window will scroll past it, like a tape machine.

Time Operations

During the session setup, we experimented with one of the Time Operations, Move Song Start, using it to change the downbeat of bar three into the downbeat of bar one. We may also use the other Time Operations found under the Event > Time Operations menu. During the editing phase, it may be necessary to Insert Time or Cut Time within the session in case the song structure needs to change. For example, the producer may want to insert an 8-bar solo or 2-bar pre-chorus into the song. We can select the bar where this section should start, and insert a few extra bars into the session. This will separate any clips at this point, and move them back a few bars. However, this only affects the current playlist – takes on other playlists won’t move.

Editing, Phase 1 – Compositing

Remember, our first goal during the editing phase is to assemble the best performance out of all of the recorded material. To do this, we grab sections and clips out of each take, and assemble them into one new Master take on a new playlist. We call this process Compositing, or Comping for short. First, save a new copy of the session. Select File > Save As and save a new version of this session with the word “Edit” tagged at the end along with the date. This gives us an original, untouched version, and our edited copy. We can always go back to the original.

Perceptive Listening

During the recording process, the producer may have picked what he or she thought to be the best take. Sometimes, the producer just has the band record multiple takes under the assumption that “We’ll fix it in Pro Tools.” In either case, we need to listen to and analyze each take for sound & performance quality. This means identifying the takes (or sections of a take) that are useable, and which ones are worthless. Even the producer’s favorite take may have rhythmic mistakes, missed notes, or distorted audio from time to time. We need to identify where those problems lie, and if other takes are available, which sections can be cut and spliced together in order to make one solid master performance. Depending on how we recorded these, we may run into some problems. If the band didn’t record to a metronome, then each take will most likely be inconsistent: some may be slightly faster or slower than others. The levels may not match up: the chorus in one take may be louder than the chorus in another, or the transition from one section to another may be different in some way. Fills, accents, and other unique elements may be inconsistent from take to take. Regardless, we still have to sit down and analyze our material.

Compositing

Once we have analyzed all of our material, we can assemble it into our master performance. To start off, make a new Playlist for each track: select the dropdown arrow to the right of the Track Name and make a new Playlist. When we assemble, we will copy and paste all of our edits into this new playlist, leaving the others intact in case we need to go back. We can start editing by selecting the best overall take: select all the clips in that take, Copy them with ⌘C, and Paste them into that track’s Master Playlist with ⌘V. Next, go through the song section by section, and track by track. Find the best intro, verse, chorus, solo, and other sections for each player. Separate each section with the shortcuts ⌘E or B, then copy & paste them into the master playlist. More importantly, find the sections and instrumental performances that fit together. There may be minor mistakes here and there, but we will address those in a moment.

“In the Pocket”

When we talk about musical timing, or playing in sync with other musicians, players often use the phrase “In the Pocket” when someone is playing in the groove, or on time with one another. One instrument defines where the beat is – we don’t hear the conductor or the metronome counting away throughout an album. The drummer is not always in charge of keeping musical time, but we can usually follow their hi-hat. Think of that player’s performance as the grid markers in Pro Tools. The other players can be considered on time, or in the pocket, if they are playing close enough to that beat, within a certain tolerance. They can play slightly ahead, but not too far ahead of the beat, or they can “lay back” and play behind the beat to a certain extent. If they play too early, they are rushing, and if they play too late, they are dragging. We want to make our players sound like they’re playing naturally “in the pocket,” rather than exactly on the beat like a quantized drum machine. We don’t want them to sound sloppy either. Most ensemble hits are meant to be played together, and in some styles, certain parts are meant to play ahead of the beat or lay back.

To fix timing, we can manually slide an entire section forward or backward in time, or we can separate and adjust individual notes. Rather than drag clips around manually, we can use the Nudge command to bump a clip slightly forward or backward in time. First, select a Nudge Value from the dropdown located between the Transport & the Main Counter. We can select a time increment in bars & beats, minutes & seconds, Timecode, or samples. Start with a small value, like 100 Samples under the Sample option. Select the clip we want to move, and press the – or + keys on the number pad to nudge the clip backward or forward by 100 samples. The Nudge function is great for small adjustments, but it may be more efficient to simply click and drag over larger distances.
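
It helps to have a feel for how long a sample-based nudge actually is, which depends on the session’s sample rate. A quick conversion sketch (illustrative Python; the sample rates shown are just common examples):

```python
def samples_to_ms(samples, sample_rate):
    """How long a nudge of `samples` lasts, in milliseconds."""
    return samples / sample_rate * 1000.0

def ms_to_samples(ms, sample_rate):
    """How many samples make up a nudge of `ms` milliseconds."""
    return round(ms / 1000.0 * sample_rate)

print(round(samples_to_ms(100, 48000), 2))  # 2.08 -- a 100-sample nudge at 48 kHz
print(ms_to_samples(10, 44100))             # 441 samples per 10 ms at 44.1 kHz
```

So a 100-sample nudge is a very fine adjustment – around two milliseconds – which is why it works well for tightening a performance without making it sound quantized.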

Clip Gain

Sometimes, one section or an entire take may be noticeably louder than another. In order to create a smoother transition, it may be necessary to raise or lower the volume on individual clips, without changing the overall track volume. We can do this with a feature called Clip Gain. Clip Gain lets us alter the volume on a clip without writing automation onto our tracks. To start, right click on a clip, and select Clip Gain > Show Clip Gain Line. Use the Trimmer or Smart Tool to raise or lower the clip’s overall volume, or use the Grabber Tool to create lines & peaks (gradual changes) in the clip’s gain. Right-clicking again allows us to hide the clip gain line, or to erase our changes if something went wrong.
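
Under the hood, a clip-gain value in dB is just a multiplier applied to the clip’s samples, independent of the track fader. A sketch of the math (illustrative only, not Pro Tools code):

```python
def db_to_linear(db):
    """Convert a gain change in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def apply_clip_gain(samples, db):
    """Scale a clip's samples by a dB amount, leaving the track volume alone."""
    gain = db_to_linear(db)
    return [s * gain for s in samples]

print(round(db_to_linear(-6), 3))  # 0.501 -- -6 dB is roughly half amplitude
print(round(db_to_linear(6), 3))   # 1.995 -- +6 dB is roughly double
```

This is why small clip-gain moves of a few dB go a long way when matching the level of one take to another.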

Fades

We use Fades to generate smooth transitions from one clip to another. We Fade In at the start of every clip, Fade Out at the end of a clip, and Crossfade between two connecting clips. We can create fades using the Smart Tool: click and drag from the top corners of a clip to fade in & out, click and drag the bottom corners to create a crossfade. Alternatively, we can use shortcuts: mark the start/end of the fade with the selector tool, and press Option D to fade in, Option G to fade out. We can create customized fades by highlighting a selection and pressing ⌘F to bring up the Fades menu. This menu lets us alter the shape and intensity of each fade, among other things.
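
One common crossfade shape is “equal power,” which keeps the combined loudness roughly constant through the transition (for material that isn’t identical on both sides). A sketch of the curve math (illustrative Python; Pro Tools offers this and other shapes in its Fades menu):

```python
import math

def equal_power_curves(n):
    """Fade-out and fade-in gain curves for an n-point crossfade.
    At every point, fade_out**2 + fade_in**2 == 1, so the combined
    power of uncorrelated material stays constant."""
    outs, ins = [], []
    for i in range(n):
        t = i / (n - 1)  # 0.0 at the start of the fade, 1.0 at the end
        outs.append(math.cos(t * math.pi / 2))
        ins.append(math.sin(t * math.pi / 2))
    return outs, ins

def crossfade(clip_a, clip_b):
    """Mix the tail of clip_a into the head of clip_b (equal lengths)."""
    outs, ins = equal_power_curves(len(clip_a))
    return [a * o + b * g for a, o, b, g in zip(clip_a, outs, clip_b, ins)]

outs, ins = equal_power_curves(5)
print([round(o, 2) for o in outs])  # [1.0, 0.92, 0.71, 0.38, 0.0]
print([round(g, 2) for g in ins])   # [0.0, 0.38, 0.71, 0.92, 1.0]
```

Notice that both curves pass through about 0.71 (-3 dB) at the midpoint, rather than 0.5; that is what prevents the audible dip you get from a plain linear crossfade.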

MIDI

If there are any MIDI or Instrument tracks in the session, we can bring up the MIDI Editor window with Control = or by double clicking on any MIDI clip in the edit window. MIDI data can be edited like audio: highlighting, copying, pasting, and other commands function in the same way. We can also click and drag notes around to change pitch or timing. To toggle between the standard MIDI Editor and the Notation Display, click the button in the upper left corner that looks like a pair of musical 8th notes. The MIDI Editor gives us a lot of powerful functions through the Event Operations menu. Get familiar with the options in the Event Operations window, found under Event > Event Operations. Alternatively, we can highlight notes and right click to find the same options. These functions let us Quantize, Transpose, change duration, pitch, and intensity, and alter the MIDI performance in any way imaginable. For example, Quantization locks notes onto the musical grid. We even have the option of humanizing the performance: randomizing the values in a MIDI performance to create some variation, or including and excluding notes within a certain tolerance of our grid. In any case, highlight the notes you want to alter, bring up the Event Operations, adjust your settings, and press Apply. We will learn how to perform these functions with audio in the next lesson.
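
The idea behind Quantize with a randomize (“humanize”) amount can be sketched in a few lines of Python (purely illustrative – this is not how Pro Tools implements it, and 480 ticks per quarter note is an assumption):

```python
import random

def quantize(notes, grid, humanize=0.0, seed=None):
    """Snap note start times (in ticks) to the nearest grid line, then
    optionally push each one back off the grid by a random fraction of
    the grid size (the humanize amount)."""
    rng = random.Random(seed)
    quantized = []
    for start in notes:
        snapped = round(start / grid) * grid
        if humanize:
            snapped += rng.uniform(-humanize, humanize) * grid
        quantized.append(snapped)
    return quantized

sloppy = [5, 233, 489, 715]                  # slightly off an eighth-note grid
print(quantize(sloppy, 240))                 # [0, 240, 480, 720] -- machine-tight
print(quantize(sloppy, 240, humanize=0.05))  # each note within ±12 ticks of the grid
```

With humanize at zero, every note lands exactly on the grid, like the quantized drum machine mentioned earlier; a small humanize amount puts some of the natural variation back.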
