<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Tutorials Archives &#8212; AudioTechnology</title>
	<atom:link href="https://www.audiotechnology.com/category/tutorials/feed" rel="self" type="application/rss+xml" />
	<link>https://www.audiotechnology.com/category/tutorials</link>
	<description>Everything for the audio engineer, producer &#38; recording musician.</description>
	<lastBuildDate>Tue, 12 Dec 2023 04:29:43 +0000</lastBuildDate>
	<language>en-AU</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.3.2</generator>

<image>
	<url>https://www.audiotechnology.com/wp-content/uploads/2023/12/cropped-AT_Favicon_2024-1-32x32.jpg</url>
	<title>Tutorials Archives &#8212; AudioTechnology</title>
	<link>https://www.audiotechnology.com/category/tutorials</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Mixing With Headphones 4</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4#respond</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Wed, 29 Nov 2023 23:30:36 +0000</pubDate>
				<category><![CDATA[Issue 91]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 4]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=77249</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Mixing With Headphones 4</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></span></strong> we discussed useful tools for mixing with headphones, with a focus on identifying and replicating the problems that only occur when mixing with speakers. Why replicate those problems? Because compensating for them ultimately gives our speaker mixes more <i>resilience</i> (i.e. they translate better through different playback systems), and we want to build that same resilience into our headphone mixes.</span></p>
<p class="p3"><span class="s1">We also exposed the oft-repeated ‘just trust your ears’ advice for the flexing nonsense it is. Any question that triggers this unhelpful response is obviously coming from someone who cannot or does not know how to ‘trust their ears’, either through inexperience or lack of facilities. Brushing their question off with ‘just trust your ears’ is pro-level masturbation at its best. The ‘trust your ears’ advice is <i>especially</i> invalid when mixing with headphones. In that situation we cannot ‘trust our ears’ because, as we’ve established in previous instalments, headphones don’t give our ears all of the information needed to build resilience into our headphone mixes.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="585" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="01-pichi" fetchpriority="high" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi-600x521.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The previous instalment ended with a ‘Mixing With Headphones’ session template, set up and ready for mixing. In this instalment we’ll start putting that template into practice using the EQ tools it contains; in the fifth instalment we’ll look at dynamic processing (compression and limiting), and in the sixth and final instalment we’ll look at spatial processing (reverberation, delays, etc.). But first a word about ‘visceral impact’ as defined in the <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">second instalment</a></span></strong> of this series, and some basic mixing rules to get your mix started…</span></p>
<h4 class="p3"><strong><span class="s1">Visceral Elusion</span></strong></h4>
<p class="p3"><span class="s1">We know that with headphone monitoring/mixing there is no room acoustic, no interaural crosstalk, and no <i>visceral impact</i> to add an enhanced (and perhaps exaggerated) sense of excitement. In the <a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2"><span style="color: #333399;"><strong>second</strong></span></a> and <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">third</a></span></strong> instalments of this series we discussed ways of working around the frequency response and spatial issues, but the lack of visceral impact is a trap we need to be constantly aware of – especially when first transitioning from speaker mixing to headphone mixing.</span></p>
<p class="p3"><span class="s1">A good pair of headphones can effortlessly reproduce the accurate and extended low frequency response that acousticians and studio owners dream of achieving with big monitors installed in expertly designed rooms and costing vast sums of money. However, when mixing with headphones we have to remember that those low frequencies are being reproduced directly into our ears via acoustic pressure coupling – which means we do not experience them <i>viscerally</i> (i.e. we do not feel them with our internal organs aka our <em>viscera</em>) as we do when listening through big monitors. There is no <i>visceral impact</i>, which means we must be very careful about how much low frequency energy we put into our mixes. Increasing the low frequencies until we can <i>feel</i> them is not a good idea when mixing with headphones…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="25-pichi" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">This is where the musical reference track and the spectrum analyser in our ‘Mixing With Headphones’ template are particularly valuable. If the low frequencies heard in our mix are pumping 10 times harder than the low frequencies heard in our reference track and seen on the spectrum analyser, they’re probably not right. It might be tempting to succumb to premature congratulation and declare that our mix is better than the reference because it pumps harder, but it is <i>almost certainly wrong</i>. That’s the point of using a carefully chosen reference track that represents the sonic aesthetic we’re aiming for: if our mix strays too far from the reference in terms of balance, tonality and spatiality then it is probably wrong and we need to rein it in before it costs us more time and/or money in re-mixing and mastering.</span></p>
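<p class="p3"><span class="s1">The spectrum-analyser comparison above can be reduced to a number. The following is a rough numerical sketch (my illustration, not a tool from this series), assuming the mix and reference are available as equal-length mono sample arrays at the same sample rate; the signals, band limits and 60Hz test tones are hypothetical stand-ins.</span></p>

```python
import numpy as np

def band_energy_db(samples, sample_rate, lo_hz=20.0, hi_hz=120.0):
    """RMS energy (in dB) of one frequency band, via a simple FFT."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    band = (freqs >= lo_hz) & (freqs < hi_hz)
    rms = np.sqrt(np.mean(np.abs(spectrum[band]) ** 2))
    return 20.0 * np.log10(rms + 1e-12)  # epsilon avoids log10(0)

# Hypothetical signals: the reference's 60Hz component is at half the
# amplitude of the mix's, so the mix reads ~6dB hotter in the low band.
rate = 48000
t = np.arange(rate) / rate
mix = np.sin(2 * np.pi * 60 * t)              # low end pumping hard
reference = 0.5 * np.sin(2 * np.pi * 60 * t)  # reference's tamer low end
delta = band_energy_db(mix, rate) - band_energy_db(reference, rate)
print(f"mix low band is {delta:+.1f} dB relative to reference")
```

<p class="p3"><span class="s1">A positive delta of several dB in the low band is the numerical version of the mix "pumping harder" than the reference, and a cue to rein it in.</span></p>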
<p class="p3"><span class="s1">How do we avoid such problems when mixing with headphones? Read on…</span></p>
<h4 class="p3"><strong><span class="s1">BASIC RULES FOR HEADPHONE MIXING</span></strong></h4>
			<p class="p3"><span class="s1">The following is a methodical approach to mixing with headphones based on prioritising each sound’s role within the mix, introducing the individual sounds to the mix in order of priority, and routinely checking the effect that each newly introduced sound has on the evolving mix by using the tools described earlier: mono switch, goniometer, spectrum analyser with 6dB guide, and a small pair of desktop monitors. This methodical approach allows us to catch problems as they occur, before they’re built into our mix and are harder to undo. The intention is to create a mix that, tonally at least, should only require five minutes of mastering to be considered sonically <i>acceptable</i>.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Note that the methodical approach described here is not suitable for a mix that needs to be pulled together in a hurry; for example, mixing a live gig that has to start without a soundcheck or rehearsal, or dealing with advertising agency clients who don’t understand why a 30-second jingle takes more than 30 seconds to record and mix. In those situations ‘massage mixing’ is more appropriate, i.e. pushing all the faders up to about -3dB, getting the mix together roughly with faders and panning, focusing on keeping the most important sounds in the mix clearly audible, and refining the mix with each pass until the gig is finished or the session time runs out. In these situations, Michael Stavrou’s sculpting analogy [as explained in his book ‘Mixing With Your Mind’] is very applicable when he advises us to “start rough and work smooth”. Get the basic shape of the mix in place before smoothing out the little details, because nobody cares about the perfectly polished snare sound if they can’t hear the vocal.</span></p>
<h4 class="p3"><strong><span class="s1">Establishing The Foundation</span></strong></h4>
<p class="p3"><span class="s1">For the strategic and methodical approach described here, start by establishing the <em>foundation sounds</em> that the mix must be built around. For most forms of popular music those foundation sounds are the kick, the snare, the bass and the vocal. Each of the foundation sounds should have what Sherman Keene [author of ‘Practical Techniques for the Recording Engineer’] refers to as ‘equal authority’ in the mix – meaning each foundation sound should have the appropriate ‘impact’ on the listener when we switch between them one at a time, and they should work together as a cohesive musical whole rather than one sound dominating the others. A <em>solid stomp</em> on the kick pedal should hit us with the same impact as a <em>solid hit</em> on the snare, a <em>firm pluck</em> of the bass guitar, and a <em>full-chested line</em> from the vocalist. Those moments should <i>feel</i> like they hit us with the same impact, and they should <em>feel</em> like they belong together in the same performance. That <i>feeling</i> is harder to sense without the visceral impact of speakers, but with a little practice and cross-referencing against our reference track we can get there.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-9876" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-9876 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >This methodical approach allows us to catch problems as they occur…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-7809" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7809 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="917" height="645" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="02-pichi" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi.jpg 917w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-800x563.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-768x540.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-600x422.jpg 600w" sizes="(max-width: 917px) 100vw, 917px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Start with the most important sound in the foundation, and get that right to begin with. For most forms of popular music that will be the vocal, so start the mix with the vocal <i>only</i> and get it sounding as good as possible <i>on its own</i>, where it is not competing with any other sound sources. You may need to add one or two other tracks to the monitoring – one for timing and one for tuning – to provide a musical context for editing and autotuning, but don’t spend any time on those tracks yet. Focus on getting the vocal’s EQ and compression appropriate for the performance and the genre. Aim to create a vocal track that can carry the song on its own without <i>any</i> musical backing. Create different effects and processing for different parts of the vocal performance to suit different moods or moments within the music – for example, changing reverberation and delay times between verses and choruses, using delays or echoes to repeat catch lines or hooks, and similar. Use basic automation to <i>orchestrate</i> those effects, bringing them in and out of the mix when required as shown below. Note that placing the mutes <em>before</em> the effects simplifies timing the mute automation moves and also allows each effect (delay, reverb, etc.) to play itself out appropriately instead of ending abruptly halfway through – the classic rookie error.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="788" height="393" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="26-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi.jpg 788w, https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi-768x383.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi-600x299.jpg 600w" sizes="(max-width: 788px) 100vw, 788px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Once we have the vocal (or whatever is the most important sound in the mix) ready, check it in mono, check it on the goniometer and check it on the desktop monitors to make sure that the stereo effects and processors are behaving themselves. Cross reference it with the reference track in the Mixing With Headphones template to make sure it is sounding appropriate for the genre.</span></p>
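<p class="p3"><span class="s1">For readers who like to see the maths behind those meters, here is a minimal sketch (my illustration, not the author’s tooling) of two of the checks above: folding a stereo track to mono, and reading the L/R correlation that a goniometer visualises. The channels and test tones are hypothetical NumPy arrays of equal length.</span></p>

```python
import numpy as np

def mono_fold(left, right):
    """Sum to mono at -6dB per channel, as a basic mono switch would."""
    return 0.5 * (left + right)

def stereo_correlation(left, right):
    """~+1 = dual mono, 0 = uncorrelated, ~-1 = out of phase (mono trouble)."""
    denom = np.sqrt(np.sum(left**2) * np.sum(right**2))
    return float(np.sum(left * right) / denom) if denom else 0.0

t = np.linspace(0, 1, 48000, endpoint=False)
left = np.sin(2 * np.pi * 440 * t)
right_ok = left.copy()   # in phase: survives the mono fold
right_bad = -left        # out of phase: cancels to silence in mono

print(stereo_correlation(left, right_ok))           # ~ +1.0
print(stereo_correlation(left, right_bad))          # ~ -1.0
print(np.max(np.abs(mono_fold(left, right_bad))))   # ~ 0: vanished in mono
```

<p class="p3"><span class="s1">A stereo widener or effect that drags the correlation towards -1 is exactly the kind of misbehaviour that sounds impressive in headphones but disappears when a listener’s system sums to mono.</span></p>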
<p class="p3"><span class="s1">Introduce the other foundation sounds one at a time; in this example they will be the kick, the snare and the bass. Use EQ, compression and spatial effects (reverberation, delay, etc.) to get each of these sounds working together in the same tonal perspective and dynamic perspective as the vocal, and in the desired spatial perspective against each other and against the vocal. Toggle each plug-in and effect off and on repeatedly to make sure it is making a positive difference. If not, fix it or remove it because processors that are not making a positive difference are like vloggers at a car crash: they’re ultimately part of the problem. </span><span class="s1">Check each foundation sound and its processing in mono, check it on the goniometer and check it on the desktop monitors to make sure that its stereo effects and processors are behaving themselves.</span></p>
<p class="p3"><span class="s1">With all of the foundation sounds in place, we may need to tweak the levels of any spatial effects on the vocal that are perceived differently now that the other foundation sounds are present.</span></p>
<p class="p3"><span class="s1">Orchestrate the effects for the foundation sounds (as described above for the vocal) to help each sound stand out when it’s supposed to stand out and stand back when it’s supposed to stand back, thereby enhancing its ability to serve the music.</span></p>
<p class="p3"><span class="s1">Always consider the impact each newly-introduced sound is having on the clarity and intelligibility of the existing sounds in the mix and, particularly, its impact on the most important sound in the mix – which in this example is the vocal. We should not modify the vocal to compete with the other sounds; rather, we should modify the other sounds to fit around or alongside the vocal. After all, in this example the vocal is the most important sound in the mix <i>and</i> we had it sounding right on its own to begin with. If adding another sound to the mix affects the sound of the vocal (or whatever the most important sound is), we need to make changes to the level, tonality and spatiality of the added sound. That is why we prioritised the sounds to begin with: to make sure the most important sounds have the least tonal, dynamic and spatial compromises, and therefore have the most room to move and feature in the mix.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Note that the goal here is to fit the other sounds <i>around</i> and <i>alongside</i> the vocal, not simply <i>under</i> the vocal. Putting foundation sounds <i>under</i> the vocal is the first step towards creating a <i>karaoke mix</i> or a <i>layer cake mix;</i> more about those in the last instalment of this series…</span></p>
<p class="p3"><span class="s1">After introducing each new foundation sound to the mix be sure to check it in mono, check it on the goniometer (as described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></strong></span>) and check it on the desktop monitors to make sure it is not misbehaving in ways we cannot identify in headphones but will become apparent if heard through speakers.</span></p>
<p class="p3"><span class="s1">With the foundation mix done, save a copy that can be returned to in case things spiral out of control. Thank me later when/if that happens…</span></p>
<h4 class="p3"><strong><span class="s1">Beyond The Foundation</span></strong></h4>
<p class="p3"><span class="s1">Introduce the other sounds one at a time, weaving each of them <i>among</i> and <i>around</i> the foundation sounds while ensuring <i>all</i> sounds remain in the desired <i>tonal perspective</i>, <i>dynamic perspective</i> and <i>spatial perspective</i> with each other. The following text describes strategies for achieving <em>tonal perspective</em>; strategies for achieving <em>dynamic perspective</em> and <em>spatial perspective</em> are discussed in the forthcoming instalments.</span></p>
<p class="p3"><span class="s1">Every new sound introduced to the mix has the potential to change our perception of the existing sounds in the mix, so check for this and process accordingly without messing with the foundation sounds. Pay careful attention to how each new sound impacts the audibility of spatial effects (reverbs, delays, etc.) that have been applied to existing sounds, and adjust as necessary.</span></p>
<h4><strong>Loud Enough vs Clear Enough</strong></h4>
<p class="p3"><span class="s1">When balancing sounds together in the mix, always be aware of the difference between “not loud enough” and “not clear enough”. Novice engineers assume that if they cannot hear something properly it is <i>not loud enough</i> and will therefore reach for the fader. More experienced sound engineers know that often the sound described as “not loud enough” is in fact <em>loud enough</em> but is not <i>clear enough</i> due to some other issue with how it fits into the mix (e.g. its tonality, its dynamics or its spatial properties). And in some cases we realise that the sound deemed <i>not loud enough</i> is actually being buried or <em>masked</em> by another sound that is <i>too loud</i> in the mix and needs to be fixed.</span></p>
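<p class="p3"><span class="s1">The masking case can be made concrete with a hedged illustration (mine, not from the article): a quieter sound sharing a band with a louder one is buried there even though its overall level is fine. The single-tone ‘vocal’ and ‘pad’ below are hypothetical stand-ins, as are the band limits.</span></p>

```python
import numpy as np

def band_rms(samples, rate, lo, hi):
    """RMS magnitude of the FFT bins between lo and hi Hz."""
    spec = np.fft.rfft(samples)
    f = np.fft.rfftfreq(len(samples), 1.0 / rate)
    sel = (f >= lo) & (f < hi)
    return np.sqrt(np.mean(np.abs(spec[sel]) ** 2))

rate = 48000
t = np.arange(rate) / rate
vocal = 0.4 * np.sin(2 * np.pi * 1000 * t)  # the important sound
pad = 1.0 * np.sin(2 * np.pi * 1000 * t)    # competing sound, same band

# The pad out-powers the vocal in the shared 800-1200Hz band...
margin_db = 20 * np.log10(band_rms(pad, rate, 800, 1200) /
                          band_rms(vocal, rate, 800, 1200))
print(f"pad leads vocal by {margin_db:.1f} dB in the shared band")
# ...so the fix is cutting that band in the pad (the masker),
# not reaching for the vocal fader.
```

<p class="p3"><span class="s1">In other words: measure where the competing sound dominates, then fix the masker, not the masked.</span></p>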

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-7221" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7221 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >…fit the other sounds around and alongside the vocal, not simply under it…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2695" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2695 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Bring each sound up to the level it <i>feels</i> like it is supposed to be at from a performance point of view, regardless of how clear it is. We can determine if its level is right by soloing it against other sounds that are meant to have similar authority in the mix. If the sound is at the right performance level when solo’d against sounds of similar authority but is hard to hear properly in the mix, the problem is not the sound’s fader level but rather its <em>clarity</em> and/or <em>separation</em> within the mix.</span></p>
<p class="p3"><span class="s1">In most cases this means the sound’s <em>overall</em> level is correct but <em>some parts of its frequency spectrum</em> are either not loud enough or are too loud, and we need to use EQ to boost or cut <em>just those parts</em> of the sound’s frequency spectrum to make it clear enough and bring it into the correct <i>tonal perspective</i> for the mix. We’ll discuss this process later in this instalment.</span></p>
<p class="p3"><span class="s1">If it is hard to find the right level for a sound that gets too loud at some times and too soft at other times, it suggests that sound is probably not in the same <em>dynamic perspective</em> as the other sounds and will require careful compression to rein it in. (See ‘Dynamic Processing’ in the next instalment.) </span>Sometimes a sound is in the correct <em>tonal perspective</em> and <em>dynamic perspective</em> for the mix but gets easily lost behind the other sounds, or continually dominates them, due to having an incorrect <em>spatial perspective</em> (e.g. too much reverb). We use spatial processing to create, increase or decrease the sound<span class="s1">’</span>s spatial properties and thereby assist with separation. (See <span class="s1">‘</span>Spatial Processing<span class="s1">’</span> in the sixth instalment of this series.)</p>
<p class="p3"><span class="s1">It’s also possible that the problem is unsolvable at the mixing level due to ridiculous compositional ideas that have since become audio engineering problems. </span><span class="s1">For the remainder of this series let’s remove that variable by assuming we’re working with professional composers who know how to build musical clarity and separation into their compositions.</span></p>
<p class="p3"><span class="s1">To solve these ‘loud enough but not clear enough’ problems we use tonal processing to adjust the balance of individual frequencies within a sound, dynamic processing to solve problems with sounds that alternate between too loud and too soft, and spatial effects to provide separation from competing sounds. Let’s start with tonal processing, or, as it is generally referred to, ‘EQ’ and ‘filtering’…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990674300"><div class="wpb_wrapper">
						</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">EQUALISATION &amp; FILTERING</span></strong></h4>
<p class="p3"><span class="s1">The use of equalisation and filtering serves three purposes in a mix: <em>correcting</em> sounds, <em>enhancing</em> sounds, and <em>integrating</em> sounds. In our ‘Mixing With Headphones’ template described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></strong></span> we added three EQ plug-ins to each channel strip. Here’s what they’re for…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="487" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="04-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">Corrective EQ</span></strong></h4>
<p class="p3"><span class="s1">This is used to fix fundamental problems in individual sounds and clean them up before putting them in the mix, which means we should choose a clean EQ plug-in that is not designed to impart any tonality or character of its own into the sound. The emphasis here is to use something <i>capable</i> rather than <i>euphonic</i>. A six-band fully parametric EQ with at least ±12dB of boost/cut, along with high and low pass filtering and the option to switch the lowest and highest bands to shelving, is a good choice.</span></p>
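<p class="p3"><span class="s1">To make the ‘fully parametric band’ idea concrete: each band of an EQ like this is typically a ‘peaking’ biquad filter under the hood. Here is a minimal sketch of one band using the widely published Audio EQ Cookbook (RBJ) formulas (not any particular plug-in’s implementation), with the three parameters of a parametric band: centre frequency, Q (bandwidth) and boost/cut in dB.</span></p>

```python
import math

def peaking_biquad(fs, f0, q, gain_db):
    """Coefficients for one parametric EQ band (RBJ 'Audio EQ Cookbook' peaking filter).
    fs: sample rate (Hz); f0: centre frequency (Hz); q: bandwidth; gain_db: boost (+) or cut (-)."""
    a = 10 ** (gain_db / 40)              # amplitude factor; peak gain is a**2 = 10**(gain_db/20)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # normalise so the leading feedback coefficient becomes 1
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def process(samples, coeffs):
    """Apply the biquad difference equation:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out
```

<p class="p3"><span class="s1">A six-band parametric EQ is simply six of these bands in series, and ±12dB of boost/cut means gain_db ranges from -12 to +12. For example, a narrow 6dB dip at 850Hz at a 48kHz sample rate would be peaking_biquad(48000, 850, 4.0, -6.0).</span></p>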

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Corrective EQ is used to make an excessively dull sound brighter or to make an excessively bright sound duller, to fix sounds that have too much or too little midrange, and to fix sounds that have too much or too little low frequency energy. It is also used to remove or reduce the audibility of any unwanted elements within the sound such as low frequency rumble (high pass filter aka low cut filter, low frequency shelving cuts), hiss and noise (low pass filter aka high cut filter, high frequency shelving cuts), and unwanted ringing and resonances (notch filters, dips).</span></p>
<p class="p3"><span class="s1">The goal of corrective EQ is to create <em>objectively good</em> sounds. What is an <em>objectively good</em> sound? It’s a sound that does not contain any <em>objectively bad</em> sounds, of course. </span><span class="s1">It is hard to define what sounds are objectively ‘good’, but it’s easy to define what sounds are objectively ‘bad’.</span></p>
<p class="p3"><span class="s1">Objectively ‘bad’ sounds are resonances and rings, low frequency booms and rumbles, unwanted performance noises and sounds, hiss and noise, and similar unmusical and/or distracting elements that don’t belong in the sound <i>as we intend to use it</i>.</span></p>
<p>One of the most common applications of corrective EQ is removing unwanted low frequency energy. Most sounds contain unwanted low frequency energy <em>below</em> the fundamental frequency of the lowest musical note in the performance. It may not seem like much on any individual track but the unwanted low frequency content on each track accumulates throughout the mix, with two results. Firstly, it reduces the impact and clarity of kick drums, bass lines, low frequency drones and other sounds that are legitimately occupying that part of the frequency spectrum. Secondly, most monitoring systems are not capable of reproducing this unwanted low frequency information reliably (particularly below 70Hz), and forcing them to reproduce it affects their ability to reproduce other frequencies that are within their range – which thereby affects their ability to reproduce the mix. It<span class="s1">’</span>s like forcing one horse to pull a cart that requires two horses.</p>
<p>Strategically removing unwanted low frequency information from individual sounds brings clarity and definition to our mixes while also allowing a broader range of monitoring systems to reproduce our mixes properly. With these benefits in mind, it is always worthwhile starting any EQ process by viewing the sound on the spectrum analyser (built into the Mixing With Headphones template) and looking for activity in the very low frequencies that has no musical value. This will be low frequency activity that remains visible, whether audible or not, and can be seen bobbing up and down at the far left side of the spectrum analyser regardless of what musical parts are being played. Removing or reducing this unwanted low frequency information with a carefully-tuned high pass filter or low frequency shelving EQ (in either case pay attention to the cut-off frequency and the slope) will clean up the individual sounds <em>and</em> the mix considerably.</p>
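<p class="p3"><span class="s1">What that high pass filter is doing can be sketched in a few lines of code. This is a deliberately minimal first-order design with a gentle 6dB/octave slope (real corrective EQs usually offer steeper 12&#8211;24dB/octave slopes, and this is not how any particular plug-in is implemented), but the cutoff-frequency behaviour is the same idea: steady low frequency content is blocked while faster changes pass through.</span></p>

```python
import math

def high_pass(samples, fs, cutoff_hz):
    """Minimal first-order high pass filter (6dB/octave slope).
    Content well below cutoff_hz (e.g. sub-fundamental rumble) is heavily
    attenuated; content well above it passes essentially unchanged."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # analogue RC time constant for this cutoff
    dt = 1.0 / fs
    a = rc / (rc + dt)                     # closer to 1 means a lower cutoff frequency
    out, y, x_prev = [], 0.0, 0.0
    for x in samples:
        y = a * (y + x - x_prev)           # passes changes, blocks steady (DC/low) content
        x_prev = x
        out.append(y)
    return out
```

<p class="p3"><span class="s1">With an illustrative 80Hz cutoff at 48kHz, a 30Hz rumble component is cut to roughly a third of its level while a 1kHz component is barely touched; the carefully-tuned part is choosing the cutoff so the fundamental of the lowest musical note survives.</span></p>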
<p class="p3"><span class="s1">We use corrective EQ to remove or significantly reduce the audibility of the objectively ‘bad’ parts of the sound, thereby leaving us with only the objectively ‘good’ parts of the sound for <i>enhancing</i> and <i>integrating</i> into our mix. As always, after applying corrective EQ we should check the results against the original sound to make sure we have made an <em>improvement</em> and not just a difference.</span></p>
<h4 class="p3"><strong><span class="s1">Enhancing EQ</span></strong></h4>
<p class="p3"><span class="s1">This is used to create <i>subjectively</i> ‘good’ sounds from the <i>objectively</i> ‘good’ sounds we made with corrective EQ as described above. What are <em>subjectively</em> ‘good’ sounds? They are sounds that contain no <em>objectively</em> bad sounds (which we removed with corrective EQ), <em>and</em> are good to listen to <em>while also</em> bringing musical value or feeling to the mix. </span><span class="s1">We can do whatever we like with the <em>objectively</em> ‘good’ sounds to turn them into <em>subjectively</em> ‘good’ sounds, as long as we don’t inadvertently re-introduce the <em>objectively</em> ‘bad’ sounds we removed with the corrective EQ.</span></p>
<p class="p3"><span class="s1">For this enhancing purpose we can use an EQ plug-in with character to introduce some euphonics into the sound. This could be a software model of a vintage tube EQ that imparts a warm or musical tonality, and/or something with unique tone shaping curves like the early Pultecs, and/or gentle Baxandall curves for high and low frequency shelving. Unlike the <em>corrective EQ</em>, the <em>enhancing EQ</em> doesn’</span><span class="s1">t need corrective capabilities (a lot of vintage EQs did not have comprehensive features), and w</span><span class="s1">e can make up for any shortcomings here by using the <em>corrective EQ</em> and the <em>integrating EQ.</em></span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The <em>enhancing EQ</em> is where the creative aspect of mixing begins: crafting a collection of sounds that might be individually desirable but, more importantly, collectively help to serve the meaning, message or feeling of the music. One of the goals here is to bring out the musical character of each individual sound while giving it the desired amount of clarity so we can hear ‘into’ the sound and appreciate all of its harmonics and overtones, along with the expression and performance noises that help to bring meaning to the mood of the music. In other words, to <em>enhance</em> its musicality.</span></p>
<p class="p3"><span class="s1">When applying enhancing EQ try to use frequencies that are musically and/or harmonically related to the music itself. Most Western music is based around the A440 tuning reference of 440Hz, so that forms a good point of reference. </span><span class="s1">The table below shows the frequencies of the notes used for Western music based on the tuning reference of A440, from C<span class="s3"><sub>0</sub></span> to B<span class="s3"><sub>8</sub></span>. The decimal fraction part of each frequency has been greyed out for clarity and also because we don’t need <em>that much</em> precision when tuning an enhancing EQ. Integer values are accurate enough&#8230;</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="969" height="634" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi.jpg 969w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-800x523.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-768x502.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-600x393.jpg 600w" sizes="(max-width: 969px) 100vw, 969px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p><span class="s1">If we jot down the frequencies of the notes that exist within the scale(s) of the piece of music we’re mixing, we can lean into those frequencies when fine-tuning our enhancing EQ. For example, let’s say we had an enhancing EQ that provided a small boost at 850Hz. It’s using a frequency that does not exist in any Western musical scale that is based on the A440 tuning reference; 850Hz sits in between G</span><span class="s3"><sup>#</sup><sub>5</sub></span><span class="s1"> (830.61Hz) and A</span><span class="s3"><sub>5</sub></span><span class="s1"> (880Hz), and is therefore not a particularly musical choice. Nudging that enhancing boost <em>down</em> towards 830Hz (G<span class="s3"><sup>#</sup><sub>5</sub></span>) or <em>up</em> towards 880Hz (A</span><span class="s3"><sub>5</sub></span><span class="s1">) </span><span class="s1">will <i>probably</i> sound more musical and is, therefore, definitely worth trying.</span></p>
<p class="p3"><span class="s1">We should always nudge our enhancing EQ boosts towards frequencies that <em>do</em> exist within the scale(s) of the music we’re mixing – we wouldn’t let a musician play out of tune, so why let an enhancing EQ boost be out of tune? Likewise, we should always </span><span class="s1">nudge our enhancing EQ dips towards frequencies that <em>don’t</em> exist within the scale(s) of the music we’re mixing – if we’re going to dip some frequencies out of a sound, try to focus on frequencies that aren’t contributing any musical value. Less <em>non-musicality</em> means more <em>musicality</em>, right?</span></p>
<p>It’s also worth noting that when a sound responds particularly well to a boost or a cut at a certain frequency (let’s call that frequency <em>f</em>), it will probably also respond well to a boost or a cut an octave higher (<em>f</em> x 2) and/or an octave lower (<em>f</em> / 2). More about that shortly…</p>
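<p class="p3"><span class="s1">The arithmetic behind that table is simple equal temperament from the A440 reference: each semitone step multiplies the frequency by the twelfth root of two, and an octave (12 semitones) doubles or halves it. As a sketch (the helper names here are illustrative, not from any plug-in), this computes note frequencies and snaps an arbitrary EQ frequency to the nearest in-scale note:</span></p>

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_frequency(name, octave):
    """Equal-tempered frequency of a note, A440 reference (A4 = 440Hz).
    Each semitone multiplies the frequency by 2**(1/12); an octave is x2 or /2."""
    midi = NOTE_NAMES.index(name) + 12 * (octave + 1)   # MIDI-style note number; A4 = 69
    return 440.0 * 2 ** ((midi - 69) / 12)

def nearest_note(freq_hz):
    """Snap an arbitrary EQ frequency to the nearest note of the A440 system.
    Returns (name, octave, exact_frequency)."""
    semitones = round(12 * math.log2(freq_hz / 440.0))  # signed semitones from A4
    midi = 69 + semitones
    return NOTE_NAMES[midi % 12], midi // 12 - 1, 440.0 * 2 ** (semitones / 12)
```

<p class="p3"><span class="s1">Fed the 850Hz example from above, nearest_note(850.0) lands on G#5 at roughly 830.6Hz, confirming that nudging the boost downwards is the slightly nearer of the two musical options.</span></p>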
<h4 class="p3"><strong><span class="s1">Integrating EQ</span></strong></h4>
<p>While <em>corrective EQ</em> and <em>enhancing EQ</em> are used for cleaning up and creating sounds, <em>integrating EQ</em> is used for combining sounds together, i.e. integrating them into a mix.</p>
<p class="p3"><span class="s1">Creating good musical sounds with <em>enhancing EQ</em> is fun and satisfying, and might even be inspiring, but we must be constantly aware of how those individual sounds will interact when combined together in the mix. It’s common to have, for example, a piano and a strummed acoustic guitar that sound great individually but create <em>sonic mud</em> when mixed together because there is <em>too much overlapping harmonic similarity</em> between them. They are both using vibrating strings to create their sounds and therefore both have the same harmonic series, which makes it harder for the ear/brain system to differentiate between them if they’re </span><span class="s1">playing similar notes and chords.</span></p>
<p class="p3"><span class="s1">Another form of <em>sonic mud</em> occurs when composers create music using sounds from different sample libraries and ‘fader mix’ them together. Because each individual sample sounds great in isolation, the assumption is that simply fader mixing them together will sound even greater. That is like pouring a dozen of our favourite colour paints into a bucket and giving it a stir on the assumption it will create our ‘ultimate’ favourite colour. What do we get? A swirling grey mess, <i>every single time</i>, and it’s the same when mixing a collection of individually enhanced sounds.</span></p>
<p class="p3"><span class="s1">That’s what <i>integrating EQ</i> is for: helping us to integrate – or ‘fit’ – the individually enhanced EQ sounds together into a mix or soundscape, ensuring they all work together while remaining clear and audible. As with our choice of <em>corrective EQ</em>, the <em>integrating EQ</em> should be a clean plug-in that does not impart any tonality or character of its own. A six-band fully parametric EQ with high and low pass filtering and the option to switch the lowest and highest bands to shelving is a good choice here.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">We use <em>integrating EQ</em> to <i>maintain</i> clarity and tonal separation within a mix. We listen to how our <em>enhanced EQ</em> sounds affect each other when introduced to the mix, and we make appropriate tweaks with <em>integrating EQ</em> to fix any conflicts and restore the preferred elements of each sound. How?</span></p>
<h4><strong>INTEGRATING EQ EXAMPLE</strong></h4>
<p class="p3"><span class="s1">Let’s go back to the earlier example of the piano and the strummed acoustic guitar, where each instrument sounded good on its own but both instruments lost clarity and tonal separation when mixed together. </span><span class="s1">Imagine the piano and the acoustic guitar have been loaded into our Mixing With Headphones template. </span><span class="s1">Using the individual channel solo buttons along with the spectrum analyser on the mix bus allows us to examine the frequency spectrums of the piano and the acoustic guitar individually. Conflicts between their frequency spectrums can be identified by temporarily adjusting both sounds to the same perceived loudness,</span><span class="s1"> then alternating between soloing each sound individually and soloing both simultaneously.</span></p>
<p class="p3"><span class="s1">For this example let’s say that, due to our clever use of <em>corrective EQ</em> and <em>enhancing EQ</em>, both sounds are full-bodied and rich but therein lies the first problem: </span><span class="s1">they’re both competing for our attention in the midrange. </span><span class="s1">That means we have to apply <em>integrating EQ</em> with the goal of making them <em>work together</em> in the midrange.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="09-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">We start by prioritising each competing sound based on its musical and/or textural role in the mix. </span><span class="s1">We want to use minimal <em>integrating EQ</em> on foundation sounds and featured sounds that play musically significant parts, preserving the musicality and tonality that we’ve already highlighted in those sounds with the <em>enhancing EQ</em>. Textural sounds and background sounds are more forgiving of tonal changes so it is smarter to apply any significant <em>integrating EQ</em> changes to those sounds. </span><span class="s1">Let’s examine the roles of the acoustic guitar and the piano in this particular piece of music to prioritise them accordingly.</span></p>
<p class="p3"><span class="s1">Although the rhythm and playing of the acoustic guitar are helping the drums and bass guitar to propel the music forward, it is never actually featured in the mix of this piece of music. Therefore its primary purpose is <em>textural</em>; it provides a gap-filling layer of musical texture in the background. </span><span class="s1">We can use a lot of <em>integrating EQ</em> here if we have to, as long as it doesn’t interfere with the acoustic guitar’s <em>textural</em> role.</span></p>
<p class="p3"><span class="s1">What about the piano? In this piece of music, the left hand is playing a <em>textural</em> role with gentle low chords that complement the bass guitar and thicken the acoustic guitar. The right hand, however, is playing a <em>musically significant role</em> by adding sharply punctuating chords along with short melodies that fill the spaces between vocal lines, and those melodies often conflict with the acoustic guitar. </span><span class="s1">These observations tell us that we can manipulate the piano’s lower frequencies (left hand, textural) as required to make it work in ensemble with the bass guitar and the guitar, but we need to be very conservative with any EQ applied to the midrange (right hand, musically significant) to avoid altering the tonality of the punctuating chords and short melodies.</span></p>
<p><span class="s1">Having established that the acoustic guitar’s tonality has a lower priority than the piano’s tonality in this piece of music, the acoustic guitar is the appropriate place to start applying integrating EQ.</span></p>
<p><span class="s1">Let’s make a clarifying dip in the acoustic guitar’s spectrum, right where the two sounds share overlapping peaks in their spectrums – which is almost certainly the cause of the problem.</span></p>
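<p>For readers curious about the maths behind such a dip: it is typically a parametric ‘bell’ (peaking) band, and the sketch below computes its filter coefficients using the widely used RBJ Audio EQ Cookbook formulas (not any particular plugin). The 880Hz centre, -3dB depth and Q of 1.5 are illustrative values only; in practice they are found by ear.</p>

```python
import cmath
import math

def peaking_eq(f0, gain_db, q, fs=48000):
    """Biquad coefficients for a peaking ('bell') EQ band, following the
    widely used RBJ Audio EQ Cookbook formulas.

    A negative gain_db creates a dip, a positive one a boost.
    Returns (b, a) normalised so that a[0] == 1.
    """
    big_a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * big_a, -2 * math.cos(w0), 1 - alpha * big_a]
    a = [1 + alpha / big_a, -2 * math.cos(w0), 1 - alpha / big_a]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def mag_db(b, a, f, fs=48000):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# A clarifying dip: -3dB at 880Hz with a Q of 1.5
b, a = peaking_eq(880, -3.0, 1.5)
```

<p>Checking the magnitude response at the centre frequency confirms the dip lands where intended; its final depth and Q are still settled by listening, not by the numbers.</p>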

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">This upper midrange dip will change the tonality of the acoustic guitar, of course. Fortunately, in this case it will make it more subdued and appropriate for the background textural role it plays in the mix. More importantly, however, it will contribute to the overall clarity of the mix by creating room for the piano without altering the piano sound itself.</span></p>
<p><span class="s1">To fine-tune the depth of the dip (i.e. how many dB to cut) and its bandwidth (Q), we should switch the applied EQ in and out while soloing the guitar and piano separately and together, checking the results on the spectrum analyser as we go. We want to dip just enough out of the acoustic guitar to leave room for the right hand parts of the piano to be heard clearly, but no more.</span></p>
<p><span class="s1">We might find that the dip, at its required depth and bandwidth, has improved the clarity of the piano within the mix, but the acoustic guitar has become less interesting. </span><span class="s1">We can musically compensate for this change in the acoustic guitar’s tonality by adding small boosts an octave above and below the dipped frequency in the acoustic guitar’s spectrum as shown below.</span></p>
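<p>The arithmetic of those compensating boosts is simple enough to sketch in a few lines of Python (values hypothetical): each boost sits one octave below or above the dipped frequency, and each starts at half the dipped amount before being refined by ear.</p>

```python
def compensating_boosts(dip_freq_hz, dip_db):
    """Starting points for compensating EQ around an integrating dip.

    Places a small boost one octave below and one octave above the
    dipped frequency; each starts at half the dipped amount so the two
    together roughly hand back the energy the dip removed. Final values
    are always set by ear; these are only first-guess settings.
    """
    boost_db = abs(dip_db) / 2
    return [(dip_freq_hz / 2, boost_db), (dip_freq_hz * 2.0, boost_db)]

# A -2dB dip at 880Hz suggests +1dB boosts at 440Hz and 1.76kHz:
print(compensating_boosts(880, -2.0))  # [(440.0, 1.0), (1760.0, 1.0)]
```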

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Again, switching the EQ on and off while soloing the instruments individually and together while watching the spectrum analyser will help us get the settings just right. [The <em>compensating EQ</em> technique shown above can be applied whenever a sound has been given a necessary integrating EQ peak or dip that subtracts some of that sound<span class="s1">’</span>s musicality: add small boosts an octave either side of any significant dips, and small cuts an octave either side of any significant boosts.]</p>
<p>The process of applying integrating EQ and compensating EQ to the acoustic guitar might reveal other areas worth working on. For example, let’s say this ‘soloing with spectrum analysis’ process revealed some upper harmonics in the piano sound that were worth bringing out. Applying a small <em>integrating EQ</em> dip in the acoustic guitar’s spectrum will create room for those upper harmonics of the piano to shine through, and applying a small <em>compensating EQ</em> boost in the acoustic guitar<span class="s1">’</span>s spectrum an octave higher will do the same for the acoustic guitar’s upper harmonics.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">All of the integrating and compensating EQ changes detailed above have improved the clarity of, and separation between, the acoustic guitar and the <em>right hand parts</em> of the piano by focusing on their respective textural and musical roles. We’ve already established that the <em>left hand parts</em> of the piano play a textural role, as does the acoustic guitar, so let’s see how they sit alongside one of the mix’s foundation sounds that shares some of the same spaces within the frequency spectrum: the bass guitar.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="27-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Adding the bass to the piano and acoustic guitar, and switching/solo’ing between them, shows that there is some worthwhile upper harmonic detail in the bass sound that fits nicely into a natural dip in the piano’s spectrum but is being masked by one of the compensating EQ boosts we added previously to the acoustic guitar. Because the bass is a foundation sound that we’ve already got sounding right within the foundation mix, we want to avoid altering it if possible; it is one of the internal references for our mix. Rather than boosting the upper harmonics of the bass, we’ll reduce the compensating boost added previously to the acoustic guitar just enough to allow those upper harmonics of the bass to be audible again – as shown below.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This process has also revealed that between the bass, the left hand of the piano and the acoustic guitar there is more low frequency <em>bloom</em> in the mix than we’d like. It’s not necessarily boomy or wrong, but it is bordering on sounding <em>bloated</em> and <em>muddy</em> in the low frequencies – especially when compared to the low frequencies in the reference track we added to the Mixing With Headphones template before starting this mix. We don’t want to change the low frequencies in the bass because we got them right when establishing the foundation mix, and we know that any alterations to the foundation mix are likely to result in a ripple of changes throughout the mix. In this example, reducing the low frequencies of the bass to minimise the risk of the mix sounding ‘bloated’ will make the kick sound as if it has too much low frequency energy <em>or</em> is generally too loud. This will lead us to make changes to the kick, and the domino effect will topple through the mix from there.</p>
<p>Because we’ve been working with the acoustic guitar so far, we’ll start there by adding a subtle low frequency shelf or a gentle high pass filter to pull down its low frequencies just enough to clarify what is happening between the bass guitar and the piano’s left hand parts.</p>
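<p>For a sense of how gentle such a filter is, the sketch below computes the response of a simple 6dB/octave high pass (a one-pole analogue prototype). The 120Hz corner is a hypothetical value; in practice the corner frequency is swept by ear until the bloom clears.</p>

```python
import math

def first_order_highpass_db(f, cutoff_hz):
    """Magnitude in dB of a gentle 6dB/octave high pass filter
    (one-pole analogue prototype) at frequency f."""
    ratio = f / cutoff_hz
    return 20 * math.log10(ratio / math.sqrt(1 + ratio * ratio))

# Rolling the acoustic guitar off below a hypothetical 120Hz corner:
# 60Hz -> about -7.0dB, 120Hz -> about -3.0dB, 240Hz -> about -1.0dB
```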

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="14-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>From here we can see that the left hand parts of the piano, which are playing a textural role, are the remaining cause of the excessive bloom. We can wind them back with some low frequency shelving or a gentle high pass filter on the piano’s <em>integrating EQ</em>.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="15-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>While we’re working on the piano, let’s bring out those upper harmonics we revealed earlier by adding a small boost in the piano in the same area we previously made a small dip in the acoustic guitar.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="16-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>All of the <i>integrating EQ</i> and <em>compensating EQ</em> changes detailed above have resulted in the bass, piano and acoustic guitar sitting together clearly and musically in the mix, and yet most of that improvement was achieved by making changes to the lowest priority sound of the three: the acoustic guitar. Its entirely textural role in the mix made it the most sensible choice to make <em>integrating EQ</em> changes to. Two very subtle changes were made to the piano to improve its placement in the mix, and no changes were made to the bass guitar – which is in keeping with our goal of using the foundation sounds as a point of reference to build the mix around.</p>
<p class="p3"><span class="s1">We should not get too hung up on soloing the acoustic guitar and worrying about how its sound has been changed by the EQ when heard in isolation. The reality in this situation is that i</span><span class="s1">t doesn’t matter what the acoustic guitar sounds like <em>in isolation</em> (i.e. when solo’d) because </span>the listener <em>is never going to hear it in isolation</em>, and that’s because it is never featured in the music. It remains a background textural sound. Therefore the only thing that really matters beyond its musicality is how it affects other sounds in the mix. In this example, the applied integrating EQ has allowed the guitar to sit nicely <em>behind</em> the piano rather than <em>under</em> it. As we will see in the following illustrations, the acoustic guitar’s spectrum (and therefore its tonality) has been altered to allow it to fulfil its spectral role in the mix: filling in the spaces between the other instruments.</p>
<p>The illustration below adds the kick drum’s spectrum (shown in orange) to the image so we can see how it works with the bass and the piano.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="17-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>As shown above, we have used the strategic application of integrating EQ to alter the perceived volume of each sound in the mix by a combination of boosting important parts of a given sound’s spectrum and/or cutting parts out of competing sounds’ spectrums, rather than making global ‘brute force’ fader changes. Each of these sounds was already <em>loud enough</em> in the mix; it just wasn’t <em>clear enough</em>, and we’ve used integrating EQ to clarify it.</p>
<p>The illustration below shows the spectrums before any integrating EQ was applied. There is too much overlap in significant parts of each sound’s spectrum, resulting in a poor mix that is lacking in clarity and tonal separation.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="18-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4><strong>Complementary EQ, Shared EQ &amp; Opposite EQ</strong></h4>
<p>In the <em>integrating EQ</em> example given above we introduced the concept of <em>compensating EQ</em> (also called <em>complementary EQ</em>), where an EQ cut was accompanied by complementary boosts applied to the same sound, typically an octave (or other harmonically valid interval) above and/or below the centre frequency of the cut. If the cut was, say, -2dB at 880Hz, the complementary boosts would be placed at 440Hz (an octave below 880Hz) and 1.76kHz (an octave above 880Hz).</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="295" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="19-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi-600x247.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Each boost would have the same bandwidth (Q) as the cut, and each boost would start at +1dB with the intention of collectively returning the 2dB that was lost to the cut (conceptually aiming to maintain the same overall energy in the signal but redistributing it within the spectrum). However, the amount of boost and the choice of frequencies will ultimately be decided by ear, because nobody cares about the theory if the end results don’t sound good.</p>
<p>Sometimes an <em>integrating EQ</em> dip has an adverse effect on the sound it is applied to, and in those situations we can resort to <em>shared EQ</em>. The <em>integrating EQ</em> example shown above started by placing a dip in the acoustic guitar’s frequency spectrum to clarify the piano sound. Let’s say the dip needed to be -3dB at 880Hz with a bandwidth (Q) of 1.5 in order to do its job, but the change to the acoustic guitar’s tonality was more than we were willing to accept. In this situation we can copy that same EQ on to the piano, and <em>share</em> the 3dB difference between the two instruments. For example, perhaps a dip of -2dB is acceptable on the acoustic guitar, and we can make up the difference with a +1dB boost in the same part of the spectrum on the piano without adversely affecting its tonality. Now we have created the same 3dB difference at 880Hz required between the piano and the acoustic guitar, but have changed it from one large EQ change on one instrument to two smaller EQ changes shared between two instruments.</p>
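<p>The arithmetic of shared EQ is simple enough to sketch (values hypothetical, final settings by ear): cap the cut on the masking sound at whatever it can tolerate, and make up the remainder as a boost on the masked sound.</p>

```python
def shared_eq(required_db, max_cut_db):
    """Split a required relative dip between a cut on the masking sound
    and a boost on the masked sound (shared EQ).

    required_db: total separation needed at the problem frequency (positive dB).
    max_cut_db: the deepest cut the masking sound can tolerate (positive dB).
    Returns (cut_db, boost_db); the cut is applied as a negative change,
    the boost as a positive one, at the same frequency and Q.
    """
    cut = min(required_db, max_cut_db)
    boost = required_db - cut
    return cut, boost

# A 3dB difference needed at 880Hz, but only -2dB is acceptable on the guitar:
cut, boost = shared_eq(3.0, 2.0)  # cut the guitar 2dB, boost the piano 1dB
```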

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="295" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="20-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi-600x247.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When using <em>integrating EQ</em> with sounds that are competing with each other but not in any obvious or significant manner, </span><span class="s1">it’s worth taking advantage of differences between each sound’s frequency spectrum by using <em>opposite EQ</em>. For the <em>integrating EQ</em> example used earlier we saw that the piano sound had a peak in the upper range of its spectrum where the acoustic guitar did not, and we put a subtle dip of matching bandwidth (Q) at that place in the acoustic guitar’s spectrum to increase the separation between the two sounds. This use of <em>integrating EQ</em> might have no significant effect on the acoustic guitar’s sound (perhaps the acoustic guitar doesn’t contain much musical value in that area) but creating the dip will further separate the two sounds while bringing the piano forward in a way that sounds better than boosting the peak on the piano’s spectrum – which might sound unnatural or perhaps even make certain notes ‘ping’ out (for which the piano tuner ultimately takes the blame).</span></p>
<h4 class="p3"><strong><span class="s1">TONAL PERSPECTIVE</span></strong></h4>
<p class="p3"><span class="s1">After applying <em>corrective EQ</em>, <em>enhancing EQ</em> and <em>integrating EQ</em>, it is always important to check the <i>tonal perspective</i> of each sound. Does the tonality of each individual instrument sound as if it belongs in the same mix as the other instruments?</span></p>
<p class="p3"><span class="s1">It is easy to lose track of tonal perspective and end up with one or two sounds that are very good when heard in isolation while also maintaining clarity and separation in the mix, but <em>they don’t sound as if they belong in the same mix</em>, e.g. they’re considerably brighter or duller than the other sounds. They are not in the mix’s <em>tonal perspective</em>.</span></p>
<p class="p3"><span class="s1">This is the same problem that happens when we combine sounds from different sample libraries, as mentioned earlier. Each sample library brand has its own sound engineers, producers and mastering engineers, and therefore evolves its own ‘sound’ in the same way that some sound engineers, producers and boutique record labels evolve their own ‘sound’. The samples might <i>all</i> sound good individually, but there’s no guarantee (or likelihood) that samples from different brands will work together without some kind of <em>integrating EQ</em>. It’s like sending all of the drum tracks to one engineer to mix and master, all of the guitar tracks to another, and all of the vocal tracks to a third – each engineer might do a great job on their parts, but there is no guarantee <i>or</i> likelihood that the individually mixed and/or mastered stems will automagically work together when combined in a mix. The individual sounds need to be tailored to fit together using <em>integrating EQ</em>, not simply layered on top of each other with fader levels.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="21-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When EQing individual sounds within a mix we must ensure they all sound as if they belong in the same mix, i.e. they have the same <i>tonal perspective</i>. If one sound proves to be overly bright or overly dull within the mix it should be fixed in the mix, because fixing it later is going to take more time in re-mixing and/or more cost in mastering.</span></p>
<p class="p3"><span class="s1">After introducing any significant EQ changes to a sound – whether they’re <em>corrective</em>, <em>enhancing</em> or <em>integrating</em> – always solo the sound and switch the EQ in and out while checking on the spectrum analyser and the 6dB guide to make sure the sound’s tonality is behaving itself and not steering the mix towards being too bright or too dull.</span></p>
<p>By following the strategic step-by-step process demonstrated in this instalment – introducing one instrument at a time to our mix and checking it against the tools built into the Mixing With Headphones template – we can make high quality mixes in headphones that, <em>tonally</em> at least, should need no more than five minutes of mastering to sound acceptable.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #d56d2e;text-align: left;font-family:Source Sans Pro;font-weight:700;font-style:italic" class="vc_custom_heading" >Next instalment: Dynamic Perspective. Coming soon…</h2><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1699314534051 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">EQUAL LOUDNESS COMPENSATION</span></strong></h4>
<p class="p3"><span class="s1">In the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">second instalment</a></strong></span> of this six-part series we looked at the Equal Loudness Contours and saw how our hearing’s sensitivity to different frequencies changes with loudness. Here are those Equal Loudness Contours again…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="22-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">As we learnt in the second instalment, reducing a sound’s SPL means our hearing becomes less sensitive to its lower and higher frequencies compared to its mid frequencies. This doesn’t only affect our perception of the overall mix’s tonality, it also affects our perception of individual sounds <i>within</i> a mix. If a sound has been put into the correct tonal perspective with the other sounds but <i>then</i> turned down significantly in the mix to play an atmospheric background role, it has been shifted down to a lower Equal Loudness Contour than the other sounds. It will therefore sound duller and lacking in low frequencies compared to the rest of the mix; it is no longer in the same <i>tonal perspective</i> and will easily get lost at times. A small EQ boost in the very high frequencies (above 8kHz) and the low frequencies (below 250Hz) can help these lost sounds remain clear and audible within the mix while retaining their tonal perspective. If an individual sound in the mix is intended to fade out to silence, consider automating a small high and low frequency boost that subtly <em>increases</em> as the sound’s level <em>decreases</em> in order to maintain its clarity as it fades out.</span></p>
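The level-dependent compensation boost described above can be sketched in a few lines of code. This is a minimal illustration, not a formula from the Equal Loudness Contours themselves: the function names, the linear ramp, and the 20dB reference attenuation are all assumptions chosen for clarity.

```python
def loudness_comp_boost(fader_atten_db, max_boost_db=3.0, ref_atten_db=20.0):
    """Illustrative shelf boost (in dB) for the <250Hz and >8kHz regions
    of a sound that has been turned down in the mix. The boost ramps
    linearly from 0dB (no attenuation) up to max_boost_db once the sound
    sits ref_atten_db below its original level. All values hypothetical."""
    atten = max(0.0, fader_atten_db)       # ignore sounds at or above unity
    frac = min(atten / ref_atten_db, 1.0)  # 0..1 progress along the fade
    return max_boost_db * frac

# A background part pulled 10dB down earns half the maximum
# compensation boost at both spectrum extremes:
print(loudness_comp_boost(10.0))  # 1.5
```

Automating the fader and this boost together keeps a fading sound clear without changing its tonal perspective relative to the rest of the mix.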
<h4><strong>Radiant Fade Away</strong></h4>
<p>If we are working on a mix that has a long fade out – the kind where the music has been recorded beyond the intended fade out – we can take Equal Loudness Compensation one step further by applying a subtly increasing boost of high and low frequencies (i.e. fractions of a dB below 250Hz and above 8kHz) over the mix bus for the duration of the fade out. This maintains the mix<span class="s1">’</span>s tonal perspective and clarity all the way down to silence, and can have an excellent effect when the intention is to <em>fade out</em> the mix rather than <em>dull out</em> the mix.</p>
<p>The EQ curve shown below, based on the Equal Loudness Contours shown throughout this series, is the compensation curve required for a mix made at 80 Phons (the recommended monitoring level for mixing) to sound tonally correct if replayed at the Threshold of Audibility (0 Phons, or silence).</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="746" height="270" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="23-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi.jpg 746w, https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi-600x217.jpg 600w" sizes="(max-width: 746px) 100vw, 746px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The concept is simple: we apply an EQ curve like this over the mix bus, and automate it through the duration of the fade out (automating the EQ<span class="s1">’</span>s <em>blend</em> or <em>mix</em> control) so that all of the EQ settings are at 0dB at the start of the fade but have reached the levels shown on the curve by the end of the fade – as shown in the illustration below. This maintains a more consistent tonal balance as the mix <em>fades out</em> rather than <em>dulls out</em>. The mix continues to shine, all the way down to silence.</p>
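The blend automation described above can be sketched as a simple ramp. This is only a sketch under stated assumptions: the linear ramp shape, the function names, and the example times and gains are all illustrative, not part of any particular DAW's API.

```python
def eq_blend(t, fade_start, fade_end):
    """Blend/mix automation for the mix-bus compensation EQ: 0.0 (EQ
    effectively bypassed) at the start of the fade, 1.0 (full curve) by
    the time the fade reaches silence. Linear for simplicity; a real
    automation lane could use any shape."""
    if t <= fade_start:
        return 0.0
    if t >= fade_end:
        return 1.0
    return (t - fade_start) / (fade_end - fade_start)

def effective_gain_db(t, fade_start, fade_end, target_gain_db):
    """Scale a band's target gain (taken from the compensation curve)
    by the current blend value."""
    return target_gain_db * eq_blend(t, fade_start, fade_end)

# Halfway through a fade running from 180s to 200s, a +4dB high-shelf
# target is applied at +2dB:
print(effective_gain_db(190.0, 180.0, 200.0, 4.0))  # 2.0
```

Every band of the compensation EQ scales together from a single blend control, which is what makes automating the EQ's blend/mix parameter simpler than automating each band individually.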

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="826" height="650" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="24-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi.jpg 826w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-800x630.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-768x604.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-600x472.jpg 600w" sizes="(max-width: 826px) 100vw, 826px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The <span class="s1">‘</span>radiant fade away<span class="s1">’</span> technique keeps hooks and choruses audible for longer throughout the fade out, maintaining the listener<span class="s1">’s attention and keeping the music alive in their mind long after the end. Because the mix faded out but never <em>dulled out</em> as most mixes do, it doesn’t follow the traditional ‘end of song’ tonal trajectory.</span></p>
<p><span class="s1">As with all long fade outs, we must always keep it musically timed – meaning the last <em>clearly audible and identifiable note</em> at the end of the fade-out is also the last note of a measure, and the fade out reaches silence just before the first note of the next measure begins.</span></p>
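Timing the fade to the measure, as described above, is simple arithmetic from the song's tempo. The helper below is hypothetical (its name, parameters, and the 10ms margin before the downbeat are illustrative assumptions), but the maths follows directly from beats-per-minute and beats-per-bar.

```python
def fade_out_times(tempo_bpm, beats_per_bar, fade_bars):
    """Sketch of a musically timed fade-out: returns (duration of one
    bar, total fade length) in seconds, with the fade planned to reach
    silence a hair before the downbeat of the bar after the last
    clearly audible note. Margin and names are illustrative."""
    beat_s = 60.0 / tempo_bpm
    bar_s = beat_s * beats_per_bar
    # reach silence ~10ms before the next bar's first note would land
    fade_len_s = fade_bars * bar_s - 0.01
    return bar_s, fade_len_s

# At 120bpm in 4/4, a four-bar fade lasts just under 8 seconds:
bar_s, fade_len_s = fade_out_times(tempo_bpm=120, beats_per_bar=4, fade_bars=4)
print(bar_s, round(fade_len_s, 2))  # 2.0 7.99
```

Working the fade length out this way, rather than by eye, makes it easy to keep the end of the fade locked to the bar line at any tempo.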

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1699314690493 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">THE MOST IMPORTANT SOUND IN THE MIX</span></strong></h4>
<p class="p3"><span class="s1">There is often a difference between what the composers, musicians, producers and sound engineers think is the most important sound in a mix, and what the listeners think is the most important sound in the mix.</span></p>
<h4 class="p3"><strong><span class="s1">Significance</span></strong></h4>
<p class="p3"><span class="s1">Ask someone who isn’t a composer or musician to sing their favourite song. They will sing the vocal lines, and in between they will mimic drum fills, instrumental solos, echo effects or <i>whatever</i> grabs and holds their attention in between the vocal lines. However, they will always jump straight back to the next vocal line <i>without missing a word</i>, meaning the vocal takes a higher priority in their perception than anything else in the mix. They’re the people <i>buying</i> the music, and they don’t care what the composer, musician, producer or engineer thought was the most important sound when starting the mix. All that matters to the music consumer is what sticks in their mind, what they look forward to hearing again, and what ultimately pushes them to a purchasing decision. This ‘sing your favourite song’ exercise tells us a lot about which parts of a mix are the most important to the listener. If there are vocals, it is invariably the vocals…</span></p>
<h4 class="p3"><strong><span class="s1">Insignificance</span></strong></h4>
<p class="p3"><span class="s1">Always remember that only guitarists, drummers and sound engineers <i>actually care</i> about how great the guitar or snare sound is, and only guitarists, drummers and sound engineers buy recordings simply because they have a great guitar or snare sound. To everyone else those things are just another component of the mix with varying levels of importance. They’re not worth sacrificing the first hour of a three hour mix session for; as long as they serve their role in the music without distraction, listeners will simply assume that the sounds heard in the mix are the sounds the artist intended. In contrast, the voice is something <i>everyone</i> can play (whether singing or talking), and <i>all</i> listeners will notice a poor vocal sound. Spending the first hour of a three hour mix getting the vocal right is a smarter use of time than spending it on the guitar or snare sound.</span></p>
<p class="p3"><span class="s1">The same logic and thinking can be applied to instrumental music; focus on what holds the listener’s attention, and make sure there is <i>always</i> something to hold the listener’s attention – if there’s nothing in the music at a given time, fill the space with an echo or similar effect. It is up to the composer and musicians to provide the notes, and the engineer to deliver those notes with clarity and separation while also using the gaps between the notes as required and/or appropriate.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Mixing With Headphones 4</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Producing Gretta Ray’s Vocals</title>
		<link>https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals</link>
					<comments>https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals#respond</comments>
		
		<dc:creator><![CDATA[Christopher Holder]]></dc:creator>
		<pubDate>Tue, 21 Nov 2023 00:34:14 +0000</pubDate>
				<category><![CDATA[Issue 91]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[Gretta Ray]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Positive Spin]]></category>
		<category><![CDATA[recording]]></category>
		<category><![CDATA[vocals]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=79508</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals">Producing Gretta Ray’s Vocals</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_animate_when_almost_visible wpb_fadeInUp fadeInUp wpb_column vc_column_container vc_col-sm-2 vc_col-has-fill"><div class="vc_column-inner vc_custom_1700459916197"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="400" height="400" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/Positive-Spin_Album.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="Positive-Spin_Album" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/Positive-Spin_Album.jpg 400w, https://www.audiotechnology.com/wp-content/uploads/2023/11/Positive-Spin_Album-300x300.jpg 300w, https://www.audiotechnology.com/wp-content/uploads/2023/11/Positive-Spin_Album-100x100.jpg 100w" sizes="(max-width: 400px) 100vw, 400px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p style="text-align: center;"><strong>Artist:</strong> Gretta Ray<br />
<strong>Album:</strong> <em>Positive Spin</em></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p class="p1">&#8216;Positive Spin&#8217; is a pop album that provides a fitting platform for Gretta Ray’s stunning vocal delivery. Her lyrics are intricate, personal and presented in pristine detail. The vocal production needs to keep up. Gretta’s close-knit production team had a choice: go hard in a commercial studio for three or four days, or take a more leisurely route from the team’s production suites in Collingwood. They chose the more chill approach, but it didn’t come without its challenges. Producer/engineer Hamish Patrick picks up the story:</p>
<h4 class="p1"><strong>THE BRIEF</strong></h4>
<p class="p1">Hamish Patrick: Gretta’s lyrics are everything. She’s invested so much into the emotion of those lyrics. So being comfortable and delivering those lyrics with full emotional honesty is really important.</p>
<p class="p1">Gretta also wanted this album to be a full-blown pop album, which meant <cite><strong style="background: #ebbeb9; color: #000000;">the vocals would be bright, loud, compressed and impossible to ignore.</strong></cite> As a production team, we knew we would need to produce a vocal that sounded as consistent as possible — no variation in tonal quality within a song or between songs. In other words, we had to lock down every possible source of variation. It would start with the recording space.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_row-o-equal-height vc_row-flex"><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-6 vc_col-has-fill"><div class="vc_column-inner vc_custom_1701059581733"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="747" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/1.-Full-session-Light-On-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="1.-Full-session---Light-On-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/1.-Full-session-Light-On-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/1.-Full-session-Light-On-pichi-800x415.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/1.-Full-session-Light-On-pichi-768x398.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/1.-Full-session-Light-On-pichi-600x311.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="aio-icon-component    style_1"><div id="Info-box-wrap-9941" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-9941 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">FULL VOCAL SESSION FOR ‘LIGHT ON’</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-9941 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">One of the most intensely edited and arranged tracks on Positive Spin. This is the very end of the process, where all Melodyne instances have been committed, and the top overlapping lead vocal tracks from individual sections (verse, chorus, etc) have been committed into one track for export to the mixer (LV PRINT). BVs have been consolidated into as few tracks as possible to make life easier for the producer, and tuning is committed without any specific processing beyond some RX processing (de-clicking and de-plosive in this case).</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft wpb_column vc_column_container vc_col-sm-6 vc_col-has-fill"><div class="vc_column-inner vc_custom_1701059587746"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="747" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/2.-Lead-Vocal-comp-example-Light-On-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="2.-Lead-Vocal-comp-example---Light-On-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/2.-Lead-Vocal-comp-example-Light-On-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/2.-Lead-Vocal-comp-example-Light-On-pichi-800x415.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/2.-Lead-Vocal-comp-example-Light-On-pichi-768x398.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/2.-Lead-Vocal-comp-example-Light-On-pichi-600x311.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="aio-icon-component    style_1"><div id="Info-box-wrap-1166" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-1166 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">LEAD VOCAL BEFORE COMMITTING</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-1166 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">A combination of many different takes that have been comped together, faded, breath edited and volume levelled manually. Once these edits are done, I use Celemony’s Melodyne Studio to pitch edit each phrase to taste. One thing that’s made that a lot easier for big projects like this is Pro Tools’ new ARA integration, which allows you to access Melodyne right inside Pro Tools’ main window, making quick edits and aligning harmonies much easier. This was a huge time saver for Positive Spin – it meant I could swap out takes and make edits without having to re-import everything into Melodyne!</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><strong>THE ROOM</strong></h4>
<p class="p1">We were recording Gretta’s vocals in my production suite. It’s a space that Gretta is familiar with and comfortable in. It was originally designed as a drum room, so there’s acoustic treatment in there but not to the degree that you’d treat a vocal booth.</p>
<p class="p1">We moved the microphone around and tried different spots to find a position that didn’t accentuate the bad stuff. Once we determined where that was, we taped an ‘X’ on the floor and didn’t move from that position. Like most home recording setups, the room had a little bit of flutter and a little bit of background noise. In our case, we also had people walking past the studio door occasionally.</p>
<h4 class="p1"><strong>THE GEAR</strong></h4>
<p class="p1">For consistency’s sake we wanted to settle on one microphone for all the vocal recordings. We didn’t have a bunch of expensive mics to choose from. We settled on a Chandler TG mic. It’s certainly not a budget mic but it’s not out-of-reach either. Gretta has an amazing voice and really good control. But the biggest challenge we found was the 2 to 5kHz range, which can be harsh. We were trying to find a mic that gave a tape-like roll-off in that frequency area.</p>
<p class="p1">We didn’t have racks of preamps to choose from. We recorded the vocals through a pretty standard UA Apollo chain — nothing out of the ordinary. We didn’t track through any plug-ins on the way in. We thought about recording with some gentle compression, but decided recording raw would give us more options later.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059632524 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="766" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/3.-Lead-vocal-Melodyne-example-Dear-Seventeen-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="3.-Lead-vocal-Melodyne-example---Dear-Seventeen-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/3.-Lead-vocal-Melodyne-example-Dear-Seventeen-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/3.-Lead-vocal-Melodyne-example-Dear-Seventeen-pichi-800x426.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/3.-Lead-vocal-Melodyne-example-Dear-Seventeen-pichi-768x409.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/3.-Lead-vocal-Melodyne-example-Dear-Seventeen-pichi-600x319.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-4285" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-4285 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">PRO TOOLS ARA INTEGRATION</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-4285 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">Here’s the final tuning of the lead vocal for ‘Dear Seventeen’. I do this manually to make things feel as natural as possible (especially for leads), and I try to avoid using the macros, instead tuning specific phrases as required. Especially with an amazing singer like Gretta, who’s grown up listening to tuned pop music, it’s remarkable how ‘AutoTuned’ her natural voice often sounds. Instead, I focus on fixing any small moments or nudging phrases that detract from the performance. BVs are usually more felt than heard, so it’s important they closely match the lead in this type of pop music. I will generally tune these more aggressively, using Melodyne’s overlapping track feature to make sure notes and note transitions match between leads and doubles or harmonies.</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight vc_custom_1701059646807 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="766" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/4.-Multiple-vocal-arrangement-Melodyne-Studio-Dear-Seventeen-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="4.-Multiple-vocal-arrangement---Melodyne-Studio---Dear-Seventeen-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/4.-Multiple-vocal-arrangement-Melodyne-Studio-Dear-Seventeen-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/4.-Multiple-vocal-arrangement-Melodyne-Studio-Dear-Seventeen-pichi-800x426.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/4.-Multiple-vocal-arrangement-Melodyne-Studio-Dear-Seventeen-pichi-768x409.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/4.-Multiple-vocal-arrangement-Melodyne-Studio-Dear-Seventeen-pichi-600x319.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-1408" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-1408 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">MELODYNING BVs</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-1408 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">Gretta’s harmony arrangements are quite intricate! But using this mode, you can quickly cross-reference two tracks within Melodyne to check timing and pitch alignment. It’s very, very useful! If you hold Option on a Mac and drag any bubble, you can move the pitch up or down in small increments for more subtle adjustments. For something more natural, aim for within 20 cents either side of the pitch centre. For super-pop stuff, I’d aim for within 10 cents. Most of the ‘tuned’ sound in Melodyne comes from the Pitch Modulation tool. At 100%, Melodyne isn’t affecting the natural pitch waver of the vocal, even if you’ve changed the note centre; for a super-pop sound, you can take it as low as 50-60%! For most things, I wouldn’t take it below 85% unless you want it to sound intentional. Another tip is to change the onset and offramp of notes by moving the cursor to the lines in between notes – this makes a huge difference to how natural your edits will sound. Finally, make sure you set some key commands to make your Melodyne experience easier! I personally use the number keys above the main keyboard for ease of access. I set 1 as the note selection tool (the normal mouse pointer), 2 as the pitch tool, 3 as the note separation tool, 4 as timing and 5 as amplitude. Once you get the muscle memory going, you’ll absolutely fly through edits!</p>
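<p class="p1">Those cent figures are easy to reason about numerically: a cent is one hundredth of a semitone, and the offset between a sung frequency and its target pitch is 1200 &times; log<sub>2</sub>(f/target). A quick sketch of that arithmetic (illustrative Python only; the 445Hz example note is hypothetical, and this is not how Melodyne itself is implemented):</p>

```python
import math

def cents_offset(freq_hz, target_hz):
    """Offset of a sung frequency from its target pitch, in cents.

    One semitone = 100 cents, one octave = 1200 cents.
    """
    return 1200 * math.log2(freq_hz / target_hz)

# Hypothetical example: an A4 target of 440Hz, sung at 445Hz,
# comes out roughly 20 cents sharp.
offset = cents_offset(445, 440)       # ~19.6 cents

# The rules of thumb above, expressed as tolerance checks:
natural_ok = abs(offset) <= 20        # 'natural' pop vocal
super_pop_ok = abs(offset) <= 10      # tightly tuned 'super-pop' vocal
```

<p class="p1">A note that passes the 20-cent check but fails the 10-cent one is the kind of small moment you might leave alone for a natural sound, but nudge for a super-pop production.</p>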
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><strong>THE PSYCHOLOGY</strong></h4>
<p class="p1">As a vocal producer it’s your job to coax the very best performance from the artist, and a large part of that is knowing when to push and when to back off. It’s the interpersonal stuff.</p>
<p class="p1"><cite><strong style="background: #ebbeb9; color: #000000;">These days studio engineers are expected to fix performances after the fact. Artists tend to have short attention spans now. They expect things to happen quickly and, often, budgets are tight and time is short.</strong></cite> Your job is to make the artist feel comfortable and make sure they can perform at their best.</p>
<p class="p1">Gretta needed a safe space to be emotionally vulnerable, but she’s also very hard on herself and needed support at points (even when she delivers a near-perfect performance!).</p>
<h4 class="p1"><strong>MIX ON THE RUN</strong></h4>
<p class="p1">Gretta is definitely someone who needs to hear a high-quality rough mix as she goes. I would create a good-sounding mix with some effects, processing, and some panning of backing vocals, so Gretta could get involved with the song as we listened back during the recording process. None of what I was doing would be used in the final mix; it was purely for listening. That’s another piece of advice I’d offer: artists these days won’t understand why they’re not hearing something ‘produced’ during the recording process; in fact, if they don’t, it could badly demoralise them.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059709392 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="747" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/8.-Effect-preset-example-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="8.-Effect-preset-example-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/8.-Effect-preset-example-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/8.-Effect-preset-example-pichi-800x415.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/8.-Effect-preset-example-pichi-768x398.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/8.-Effect-preset-example-pichi-600x311.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-2255" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-2255 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">VOCAL TEMPLATE FX</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-2255 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">This massively speeds up the process of making things sound vibey or useful. It’s important to make an artist feel comfortable, and having some effects that you can quickly draw upon to help with that is essential.</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight vc_custom_1701059726472 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="766" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/7.-BV-track-preset-example-Light-On-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="7.-BV-track-preset-example---Light-On-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/7.-BV-track-preset-example-Light-On-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/7.-BV-track-preset-example-Light-On-pichi-800x426.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/7.-BV-track-preset-example-Light-On-pichi-768x409.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/7.-BV-track-preset-example-Light-On-pichi-600x319.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="aio-icon-component    style_1"><div id="Info-box-wrap-3439" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-3439 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">LISTENING BACK TO TAKES</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-3439 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">The artist needs to hear something that puts the vocal in the world of the song. In this case, the chain was a simple RX Mouth De-Click module (which helps deal with small pops, clicks and mouth noises), Pro-Q3 (predominantly dealing with low and mid-range build-up in stacked harmonies), Auto-Tune Pro X for a quick tuning job (generally replaced later with the much more surgical Melodyne, but it helps make things sound ‘pop’ quickly and sounds much better than most of the competition), and RVox for some very basic compression and levelling.</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059745333 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="774" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/9.-Dear-Seventeen-final-vocal-prod-session-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="9.-Dear-Seventeen-final-vocal-prod-session-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/9.-Dear-Seventeen-final-vocal-prod-session-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/9.-Dear-Seventeen-final-vocal-prod-session-pichi-800x430.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/9.-Dear-Seventeen-final-vocal-prod-session-pichi-768x413.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/9.-Dear-Seventeen-final-vocal-prod-session-pichi-600x323.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><strong>MELODYNE FORENSICS</strong></h4>
<p class="p1">After about a day of recording vocals for a song I would spend about a day, or a day and a half, on Melodyning and editing everything. For this album, Gretta was after a processed pop sound, so everything is very, very closely tuned and edited. Most of the harmonies have at least four stacks – maybe a couple of BVs in the left and right, with the lead in the middle. There’s an amazing level of detail, so I’m doing my best to be just as detailed with breath editing and de-clicking – making sure the fades are all exactly right. I’ll also be forensic with the time editing, making sure all of the BVs and the lead move really closely together. If every BV double were slightly out of time relative to the others, we wouldn’t achieve that super-tight pop sound. You can read more about the pitch and timing edit process later in the article.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><strong>VOCAL CHAIN</strong></h4>
<p class="p1">The esses on Gretta’s vocal were the biggest challenge. We were making quite a bright-sounding, very loud pop record, so we would manually de-ess in addition to using de-esser plugins.</p>
<p class="p1">As far as plugins go, at the start of the chain was a more forensic EQ (like a FabFilter Pro-Q3), then a couple of stages of gentle compression – we generally used a UA LA-2A and an 1176. Gretta’s vocal doesn’t respond well to a single stage of more severe compression, so we’re better off having a couple of stages.</p>
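<p class="p1">The logic behind splitting the work across two gentle stages can be sketched numerically: gain reduction in dB adds up through a chain, and the second compressor sees an already-tamed signal, so each stage works far less hard than one severe stage reaching for the same overall control. A rough illustration using an idealised hard-knee static curve (the thresholds, ratios and levels below are hypothetical, not models of the LA-2A or 1176):</p>

```python
def static_gain_reduction_db(input_db, threshold_db, ratio):
    """Gain reduction (dB) of an idealised hard-knee compressor's static curve."""
    over = max(0.0, input_db - threshold_db)
    return over - over / ratio  # dB removed from the signal above threshold

# One severe stage: a -12dB peak into a -20dB threshold at 8:1 pulls off 7dB.
single = static_gain_reduction_db(-12, -20, 8)

# Two gentle stages in series: the second stage sees the already-tamed level,
# so each does only a couple of dB for a similar combined result.
stage1 = static_gain_reduction_db(-12, -18, 2)            # 3dB
stage2 = static_gain_reduction_db(-12 - stage1, -18, 3)   # 2dB on the tamed peak
total = stage1 + stage2                                   # 5dB combined
```

<p class="p1">Because each stage stays in its gentle range, the pumping and tonal side effects of heavy compression are largely avoided while the total dynamic control is comparable.</p>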
<p class="p1">I’m a big fan of the UA Pultec EQ-P1A, for a gentle presence boost or to cut some low/mid. It’s the sort of EQ plugin that forces you to use your ears and not your eyes, which I like.</p>
<p class="p1">At all times I’m aiming to finish with a vocal quality that sounds as close to the other songs as possible.</p>
<p class="p1"><cite><strong style="background: #ebbeb9; color: #000000;">Consistency is everything for this album. The goal was not to have a really organic vocal performance, but a consistent vocal that would cut through at all times.</strong></cite></p>
<p class="p1">Finally, we would often use the McDSP ML4 compressor/limiter over the vocal group. When you hear a low/mid build-up or the highs building up, the ML4 lets you control that sort of thing very effectively. It’s a really useful tool at the end of the vocal chain for controlling things when you have a lot of vocal elements coming together.</p>
<p class="p1">For backing vocals, the other producer on the album, Gab Strum, is a big fan of the Eventide H3000. He’s got a hardware unit and loves it. I tend to use the SoundToys MicroShift plugin or the plugin version of the H3000. Those widener plugins are great on BV stacks to make them more expansive in the mix. For the ‘oohs’ and ‘aahs’, those types of plugins are unstoppable. I’m a fan of EchoBoy for delays as well.</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-4097" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-4097 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >artists these days won’t understand why they’re not hearing something ‘produced’ during the recording process, in fact, if they don’t, it could badly demoralise them.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-8470" style="font-size:50px;"><div class="icon_description_text ult-responsive"  
data-ultimate-target='#Info-list-wrap-8470 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059822473 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="766" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/10.-Gretta-LV-chain-1-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10.-Gretta-LV-chain-1-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/10.-Gretta-LV-chain-1-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/10.-Gretta-LV-chain-1-pichi-800x426.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/10.-Gretta-LV-chain-1-pichi-768x409.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/10.-Gretta-LV-chain-1-pichi-600x319.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="766" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/11.-Gretta-LV-chain-2-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11.-Gretta-LV-chain-2-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/11.-Gretta-LV-chain-2-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/11.-Gretta-LV-chain-2-pichi-800x426.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/11.-Gretta-LV-chain-2-pichi-768x409.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/11.-Gretta-LV-chain-2-pichi-600x319.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1024" height="490" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/12.-Gretta-LV-chain-3-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12.-Gretta-LV-chain-3-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/12.-Gretta-LV-chain-3-pichi.jpg 1024w, https://www.audiotechnology.com/wp-content/uploads/2023/11/12.-Gretta-LV-chain-3-pichi-800x383.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/12.-Gretta-LV-chain-3-pichi-768x368.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/12.-Gretta-LV-chain-3-pichi-600x287.jpg 600w" sizes="(max-width: 1024px) 100vw, 1024px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-7260" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-7260 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">VOCAL CHAIN</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-7260 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">The chain starts with RX Mouth De-Click for dealing with mouth noises. One significant issue with my room and mic combo was the build-up of mid frequencies, so Pro-Q3 is dealing with that and some harshness. Both the LA-2A and 1176 compressors are working quite gently, aiming for no more than 2dB-ish of gain reduction. The UAD Pultec EQ is one of my favourite plugins on everything. Here, on its broadest setting, it’s dealing with some low/mids (which was very challenging) and the classic boost and attenuate trick at 8k. A second Pro-Q is handling some more specific problem frequencies, then RX De-Ess is dealing with some esses – these were manually edited too, so it’s not working very hard at all (1-2dB at the most). Finally, for a more ‘pop’ sound, Sonible’s SmartComp2 is doing a little more final compression of the whole thing.</div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight vc_custom_1701059850623 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="774" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/13.-Vocal-playlisting-tip-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13.-Vocal-playlisting-tip-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/13.-Vocal-playlisting-tip-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/13.-Vocal-playlisting-tip-pichi-800x430.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/13.-Vocal-playlisting-tip-pichi-768x413.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/13.-Vocal-playlisting-tip-pichi-600x323.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="aio-icon-component    style_1"><div id="Info-box-wrap-2643" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-2643 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">PLAYLISTING</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-2643 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">I use Pro Tools’ playlisting feature very heavily on all vocals. One of the biggest tips I learnt from working as mix engineer Tristan Hoogland’s assistant is to save versions as you work – being able to backtrack is super important! I’ve adapted this technique for vocal editing. As you can see from the screen grab, I’ll record takes of vocals on new playlists, as per normal. Once I’ve picked a take, I’ll duplicate the playlist. I’ll use Pro Tools’ ARA Melodyne integration to tune this take. Once it’s tuned, I’ll duplicate the take, commit the tuning and add ‘TUNE’ to the end of the playlist name. I’ll then duplicate this playlist, and add ‘ALIGN’ to the end of the name – this track will be the final layer, where breath and timing editing occurs. This allows me to backtrack. If I notice a tuning error, or the artist wants to use a different take later in the process, it’s easy to go back and fix anything, as I’ve got each stage of the process on a separate playlist.</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059865837 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_row-o-content-bottom vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="774" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/14.-BV-timing-editing-tips-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="14.-BV-timing-editing-tips-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/14.-BV-timing-editing-tips-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/14.-BV-timing-editing-tips-pichi-800x430.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/14.-BV-timing-editing-tips-pichi-768x413.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/14.-BV-timing-editing-tips-pichi-600x323.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="962" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/15.-VocAlign-Ultra-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="15.-VocAlign-Ultra-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/15.-VocAlign-Ultra-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/15.-VocAlign-Ultra-pichi-800x534.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/15.-VocAlign-Ultra-pichi-768x513.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/15.-VocAlign-Ultra-pichi-600x401.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-1358" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-1358 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">TIMING EDITING</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-1358 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">For this final stage, I’ll do a combination of manual timing editing (often using Pro Tools’ Nudge feature and lots of fades) and automatic timing editing using VocAlign Ultra. VocAlign is great and can save you a lot of time, but you simply must check its work manually – often it gets 80 percent of a phrase perfect but makes a monumental mess of one or two lines. For this reason, I always recommend working phrase-by-phrase with VocAlign rather than batch processing an entire track!</p>
<p class="p1">I can’t overstate how important this step is to making everything feel slick and professional. The biggest difference between a pro-sounding production and something rougher is usually timing editing. Taking the time and care to go through your tracks and make sure BVs are in time with the lead makes a truly huge difference. I always like to reference Ian Kirkpatrick’s work (Dua Lipa, Troye Sivan). The vocal editing is always phenomenal and shapes how the track hits you as a listener.</p>
<p class="p1">For this record, as we were recording in an imperfect room, I heavily edited breaths and gaps on most BV tracks. Trying to remove any additional noise is super helpful when going for a slick pop sound! You can see this in the screen grab – there are careful fades and breath edits that have been manually time aligned to the lead.</p>
<p class="p1">The other screenshot is of VocAlign Ultra. This is used in PT’s AudioSuite mode, where you can highlight a lead then a double or harmony and it will attempt to closely match their timing. It works pretty well most of the time!</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight vc_custom_1701059885931 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="774" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/16.-Dear-Seventeen-final-production-session-for-mix-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="16.-Dear-Seventeen-final-production-session-for-mix-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/16.-Dear-Seventeen-final-production-session-for-mix-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/16.-Dear-Seventeen-final-production-session-for-mix-pichi-800x430.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/16.-Dear-Seventeen-final-production-session-for-mix-pichi-768x413.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/16.-Dear-Seventeen-final-production-session-for-mix-pichi-600x323.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="aio-icon-component    style_1"><div id="Info-box-wrap-8549" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-8549 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">FINAL SESSION</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-8549 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">Here’s the final production session for ‘Dear Seventeen’, the track I produced on Positive Spin. Vocals are in yellow and pink down the bottom half of the session – as these were printed with processing from the vocal production session, there’s minimal processing in this one. Some additional Pro-Q instances are helping with any specific clashes with the instrumentation, and there are some additional effects – a combination of Altiverb, UAD Galaxy Tape and the SoundToys Little Plate that I’ve committed to audio for the mix engineer, Tristan Hoogland. As you can see with the BVs, I’ve condensed them down even further. Now that they’ve been tightly edited and I’m happy with the tuning and processing, I’ll consolidate them into as few tracks as possible.</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft vc_custom_1701059902317 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1440" height="705" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/17.-Final-vocal-sweet-sauce-in-mix-session-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="17.-Final-vocal-sweet-sauce-in-mix-session-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/17.-Final-vocal-sweet-sauce-in-mix-session-pichi.jpg 1440w, https://www.audiotechnology.com/wp-content/uploads/2023/11/17.-Final-vocal-sweet-sauce-in-mix-session-pichi-800x392.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/17.-Final-vocal-sweet-sauce-in-mix-session-pichi-768x376.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/17.-Final-vocal-sweet-sauce-in-mix-session-pichi-600x294.jpg 600w" sizes="(max-width: 1440px) 100vw, 1440px" /></div>
		</figure>
	</div>
<div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="aio-icon-component    style_1"><div id="Info-box-wrap-1861" class="aio-icon-box default-icon" style=""  ><div class="aio-icon-default"><div class="ult-just-icon-wrapper  "><div class="align-icon" style="text-align:center;">
<div class="aio-icon none "  style="color:#333;font-size:24px;display:inline-block;">
	<i class="icomoon-arrow-right-fill"></i>
</div></div></div></div><div class="aio-icon-header" ><h4 class="aio-icon-title ult-responsive"  data-ultimate-target='#Info-box-wrap-1861 .aio-icon-title'  data-responsive-json-new='{"font-size":"","line-height":""}'  style="">FAIRY DUST</h4></div> <!-- header --><div class="aio-icon-description ult-responsive"  data-ultimate-target='#Info-box-wrap-1861 .aio-icon-description'  data-responsive-json-new='{"font-size":"","line-height":""}'  style=""></p>
<p class="p1">A final little touch of de-essing, Goodhertz’s Faraday Limiter as a vocal leveller and UAD’s API strip taking off a very small amount of high presence before the mix bus!</p>
<p></div> <!-- description --></div> <!-- aio-icon-box --></div> <!-- aio-icon-component --></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><strong>MELODYNE MAESTRO</strong></h4>
<p class="p1">Here’s a final thought: you don’t need crazy-expensive tools to do a good job! Close editing and a basic copy of Melodyne Essential (A$129) will get you 90 percent of the results the pro engineers are achieving. The other tools I’ve mentioned make life just a little bit easier but you’ll get a similar result with your DAW’s bundled plugins.</p>
<p class="p1"><cite><strong style="background: #ebbeb9; color: #000000;">Remember: great-sounding vocals aren’t something you can rush.</strong></cite> You need to take your time to make the artist feel comfortable; take your time listening and doing the best job that you can! More often than not, the vocals are the focus – it’s where the humanity and emotion of the song will mostly come through. A good lead vocal comp will show the artist in the best light and make them feel great, and this just takes time and care. Once you’ve got a great lead vocal, making sure the BVs support the lead and help tell that story is the goal. Time, care and Melodyne will get you there!</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals">Producing Gretta Ray’s Vocals</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/producing-gretta-rays-vocals/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 3</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Sun, 29 Oct 2023 22:23:52 +0000</pubDate>
				<category><![CDATA[Issue 91]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 3]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=77130</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Mixing With Headphones 3</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the previous instalment of this six-part series we looked at the fundamental differences between mixing with headphones and mixing with speakers. We saw that speaker monitoring introduces a lot of variables to our mix because what we are hearing from the speakers is <i>not</i> what is coming out of the mixing console or DAW. The frequency response and distortion of our speakers have been embedded into it; the acoustics of our listening room have been superimposed upon it; and there might be comb-filtering issues due to reflections off nearby surfaces that have influenced our tonal decisions. Compensating for those variables during the course of a mixing session adds resilience to our speaker mixes and thereby improves how well they translate to other listening environments.</span></p>
<p class="p3"><span class="s1">None of those variables occur when monitoring with headphones, and therefore our headphone mixes don’t get the same resilience built into them – meaning they don’t translate to numerous playback situations as well as speaker mixes do. There are, however, a number of tools and hacks we can use to reveal and/or emulate those variables and compensate for them.</span></p>
<h4 class="p3"><span class="s1"><b>HEADPHONE MIXING TOOLS &amp; HACKS</b></span></h4>
<p class="p3"><span class="s1">In every discussion about audio metering devices and similar tools there’s always someone offering the seemingly well-intentioned advice of “just trust your ears”. Such platitudinous </span><span class="s2">nonsense</span><span class="s1">, comforting though it might be, always needs to be taken in context <i>before</i> being summarily dismissed with the same “you don’t need all of that stuff” gusto that accompanied it. Why?</span></p>
<p class="p3"><span class="s1">It usually comes from experienced people who have already made enough expensive and/or regrettable mistakes to know what to listen for, <i>and</i> who are working in situations that provide enough information to allow informed decision-making (ie. working in acoustically-designed control rooms fitted with big monitor speakers). They have also been receiving years of feedback from downstream mastering engineers and others, which has further refined their mixing skills. In other words, they have the right combination of equipment, experience and listening skills that allows them to trust what their ears are telling them.</span></p>
<p class="p3"><span class="s1">The same ‘feel good’ advice is often parroted by novices, wannabes and wish-casters who embraced it earlier and are diligently waiting for it to ‘kick in’ and prove true – until then, their ‘trust your ears’ mixes are deteriorating while their mastering engineer’s income is improving.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">How wonderful would it be if the entirety of audio engineering could be summed up with “just trust your ears”? There would be no need for all of the eye-glazing maths and physics; no need for the many thousands of words and illustrations in audio engineering textbooks; no need for audio courses; no need for acousticians; and no need for sound engineers. Audio engineering would be as intuitive as walking – it only gets hard if you think about how you are doing it.</span></p>
<p class="p3"><span class="s1">The people asking the questions that trigger the &#8216;just trust your ears&#8217; response don’t have the required combination of equipment, experience and listening skills to be <i>able</i> to trust what their ears are telling them – which is why they are asking such a question in the first place. Telling them to &#8216;just trust their ears&#8217; is misleading at best, and flexing at worst – especially if it is given in reference to mixing with headphones. No matter how much expertise the people offering such advice might have, they obviously don’t have the common sense required to properly contextualise the question and either provide a <em>meaningful</em> answer or STFU</span><span class="s1">. As has been repeated many times by numerous leading figures throughout history, “if your words are not better than silence, then be silent”.</span></p>
<p class="p3"><span class="s1">We’ve already established that a number of variables are missing in headphone monitoring that exist in speaker monitoring. This means we cannot simply &#8216;trust our ears&#8217; when mixing in headphones because our ears are not getting enough information to make reliable decisions. We can, however, benefit from tools that allow us to <i>see on a screen</i> what we <i>don’t hear in headphones</i> and thereby provide us with meaningful visual guidelines. Staying within those visual guidelines allows us to trust our ears for everything else, and hopefully make headphone mixes that translate well across <i>all</i> playback systems in the same way that a good speaker mix does.</span></p>
<p class="p3"><span class="s1">What do we need? Read on…</span></p>
<h4 class="p3"><span class="s1"><b>HEADPHONES &amp; FREQUENCY RESPONSE</b></span></h4>
<p class="p3"><span class="s1">The requirement for good headphones goes without saying, of course, for all of the frequency response and room acoustics reasons outlined in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>. A pair of contemporary headphones, voiced to the Harman curve or similar, should take care of the frequency response aspects of the translation problem and prevent any significant tonal surprises when a mix made on headphones is heard through speakers.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-7573" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7573 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Audio engineering would be as intuitive as walking…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-4647" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-4647 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="01-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">There are many headphones on the market that are suitable for mixing. As a generalisation, open-back headphones provide higher fidelity, especially at low frequencies, but closed-back headphones have the advantage when isolation is required.</span></p>
<p class="p3"><span class="s1">Headphones with active noise-cancellation are not recommended for mixing, and neither are wireless headphones. Active noise-cancelling headphones use polarity inversion and equalisation to reduce (ie. cancel) the audibility of background sounds (ie. noise). Wireless headphones use data compression algorithms to reduce the signal’s bitrate so it can be transmitted wirelessly without drop-outs and buffering issues. Although each technology provides an enjoyable <em>listening</em> experience, neither can be trusted for <em>mixing</em>.</span></p>
<p class="p3"><span class="s1">If you plan on mixing through the headphone socket of a laptop or similar portable device – rather than using an audio interface or a dedicated headphone amplifier – you’re going to need headphones with <i>high sensitivity</i> and <i>low impedance</i>. Why? Because they’re easier to drive to useful SPLs from the low voltage amplifiers found in battery-powered equipment such as mobile devices. To understand why, scroll down to ‘Impedance, Power, Sensitivity &amp; SPL’.</span></p>
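The arithmetic behind that advice can be sketched numerically. This is a hypothetical helper, not from the article: the function name and the example voltage, impedance and sensitivity figures are illustrative assumptions, not the specifications of any real amplifier or headphone model.

```python
# Hypothetical sketch: estimating headphone SPL from amplifier voltage,
# headphone impedance and sensitivity. Example values are assumptions.
import math

def estimated_spl(v_rms, impedance_ohms, sensitivity_db_mw):
    """SPL from drive voltage, impedance, and sensitivity (dB SPL per 1mW)."""
    power_mw = (v_rms ** 2 / impedance_ohms) * 1000.0  # P = V^2/Z, in milliwatts
    return sensitivity_db_mw + 10.0 * math.log10(power_mw)

# A portable-device-style output (~0.5V RMS) into low- vs high-impedance
# headphones of the same assumed sensitivity:
print(round(estimated_spl(0.5, 32, 102), 1))   # low impedance: noticeably louder
print(round(estimated_spl(0.5, 250, 102), 1))  # high impedance: quieter
```

The same voltage delivers far more power into the low-impedance load, which is why low impedance and high sensitivity matter when driving headphones from battery-powered devices.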
<h4 class="p3"><span class="s1"><b>6dB GUIDE &amp; FREQUENCY BALANCE</b></span></h4>
<p class="p3"><span class="s1">Despite the voicing methods described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span> that aim to reduce tonal discrepancies between headphones and speakers, when mixing on headphones it is still easy to get sidetracked towards making a mix that is too bright or too dull – especially if the first sounds introduced to the mix are too bright or too dull and the rest of the mix is built around them. How do we keep ourselves on track? This is where the 6dB guide can be helpful…</span></p>
<p class="p3"><span class="s1">Many EQ plug-ins offer a spectrum analyser, and <i>some</i> of those spectrum analysers offer a ‘6dB guide’. This appears as a diagonal line beginning at 0dB at 1kHz and descending at a rate of 6dB/octave as the frequency gets higher.</span></p>
<p class="p3"><span class="s1">If we listen to a number of well-engineered recordings while studying how their frequency spectrums compare to the 6dB guide, we’ll notice an interesting trend. Mixes that <i>sound like</i> they have a good balance of energy throughout the frequency spectrum tend to conform to the 6dB guide, as do direct-to-stereo purist recordings of acoustic music that are made with ‘accurate’ microphones (ie. those with a flat frequency response) and that are often described as sounding ‘natural’ or ‘pure’. Meanwhile, mixes that sound excessively bright will rise noticeably above the 6dB guide line, and mixes that sound excessively dull will fall noticeably below the 6dB guide line.</span></p>
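The comparison can be sketched in code. This is a rough illustration of the idea only (the function, and its single-frame unwindowed FFT, are simplifications assumed for this sketch; real analysers average many windowed frames): estimate the average spectral slope above 1kHz and compare it to the guide's -6dB/octave.

```python
# Assumed sketch of the 6dB-guide comparison, not any analyser's algorithm.
import numpy as np

def spectral_slope_db_per_octave(samples, sample_rate):
    """Fit the average spectral tilt (dB per octave) of the band above 1kHz."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sample_rate)
    band = (freqs >= 1000.0) & (spectrum > 0)
    octaves = np.log2(freqs[band] / 1000.0)   # distance above 1kHz in octaves
    level_db = 20.0 * np.log10(spectrum[band])
    slope, _ = np.polyfit(octaves, level_db, 1)
    # Near -6: conforms to the guide. Well above -6: bright. Well below: dull.
    return float(slope)
```

White noise (flat spectrum, slope near 0dB/octave) would read as ‘excessively bright’ against the guide, which matches how it sounds.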

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="866" height="549" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="02-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi.jpg 866w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-800x507.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-768x487.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-600x380.jpg 600w" sizes="(max-width: 866px) 100vw, 866px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Conforming to the 6dB guide does not guarantee that a mix has a good frequency balance, but it does offer a good point of reference – especially if used in conjunction with your musical reference track (see below).</span></p>
<h4 class="p3"><span class="s1"><b>Frequency Response Tools You Don’t Need</b></span></h4>
<p class="p3"><span class="s1">There are currently a number of devices and apps on the market that use DSP to ‘correct’ the frequency response and other sonic characteristics of numerous headphones. The idea is simple: enter the make and model of the headphones into the app and – assuming the manufacturer has already created a profile for those headphones – a compensating process will be inserted into the listening path to make the headphones sound ‘right’, or perhaps even make them sound like more expensive headphones.</span></p>
<p class="p3"><span class="s1">At best, such listening tools are just one more thing in the monitoring path affecting our decision making. The idea of monitoring equalisation and DSP correction has validity in the sound reinforcement world and also <em>debatably</em> in the recording studio world, which are both cases where room acoustics issues can be compensated for. As we’ve previously established, room acoustics problems don’t exist with headphones. I</span><span class="s1">t’s reasonable to assume that long-established professional headphone manufacturers like AKG, BeyerDynamic, Sennheiser (which also owns Neumann) et al know what they’re doing. Their contemporary headphones reflect decades of refinement as detailed in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span> of this series, and shouldn’t need any compensation.</span></p>
<p class="p3"><span class="s1">It’s also worth remembering the difference between <i>listening</i> with headphones and <i>mixing</i> with headphones. This important difference is often overlooked by engineers when choosing headphones. As with choosing studio monitors, it is not enough to simply listen to how well they reproduce music – the real test is how well they help us make good mixes. What they reveal about our mix decisions is more important than how much enjoyment they offer. Most of the DSP-based headphone correction tools are intended to provide an improved listening experience for audiophiles, <i>not</i> create a more revealing mixing environment. Any headphones that need equalisation to make them suitable for mixing are the wrong choice to begin with.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="690" height="590" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi.jpg 690w, https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi-600x513.jpg 600w" sizes="(max-width: 690px) 100vw, 690px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">An interesting tool that sits somewhere between the 6dB guide mentioned earlier and the ‘correction’ equalisation mentioned above is one that performs a spectral analysis of our mix, compares it to the typical frequency spectrum of reference mixes of the same genre, and advises which parts of the mix’s spectrum need more or less energy when compared to the spectrums of the references. This is a very useful tool for people mixing on affordable nearfield monitors that don’t reliably reproduce much below 80Hz and who are working in rooms without much acoustic design or treatment, and are therefore literally ‘flying blind’ when working with low frequencies. However, with good headphones and an appropriate reference track for the genre (as discussed in ‘Reference Tracks’, below) this type of tool shouldn’t be necessary because, from a spectral point of view, headphones voiced to the Harman target or similar provide a situation where we <i>can</i> trust what we’re hearing.</span></p>
<h4 class="p3"><span class="s1"><b>PHASE &amp; INTERAURAL CROSSTALK</b></span></h4>
<p class="p3"><span class="s1">Since the beginning of stereo headphone listening there have been <i>crosstalk generators</i>: circuits and algorithms that attempt to recreate the sensation of speaker listening by introducing interaural crosstalk to headphone listening. Their mere existence confirms what we’ve already seen throughout this series: speaker listening adds a number of variables that don’t exist with headphone listening. As we also saw in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>, compensating for those variables makes our mixes more resilient and thereby offers better translation across a wider range of playback systems. If we’re trying to re-introduce those variables to our mixes in order to compensate for them, it makes sense to use a crosstalk generator in our monitoring path.</span></p>
<p class="p3"><span class="s1"><i>Or does it?</i></span></p>
<p class="p3"><span class="s1">No. For our purposes we’re not interested in the interaural crosstalk itself – we’re interested in the <i>effect</i> it has on our mixing decisions. We can find that out by using a goniometer and a mono switch&#8230;</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="690" height="590" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="04-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi.jpg 690w, https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi-600x513.jpg 600w" sizes="(max-width: 690px) 100vw, 690px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><span class="s1"><b>Goniometer &amp; Phase Correlation</b></span></h4>
<p class="p3"><span class="s1">The goniometer provides a visual indication of polarity and phase differences between the left and right channels of a stereo mix, hence it is often referred to as a <i>phase scope</i>, a <i>phase meter</i> or a <i>phase correlation meter</i> – although the latter term usually refers to a much simpler meter that has a linear scale from -1 to +1, as shown below:</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="837" height="284" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi.jpg 837w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-800x271.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-768x261.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-600x204.jpg 600w" sizes="(max-width: 837px) 100vw, 837px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">If the correlation indicator spends a lot of time between -1 and zero it means there are serious phase and/or polarity issues in the mix; those sorts of problems would probably be audible in speaker listening (due to interaural crosstalk) but can be very hard to notice in headphones.</span></p>
<p class="p3"><span class="s1">The phase correlation meter shown above is <i>almost but not quite</i> as helpful as the goniometer for our purposes: it shows the total correlation of the left and right channel signals, but its one-dimensional display and slower weighting prevent us from easily seeing into the mix and finding out which individual signals are correlating and which signals are not. So we’re back to the goniometer, which moves fast enough and in enough dimensions for us to identify individual sounds within the mix.</span></p>
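For illustration, the meter's -1 to +1 reading is essentially a normalised cross-correlation of the two channels over a short window. This is a minimal assumed implementation, not any particular plug-in's algorithm:

```python
# Minimal sketch of a phase correlation reading (assumed implementation).
import numpy as np

def phase_correlation(left, right):
    """Return correlation in [-1, +1]: +1 = identical channels (fully
    mono-compatible), 0 = unrelated, -1 = identical but polarity-inverted."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    if denom == 0:
        return 0.0
    return float(np.sum(left * right) / denom)

t = np.linspace(0, 1, 48000, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)
print(round(phase_correlation(sig, sig), 6))    # 1.0: centred mono source
print(round(phase_correlation(sig, -sig), 6))   # -1.0: polarity-inverted channel
```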
<p class="p3"><span class="s1">A small dot, typically green or blue (a nod to the cathode ray screens used in early goniometers), is moved around the screen using the instantaneous magnitudes and polarities of the left and right audio signals as rectangular coordinates – rather like a high speed game of Battleship but where ‘0,0’ is the centre of the board. The rapidly moving dot leaves a momentary trail of light, or <i>trace</i>, behind it that is sometimes referred to as a ‘Lissajous figure’ or ‘Lissajous curve’. It provides helpful insights into the instantaneous polarity and phase relationships of the left and right channels of our mix and how they might interact due to interaural crosstalk, but <i>only</i> if we know how to interpret it. Here’s how…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="732" height="731" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi.jpg 732w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-300x300.jpg 300w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-600x599.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-100x100.jpg 100w" sizes="(max-width: 732px) 100vw, 732px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p><span class="s1">The goniometer’s display is divided into four equal-sized quarters, or ‘quadrants’, as shown above. </span><span class="s1">The top and bottom quadrants (shaded in green) represent points in the mix when the two channels have the same polarity and will combine <i>constructively</i> – in other words, if we added their amplitudes together the resulting magnitude would be higher than the higher of the two individual channel magnitudes at that point in time. When the trace (represented here as a blue dot in the centre) is in either of these quadrants it means both channels are simultaneously pushing the signal towards us or pulling it away from us, working together to create a very stable phantom image with better impact.</span></p>
<p><span class="s1">The side quadrants (shaded in red) represent moments in the mix when the two channels have opposing polarities and will combine <i>destructively</i> – in other words, if we added their amplitudes together the resulting magnitude would be lower than the higher of the two individual channel magnitudes at that point in time. When the trace is in either of these quadrants it means one channel of the stereo mix is pushing the signal towards us while the other channel is pulling it away from us, resulting in a vague phantom image without much impact.</span></p>
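The quadrant logic can be sketched in a few lines. The 45-degree rotation of the left/right axes is a typical goniometer convention assumed for this sketch; the function names are illustrative:

```python
# Assumed sketch of goniometer quadrant mapping (typical 45-degree rotation).
import math

def goniometer_point(left, right):
    """Map an instantaneous (left, right) sample pair to screen coordinates."""
    x = (right - left) / math.sqrt(2)   # horizontal axis: channel difference
    y = (left + right) / math.sqrt(2)   # vertical axis: channel sum
    return x, y

def in_side_quadrant(left, right):
    """True when the channels oppose each other (the destructive red region)."""
    x, y = goniometer_point(left, right)
    return abs(x) > abs(y)

print(in_side_quadrant(0.5, 0.4))    # False: same polarity, top/bottom quadrant
print(in_side_quadrant(0.5, -0.4))   # True: opposing polarity, side quadrant
```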

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When the dot ventures into either of the side quadrants it means there is an instantaneous polarity difference between the two channels due to either a polarity difference or a significant phase difference between two or more sounds within the mix. That’s <i>exactly</i> the kind of problem we need the goniometer to expose because it is difficult to identify when mixing in headphones but is readily noticeable when heard through speakers – assuming we know what to listen for. If a significant portion of the mix ventures into the side quadrants of the goniometer we should check the mix in mono through headphones or through a stereo speaker system; if there is a clearly audible problem in mono then we need to find the cause and fix it.</span></p>
<p class="p3"><span class="s1">Note that many reverberation and similar stereo time-based effects will create phase and polarity differences between channels as part of their effect, and in these cases it is up to us to decide whether that is a problem or not. If we mute and un-mute the effect repeatedly while watching the goniometer we should be able to identify what is going on and make that judgement.</span></p>
<p class="p3"><span class="s1">The illustration below shows a number of goniometer displays and what they mean…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1012" height="373" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi.jpg 1012w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-800x295.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-768x283.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-600x221.jpg 600w" sizes="(max-width: 1012px) 100vw, 1012px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">If we spend enough time observing different professionally mixed and mastered recordings on the goniometer while listening through speakers, we will notice an interesting trend. Mixes that stay mostly in the upper and lower quadrants will tend to sound clean and solid due to their good stereo correlation (where both channels are reinforcing each other), and will probably not change significantly when summed to mono. Mixes that have a lot of information in the side quadrants will tend to sound messy and vague due to their low stereo correlation (where the channels are diminishing each other), and will change significantly when summed to mono. [The descriptive terms given above may seem over-dramatic, but they will make sense to anyone who has spent enough time watching a goniometer while listening to many different recordings: mixes with good stereo correlation leave a different sonic fingerprint than mixes with poor stereo correlation.]</span></p>
<p class="p3"><span class="s1">The top quadrant of the goniometer also serves as a panning meter, as shown below. A single mono sound source panned hard left will appear as a diagonal line from the upper left to the lower right of the screen. Conversely, a single mono sound panned hard right will appear as a diagonal line from the lower left to the upper right of the screen. A sound panned to the centre will be a vertical line from top to bottom. If you pan a mono sound source from left to right, you should see a single straight line rotating from hard left (45° left of centre) to hard right (45° right of centre) on the goniometer.</span></p>
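The panning behaviour above can be sketched as follows. A constant-power pan law and the same 45-degree goniometer axis convention are assumptions made for this illustration:

```python
# Illustrative sketch: where a panned mono source lands on the goniometer,
# expressed as an angle from vertical (0 = centre, -45 = hard left,
# +45 = hard right). Constant-power pan law assumed.
import math

def pan_gains(pan):
    """pan in [-1 (hard left) .. +1 (hard right)], constant-power law."""
    angle = (pan + 1) * math.pi / 4   # 0..pi/2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

def goniometer_angle_deg(left_gain, right_gain):
    """Angle of the trace line from vertical; negative = leaning left."""
    x = (right_gain - left_gain) / math.sqrt(2)
    y = (right_gain + left_gain) / math.sqrt(2)
    return math.degrees(math.atan2(x, y))

for pan in (-1.0, 0.0, 1.0):
    gl, gr = pan_gains(pan)
    print(round(goniometer_angle_deg(gl, gr)))  # -45, 0, 45
```

Sweeping `pan` from -1 to +1 rotates the line smoothly from 45° left of vertical to 45° right, which is exactly the behaviour described above.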

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1012" height="555" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi.jpg 1012w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-800x439.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-768x421.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-600x329.jpg 600w" sizes="(max-width: 1012px) 100vw, 1012px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When mixing, it is always worthwhile soloing the individual tracks/channels and checking them on the goniometer. If all of the individual sounds (mono or stereo) stay within the upper and lower quadrants, and the only things that enter the side quadrants are spatial effects like reverberation, room mics and similar that <em>rely</em> on phase differences or arrival time differences between channels to create their effect, your mix is probably going to translate well to speakers and other headphone systems.</span></p>
<p class="p3"><span class="s1">The goniometer is particularly helpful when setting up drum overheads using two widely spaced microphones. If you solo the overhead mics (while panned hard left and hard right) and alter the spacing of the two microphones just enough to reduce the amount of signal getting into the side quadrants, the overall drum mix will benefit when heard through speakers or summed to mono because the overheads are reinforcing the overall drum sound rather than diminishing it.</span></p>
<h4 class="p3"><span class="s1"><b>Mono Switch</b></span></h4>
<p class="p3"><span class="s1">The mono switch can be very helpful for making a ‘worst case’ version of your mix and highlighting (if not <i>exaggerating</i>) any interaural crosstalk problems that might exist when the mix is heard through speakers.</span></p>
<p class="p3"><span class="s1">Most mixing consoles – whether hardware or software – include a mono switch that sums the stereo bus to mono. If not, it will be available on a plug-in that you can insert over the stereo mix bus and switch on and off as desired.</span></p>
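Summing to mono is simply averaging the two channels, and a crude automated check can flag cancellation. This is a minimal assumed sketch; the idea of comparing RMS levels is illustrative, not a standard metering practice:

```python
# Minimal sketch of a mono-compatibility check (assumed approach).
import numpy as np

def mono_drop_db(left, right):
    """dB change of the mono sum relative to the average stereo channel RMS.
    A large negative value suggests phase/polarity cancellation in mono."""
    mono = 0.5 * (left + right)   # the mono switch: sum and halve
    stereo_rms = np.sqrt(0.5 * (np.mean(left ** 2) + np.mean(right ** 2)))
    mono_rms = np.sqrt(np.mean(mono ** 2))
    return 20.0 * np.log10((mono_rms + 1e-12) / (stereo_rms + 1e-12))

t = np.linspace(0, 1, 48000, endpoint=False)
sig = np.sin(2 * np.pi * 440 * t)
print(round(mono_drop_db(sig, sig), 1))   # 0.0: identical channels survive mono
print(mono_drop_db(sig, -sig) < -60)      # True: inverted channel cancels in mono
```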
<h4 class="p3"><span class="s1"><b>DESKTOP MONITORS</b></span></h4>
<p class="p3"><span class="s1">Throughout this series we’ve discussed how to mix on headphones and thereby avoid the need for big studio monitors and the acoustic treatments required to make the most of them. We’ve discussed the ‘variable compensations’ that intrinsically happen when mixing on speakers but not when mixing on headphones, and we’ve discussed ways of emulating and/or building them into our headphone mixes.</span></p>
<p class="p3"><span class="s1">If we <i>really</i> want to make ‘market relevant’ headphone mixes that also translate well to speaker playback, it makes sense to have some speakers in our monitoring chain as a cross-referencing tool. They don’t need to be expensive big monitors with a flat frequency response and good low frequency extension, and they don’t need to be super accurate – headphones easily satisfy all of those requirements at a fraction of the price of big monitors and their associated room acoustic treatments. The main things the desktop monitors need to do are reveal how the individual sounds in our mix will interact with each other when combined in the air, while also confirming panning decisions, and helping us to find the right balance for reverbs and other spatial effects that are difficult to judge in headphones. This means the main requirement for the desktop monitors is to image well, and few speakers image as well as single wide-range drivers in small enclosures such as those offered by Auratone, Grover Notting et al…</span></p>
<p class="p3"><span class="s1">When configured in an equilateral triangle with the listener, as detailed in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>, and with an appropriate absorptive material on our work surface to minimise comb filtering due to first order reflections off the work surface, these small speakers can provide a remarkably useful spatial reference for checking panning, reverberation levels and other spatial decisions that are difficult to judge on headphones. In essence, they fill in the gaps between headphone mixing and speaker mixing without resorting to expensive big monitors and the room acoustic treatments that are inevitably required to make the most of them.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="593" height="779" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="09-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">It would be great if we could use the speakers built into our laptops or tablets for this purpose, but those speakers cannot be trusted for panning and spatial decisions. Contemporary mobile devices have remarkably good sound quality for their size, but their internal speaker systems often include built-in spatial processing that’s designed to ‘throw’ the stereo image wider than the device itself. This allows built-in speakers that are typically less than 30cm apart to create a stereo soundstage that spreads to about 55cm apart (ie. about ±30° wide for the listener, as required for stereo speaker listening) when the user is at a typical viewing/working distance from the screen. It does this using clever manipulations of the stereo signal to fool the listener into perceiving a wider soundstage than seems possible under the circumstances. This spatial processing provides an impressive speaker <em>listening</em> experience for music and movies, but we cannot trust it for speaker <i>mixing</i> because it is exaggerating every panning and spatial decision we make to suit the device’s specific speaker placements and its specific spatial processing, which means there is no guarantee that our panning and spatial decisions will translate well to other systems. No matter how familiar we are with the <i>tonality</i> of our portable device’s sound, things get very different when we try to make <i>spatial decisions</i> with it because some things will be exaggerated and thereby mislead us to under-compensate, and other things will be downplayed and thereby mislead us to over-compensate.</span></p>
<p class="p3"><span class="s1">This brings us back to a small pair of single-driver desktop monitors that take up little space on the desk and are not intended to be anything other than spatial cross-referencing monitors. <i>That’s</i> what we need…</span></p>
<h4 class="p3"><span class="s1"><b>REFERENCE TRACKS</b></span></h4>
<p class="p3"><span class="s1">There are two reference tracks we should have for every headphone mix.</span></p>
<p class="p3"><span class="s1">The first is a stereo imaging test, the sort that’s widely available for testing hi-fi systems and can be found on-line and on every audiophile test disc ever made [Google ‘stereo imaging test’]. Ideally it will have tone bursts or dialogue panned to specific locations within the stereo mix. Listening to this allows us to ‘settle in’ to the stereo soundstage we’re working within, identifying the locations of the five most important reference points – hard left, mid-left, centre, mid-right, and hard right – and familiarising ourselves with where those locations appear in the soundstage created by our chosen headphones.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">As discussed in the previous instalment, we know that wherever hard left appears in our headphones will appear at 30° left-of-centre when heard on speakers, and wherever hard right appears in our headphones will appear at 30° right-of-centre when heard on speakers. This allows us to create a ‘panning map’ of where things should be panned in the headphones based on where we want them to appear when/if heard on speakers.</span></p>
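<p class="p3"><span class="s1">The panning map can be expressed as a simple mapping. The sketch below is purely illustrative (the linear pan-to-angle relationship is an assumption for clarity; real pan laws vary between DAWs): a DAW pan position from -1.0 to +1.0 maps onto the ±30° arc of a standard stereo speaker pair.</span></p>

```python
def pan_to_speaker_angle(pan):
    """Map a DAW pan position (-1.0 = hard left, 0.0 = centre,
    +1.0 = hard right) to the angle in degrees (negative = left) it
    should occupy on a standard +/-30 degree stereo speaker setup.
    A linear mapping is assumed here for illustration."""
    if not -1.0 <= pan <= 1.0:
        raise ValueError("pan must be between -1.0 and +1.0")
    return pan * 30.0

# The five reference points from the stereo imaging test:
for pan in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(pan, "->", pan_to_speaker_angle(pan), "degrees")
```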
<p class="p3"><span class="s1">The second reference track is a musical reference for perspective. This should be a well-engineered recording that is similar in style, genre, balance or production to the mix we’re preparing to make. Note here that ‘well-engineered’ <i>actually means</i> ‘well-engineered’ – in other words, something that has been well-recorded, well-mixed and well-mastered. Just because you like it doesn’t mean it is well-engineered; nor does its commercial success or the number of awards it has won. If you can hear all of the sounds in the mix clearly at all times, it has probably been recorded, mixed and mastered well. If the vocal and solo performances are the only sounds that can be clearly heard at the times they occur during the mix, you’re listening to a poorly engineered mix that’s been cleverly mastered to keep the listener’s attention focused on the main instruments and away from the poor mix taking place behind them. In professional audio parlance this is known as a ‘polished turd’; mixes like this keep a lot of mastering engineers and multi-band compressor manufacturers/developers in business, but are never good references…</span></p>
<p class="p3"><span class="s1">There are recordings in every genre that are considered ‘well-engineered’, and there are recordings from similar genres that are close enough aesthetically (ie. similar tonalities and balances of individual sound sources, and similar use of effects) to serve as references. As we’ll see later, this reference is something we will be regularly comparing our mix-in-progress against to make sure we are remaining within the tonal and spatial ballpark of the genre’s aesthetic. Hopefully our finished mixes will not require too much corrective work in mastering, freeing up more time for the creative aspects of mastering.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-5440" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5440 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >This allows us to create a panning map…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-9436" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-9436 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  
style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="917" height="645" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi.jpg 917w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-800x563.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-768x540.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-600x422.jpg 600w" sizes="(max-width: 917px) 100vw, 917px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><span class="s1"><b>MIXING WITH HEADPHONES TEMPLATE</b></span></h4>
<p class="p3"><span class="s1">Now that we have our headphone mixing essentials together – good headphones, a goniometer, mono switching, a spectrum analyser with 6dB guide, a stereo imaging reference track, a musical genre reference track, and hopefully a pair of small desktop monitors as described earlier – we need to create a ‘Mixing With Headphones’ template session that we can use for all of our headphone mixing.</span></p>
<p class="p3"><span class="s1">This is essentially an ‘empty’ session file with everything we need in place, so that all we have to do is load the tracks and start mixing – unless we decide to record our session directly into the template.</span></p>
<p class="p1"><span class="s1">We’ll start by setting up a channel strip that we can duplicate as often as we need. We need to configure the channel strip as shown below, with three EQ plug-ins and one compressor plug-in. We will set up the channel strip following the traditional analogue studio approach: plug-ins that create a <i>replacement of the original signal</i> (eg. EQ and compression) are inserted directly into the channel strip, while plug-ins that create something that needs to be <i>mixed with the original signal</i> (eg. delays, echoes and reverberation) are connected via auxiliary sends and brought back into the mix through their own channels where we can EQ them and/or send them to other effects if desired.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="480" height="654" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The first plug-in is a <i>corrective EQ</i> that is there to clean up any sounds before further processing them. It should be a clean EQ that is not intended to impart any tonality or character of its own on the sound. The emphasis here is an EQ that is capable and versatile rather than euphonic. A four-band fully parametric EQ with high and low pass filtering and the option to switch the lowest and highest bands to shelving is a good choice here.</span></p>
<p class="p1"><span class="s1">The second plug-in is an <i>enhancing EQ</i> that we will use to ‘create’ the sound we want. This can be an EQ with character to introduce some euphonics into the sound if desired, and it is absolutely okay to start off with the same ‘character’ EQ in every channel strip. Remember, all of the famous analogue mixing consoles throughout history offered their own EQ and it was <em>the same in every channel strip</em>. That didn’t stop anyone from making great records that are still revered today, so don’t get too hung up about having lots of different EQ plug-in options. Leave that distracting bullshit on Youtube where it belongs and get the mix started. You can change the EQ plug-in later if desired, just as we did in the analogue studio world where we would track on a Neve to get that warm musical Neve sound and then mix on an SSL to add that big and macho SSL sound: the best of both worlds, but with only two EQs overall (Neve for tracking, SSL for mixing). </span><span class="s1">“I love the sound of that combination of different EQs, that’s why I bought this record”, said nobody ever — except for sound engineers, recording musicians, and their too-old-for-trainsets Youtubey ilk.</span></p>
<p class="p1"><span class="s1">The third plug-in is a <i>corrective compressor</i>, the sort that has controls for threshold, ratio, attack and release times, and an output level control. As with the <em>corrective EQ</em>, we don’t want something that’s going to add any particular character. We can swap it for something different during the mix if necessary, but to get the mix started we just need something to get the track’s dynamics under control in a predictable manner.</span></p>
<p class="p1"><span class="s1">The fourth plug-in is an <i>integrating EQ</i>. Its job is to help us integrate the sound from the channel strip into the mix’s tonal perspective, and it should be a similar choice to the first EQ because its role is corrective. The detailed application of these four plug-ins will be explained in the following instalment.</span></p>
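<p class="p1"><span class="s1">The four-plug-in channel strip can be summarised as a template data structure. The sketch below is purely illustrative (the field names are hypothetical, not any DAW’s session format): it captures the slot order described above and shows how the template is duplicated per channel.</span></p>

```python
import copy

# Hypothetical representation of the channel-strip template described
# above; slot order matters (signal flows from slot 1 to slot 4).
CHANNEL_STRIP = [
    {"slot": 1, "role": "corrective EQ",
     "notes": "clean 4-band fully parametric, HPF/LPF, shelving options"},
    {"slot": 2, "role": "enhancing EQ",
     "notes": "character EQ - fine to start with the same one everywhere"},
    {"slot": 3, "role": "corrective compressor",
     "notes": "threshold, ratio, attack, release, output level"},
    {"slot": 4, "role": "integrating EQ",
     "notes": "clean EQ to fit the sound into the mix's tonal perspective"},
]

def duplicate_strip(count):
    """Deep-copy the template so each channel can be adjusted
    independently, as when duplicating a channel strip in a session."""
    return [copy.deepcopy(CHANNEL_STRIP) for _ in range(count)]
```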
<p class="p1"><span class="s1">Now that we’ve got the channel plug-ins sorted, we need to get the required metering and monitoring capabilities in place over the mix bus. We want to start with a mono switch, which might be available within the DAW. Ideally, all of the other metering tools will be placed <i>after</i> the mono switch so that we can <em>see</em> the effect of the mono switch in the metering, rather than just hearing it. </span><span class="s1">We also need a goniometer, a spectrum analyser with 6dB guide, and bus metering that shows levels with LUFS and dBTP.</span></p>
<p class="p3"><span class="s1">Insert the mono switch (if there isn’t already one in place on the stereo bus of your mixing console or DAW), the goniometer, the spectrum analyser with 6dB guide and the metering over the stereo mix bus where they are constantly monitoring whatever we’re hearing. They’ll show us the mix when we’re mixing, and they’ll show us individual tracks when we’re soloing.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="642" height="683" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi.jpg 642w, https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi-600x638.jpg 600w" sizes="(max-width: 642px) 100vw, 642px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">iZotope’s Ozone has always been a good choice for this type of stereo mix bus metering/monitoring because it contains a goniometer, a spectrum analyser with a 6dB guide, mono switching, and excellent level metering capabilities. Most of these metering tools will work even if the processing is bypassed or switched off, meaning they are just metering tools and won’t have any impact on our mixes unless we want them to. Other plug-in manufacturers make goniometers, stereo/mono switches and spectrum analysers with 6dB guides, so if you don’t have Ozone – or don’t like how much screen space it consumes – rifle through your arsenal of plug-ins to see what’s there.</span></p>
<p class="p3"><span class="s1">Load your reference tracks into the top tracks of your DAW. (If they have a different sampling rate than your mix you will need to run them through a sample rate conversion before loading them into the session.) These are both stereo signals and each will therefore require a stereo track (or two mono tracks panned hard left and hard right) from your DAW. Load the stereo imaging track into the first stereo track of the mixing template, and the musical reference track into the second stereo track of the mixing template. </span><span class="s1">Using clip gain or a gain plug-in, adjust the individual levels of these reference tracks so that, with their faders at 0dB, each track’s metered level sits at or around your mixing reference level on the stereo mix bus (typically -20dBFS or 0dBVU) when solo’d – and therefore plays back at your calibrated monitoring level of around 80dB SPL (assuming you are monitoring at your calibrated level as described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>).</span></p>
<p class="p3"><span class="s1">These tracks should be the first things you listen to before starting the mix – one after the other – and will acclimatise your listening to the imaging of your headphones, how they reproduce the desired tonality of the mix, and how loud you should be working. After those initial listens, these tracks will stay muted during your mixing session but will always be ready to cross-reference with a press of the return key, a touch of the solo button and perhaps a bit of fiddling with the mute key.</span></p>
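<p class="p3"><span class="s1">The level-matching step above boils down to a small calculation. A minimal sketch (assuming float samples in the -1.0 to +1.0 range; in practice your DAW meters this for you): measure the track’s RMS level in dBFS, then apply the difference to the -20dBFS reference as clip gain.</span></p>

```python
import math

def rms_dbfs(samples):
    """RMS level of float samples (-1.0..+1.0 full scale) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def clip_gain_to_reference(samples, target_dbfs=-20.0):
    """Gain in dB to apply so the track meters at the mixing
    reference level (-20dBFS, as described above)."""
    return target_dbfs - rms_dbfs(samples)

# A full-scale 440Hz sine sits at about -3dBFS RMS, so it needs
# roughly -17dB of clip gain to meter at -20dBFS:
tone = [math.sin(2 * math.pi * 440 * i / 48000) for i in range(48000)]
print(round(rms_dbfs(tone), 1))                # -3.0
print(round(clip_gain_to_reference(tone), 1))  # -17.0
```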
<h4 class="p3"><span class="s1"><b>BRING IT ON…</b></span></h4>
<p class="p3"><span class="s1">With the ‘Mixing With Headphones’ template we have ready access to a stereo imaging reference track for determining where panned images should appear in our headphones, and a musical reference track for checking how our mix decisions compare to a known and relevant reference. We also have the goniometer to show which parts of our mix might sound weird when heard through speakers, a mono switch to check if problems seen on the goniometer will result in any audible effect, and the 6dB guide to keep us from wandering too far from the acceptable mix tonality track. We can now load all of our audio tracks into the session template – if they’re not already there – and start mixing.</span></p>
<p class="p3"><span class="s1">In the next instalment of this series we’ll look at some important considerations for mixing with headphones, along with mixing procedures and techniques that will help to land our mixes within five minutes of mastering…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #cb0c5a;text-align: left;font-family:Source Sans Pro;font-weight:700;font-style:italic" class="vc_custom_heading" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Next instalment: Mixing With Headphones 4</a></h2><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1698098589251 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">IMPEDANCE, POWER, SENSITIVITY &amp; SPL</span></strong></h4>
<p class="p3"><span class="s1">The headphones’ <i>sensitivity</i> tells us how much SPL they will generate for a given amount of power from the amplifier; more power <i>into</i> the headphones means more SPL <i>out of</i> the headphones. However, contrary to popular assumption, we cannot <i>force</i> power into headphones (or any other electrical circuit, for that matter). An amplifier’s power rating only tells us how much power it is able to <i>provide</i>; it is up to the <i>load</i> (speaker, headphones, whatever) to take the power it needs from the amplifier – up to the maximum the amplifier can provide. [Things start going wrong when the load tries to take more power than the amplifier can provide, which is like forcing one horse to pull a cart that requires two horses. More about that later…]</span></p>
<p class="p3"><span class="s1">When it comes to headphones, the power provided by the amplifier into the headphones is the product of the <i>voltage</i> at the output of the amplifier and the <i>current</i> drawn from the amplifier by the headphones – which is determined by their <i>impedance</i>. The relationship between <i>current</i>, <i>voltage</i> and <i>impedance</i> is shown in the formula below, which has been adapted from Ohm’s Law and modified to apply to headphones.</span></p>
<p class="p3"><span class="s1">I = V / Z</span></p>
<p class="p3"><span class="s1">Where V is the signal voltage at the output of the amplifier in Volts RMS, Z is the impedance of the headphones in Ohms, and I is the current that the headphones will draw from the headphone amplifier in Amps RMS.</span></p>
<p class="p3"><span class="s1">From this formula we can see that for any given voltage (V), reducing the impedance (Z) increases the current (I).</span></p>
<p class="p3"><span class="s1">The following formula shows how the voltage presented by the amplifier, and the resulting current drawn from the amplifier by the headphones, collectively determine the electrical power used by the headphones:</span></p>
<p class="p3"><span class="s1">P = V x I</span></p>
<p class="p3"><span class="s1">Where P is the power consumed by the headphones in Watts Continuous, V is the voltage at the output of the amplifier in Volts RMS, and I is the current drawn by the headphones in Amps RMS.</span></p>
<p class="p3"><span class="s1">From this formula we can see that there are two ways to increase the power consumed by the headphones: one is to increase the voltage, the other is to increase the current. With low voltage battery-powered devices there is a limit to how high we can increase the voltage (ie. the battery voltage is the maximum available without resorting to voltage multiplier circuits); beyond that, we have to increase the current. The only way we can increase the current under these circumstances is to lower the impedance of the headphones, because I = V / Z.</span></p>
<p class="p3"><span class="s1">As the formulae above show, for any given voltage, a lower headphone impedance draws more current and therefore takes more power from the amplifier. With a bit of mathematical substitution and transposition, we can summarise the above formulae and explanations with the following formula:</span></p>
<p class="p3"><span class="s1">P = V</span><span class="s3"><sup>2</sup></span><span class="s1"> / Z</span></p>
<p class="p3"><span class="s1">Where P is the power in Watts Continuous, V is the voltage in Volts RMS, and Z is the impedance in Ohms. This formula makes it clear that, for any given voltage (V) coming out of the amplifier, lowering the impedance of the headphones (Z) results in more power (P).</span></p>
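<p class="p3"><span class="s1">These relationships are trivial to compute. A quick sketch applying the formulae above (the 32 ohm and 250 ohm values are illustrative examples): the same amplifier voltage delivers nearly eight times the power into 32 ohm headphones as into 250 ohm headphones.</span></p>

```python
def headphone_current(voltage_rms, impedance_ohms):
    """I = V / Z: current (amps RMS) the headphones draw."""
    return voltage_rms / impedance_ohms

def headphone_power(voltage_rms, impedance_ohms):
    """P = V^2 / Z: power (watts) the headphones take from the amp."""
    return voltage_rms ** 2 / impedance_ohms

# 1V RMS into typical low- and high-impedance headphones:
print(headphone_power(1.0, 32))   # 0.03125 (31.25mW)
print(headphone_power(1.0, 250))  # 0.004   (4mW)
```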
<p class="p3"><span class="s1">The headphones’ <em>sensitivity</em> tells us how efficiently they will convert the power they take from the amplifier into SPL. There are two ways a headphone manufacturer can specify sensitivity. One way is to express it as SPL for a given power, such as 100dB/mW, which means 1mW (0.001W) of power will produce an SPL of 100dB. The other way is to express it as SPL for a given voltage, such as 100dB/V, which means if 1V RMS was applied to the headphones they would produce an SPL of 100dB (assuming the amplifier can provide sufficient current). If we know the appropriate electrical and decibel formulae we can easily convert between the two different types of sensitivity ratings; thankfully we don’t need to do that for the purposes of this discussion.</span></p>
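<p class="p3"><span class="s1">The two sensitivity ratings convert to SPL as follows. A minimal sketch of the arithmetic (the specific ratings used are illustrative): a dB/mW rating scales with 10&#215;log10 of the power ratio, while a dB/V rating scales with 20&#215;log10 of the voltage ratio.</span></p>

```python
import math

def spl_from_power(sensitivity_db_mw, power_watts):
    """SPL from a dB/mW sensitivity rating: each doubling of power
    adds about 3dB of SPL."""
    return sensitivity_db_mw + 10 * math.log10(power_watts / 0.001)

def spl_from_voltage(sensitivity_db_v, voltage_rms):
    """SPL from a dB/V sensitivity rating: each doubling of voltage
    adds about 6dB of SPL."""
    return sensitivity_db_v + 20 * math.log10(voltage_rms)

# 100dB/mW headphones at 1mW, then at double the power:
print(round(spl_from_power(100, 0.001), 2))  # 100.0
print(round(spl_from_power(100, 0.002), 2))  # 103.01
```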
<p class="p3"><span class="s1">In low voltage situations such as the headphone sockets in battery-powered devices, lower impedance and higher sensitivity are both desirable traits for headphones. The lower impedance results in more electrical power going <em>into</em> the headphones, and the higher sensitivity results in more SPL coming <em>out of</em> the headphones.</span></p>
<p class="p3"><span class="s1">Headphones with low sensitivity <em>and</em> high impedance are the most difficult to drive to useful SPLs when working with low voltage battery-powered devices. The result is, at best, insufficient SPL. It is also common in this situation to experience reduced low frequency reproduction (low frequencies contain the most energy and therefore require the most power, and the low-voltage headphone amplifier cannot provide it), causing us to compensate by adding too much low frequency energy to the mix. In extreme situations the sound from the headphones will feel ‘restrained’ and ‘compressed’, particularly in the low frequencies, and in worst-case scenarios it will be distorted. If you’re experiencing these situations when using a laptop’s headphone socket it means your headphones’ impedance is too high and/or their sensitivity is too low; in either case, the headphones require more power than the amplifier is able to provide. You’re going to need an external amplifier (eg. one that is built into an interface, or a dedicated headphone amplifier), or you’ll need to switch to headphones with higher sensitivity and/or lower impedance.</span></p>
<p class="p3"><span class="s1">Although there is no clearly defined threshold between low impedance and high impedance values for headphones, Apple (the most used brand of headphones in the USA at the time of this writing) provides a useful <span style="color: #333399;"><strong><a style="color: #333399;" href="https://support.apple.com/en-us/HT212856">reference</a></strong></span> based around a threshold of 150 ohms. They have been addressing the ‘high impedance headphone problem’ in their laptops and desktops since 2021, using an adaptive headphone amplifier circuit that senses the impedance of the connected headphones and adjusts the signal voltage accordingly (up to 1.25V RMS for impedances lower than 150 ohms, and up to 3V RMS for impedances above 150 ohms). Among other things, this <i>should</i> obviate the need for an external headphone amplifier or interface when mixing on-the-go using high impedance headphones with MacBook Pro and MacBook Air laptops. That’s one less thing to carry around, connect, and balance on our laps. Winner, winner, chicken dinner&#8230;</span></p>
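As a back-of-envelope check of those voltage figures (treating the headphones as a purely resistive load and ignoring any current limiting in the amplifier, both simplifications), the maximum power a voltage-limited output can deliver is P = V²/Z:

```python
def max_power_mw(v_rms: float, impedance_ohms: float) -> float:
    """Maximum average power (mW) a voltage-limited amplifier can push
    into a purely resistive headphone load: P = V^2 / Z."""
    return 1000.0 * v_rms**2 / impedance_ohms

# The two adaptive tiers described above, into representative loads:
low_z  = max_power_mw(1.25, 32)   # 1.25V RMS tier into 32 ohms
high_z = max_power_mw(3.0, 300)   # 3V RMS tier into 300 ohms
print(round(low_z, 1), round(high_z, 1))
```

That works out to roughly 48.8mW into 32 ohms and 30mW into 300 ohms; for most headphones, either figure is comfortably more than typical listening levels require.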

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="987" height="619" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13b-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi.jpg 987w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-800x502.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-768x482.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-600x376.jpg 600w" sizes="(max-width: 987px) 100vw, 987px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">In a strange but reassuring twist, the company that led the way in removing headphone sockets from smart phones (where the physical freedom of a wireless connection makes sense for commuters) is leading the way with headphone amplifiers in their laptops and desktops (where the codec-free sound quality and zero latency of a wired connection makes sense for creators).</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Mixing With Headphones 3</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3/feed</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 2</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Fri, 18 Aug 2023 00:26:26 +0000</pubDate>
				<category><![CDATA[Issue 89]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 2]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=76894</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Mixing With Headphones 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">first instalment</a></strong></span> of this six-part series we explored the ascent of headphone listening, culminating in the current situation where headphone listening has supplanted speaker listening for the vast majority of music purchasing decisions and active music consumption. As audio professionals we would be foolish to underrate the significance of headphones in our mixing and monitoring decisions, but how do we reduce our reliance on an institutionalised technology – speakers – that has ultimately become irrelevant to the majority of the music consuming market? We can’t simply announce that we’re abandoning speakers for headphones, because there are significant differences between mixing through speakers and mixing through headphones.</span></p>
<p><span class="s1">Speaker reproduction brings a lot of changes to our mix; what we hear from the speakers is <em>not</em> what is coming out of the mixing console or DAW. The sound we hear at our monitoring position has had the frequency response and distortion of our speakers embedded into it, the acoustics of our listening room superimposed upon it, and possibly has comb-filtering introduced to it due to reflections off our work surfaces.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>If the tonality of your mixes does not translate acceptably to other speakers outside of your mixing room, it means your mixing decisions are being influenced by frequency response issues coming from your monitors and/or your mixing room’s acoustics. If your mixes have significantly different levels of reverberation and/or strange panning issues when heard on other speakers outside of your mixing room, it means your mixing decisions are being influenced by your mixing room’s reverberation and first order reflections. <span class="s1">If the tonality of some sounds within your mix (particularly the snare) changes significantly when you lean forward or backward from your normal mixing position it means you’ve got comb filtering off your work surface, and you’re wasting your time trying to find the right mixing position because it probably doesn’t exist…</span></p>
<p><span class="s1">Most contemporary monitor speakers provide acceptable performance within their intended bandwidth, which means the problems described above are <em>not</em> caused by your monitors, and therefore buying new monitors is <em>not</em> the solution (unless you’re living in that acoustic fantasy world where gut-shaking dance-floor subsonics come from shoebox-sized desktop speakers). T</span><span class="s1">he smart solution is to seek the advice of an acoustician. </span><span class="s1">Alternatively, you could stop relying on big monitor speakers and the acoustic treatments they require, and switch to mixing on headphones – which is, coincidentally, what this series is all about. So read on…</span></p>
<h4 class="p1"><strong><span class="s1">VARIABLE COMPENSATIONS</span></strong></h4>
<p class="p1"><span class="s1">Speaker listening introduces a lot of variables that don’t exist with headphone listening. Compensating for those variables with the tiny on-going tweaks and refinements that take place during the course of a mix – in response to cross-referencing with other speakers, changing seating posture, feedback from others inside the room but outside of the sweet spot, returning to the mix after a break, and so on – tends to make our mixes more resilient and thereby improves their translation across numerous playback systems.</span></p>
<p class="p1"><span class="s1">A mix made <em>only</em> on speakers will usually need very little tweaking to sound ‘right’ when heard through headphones, even though it might not take advantage of all that headphones have to offer. A mix made <em>only</em> on headphones can take advantage of all that headphones have to offer, but will often need considerable tweaking to sound ‘right’ when heard through speakers.</span></p>
<p class="p1"><span class="s1">How can we make ‘market-relevant’ mixes that exploit headphones’ strengths without losing the ‘tweaking-for-the-variables’ benefits that speaker mixing introduces? We can start by understanding a) how human hearing works, b) what we hear and feel when listening to speakers, and c) what we <em>don’t</em> hear and feel when listening to headphones…</span></p>
<h4 class="p1"><span class="s1"><b>HOW DOES HUMAN HEARING WORK?</b></span></h4>
<p class="p1"><span class="s1">Human beings have two ears, one on either side of the head, to capture two slightly different versions of the same sound. The ear/brain system uses the differences between these two versions of the same sound to determine where that sound is coming from in a process called ‘localisation’.</span></p>
<p class="p1"><span class="s1">The illustration below shows a listener receiving sound information from a sound source located to the left of centre. There are three ‘difference’ mechanisms the ear/brain system uses to localise the sound source, and, for this example, they all occur because the right ear is further from the sound source than the left ear.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-6689" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6689 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >…what we hear from the speakers is not what is coming out of the mixing console or DAW.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-7498" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7498 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="806" height="627" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎01-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi.jpg 806w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-800x622.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-768x597.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-600x467.jpg 600w" sizes="(max-width: 806px) 100vw, 806px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Firstly, the sound will arrive at the right ear a short time after it arrives at the left ear, creating an Interaural Time Difference (ITD) – which is sometimes referred to as an Interaural Phase Difference (IPD), particularly at lower frequencies where the wavelength is longer than the width of the average human head and therefore the time difference occurs within one cycle.</span></p>
<p class="p1"><span class="s1">Secondly, the signal arriving at the right ear has travelled further than the signal arriving at the left ear and will therefore have a lower SPL due to the Inverse Square Law. This creates an Interaural Amplitude Difference (IAD) – which is sometimes referred to as an Interaural Level Difference (ILD).</span></p>
<p class="p1"><span class="s1">Thirdly, because the signal at the right ear travels across the listener’s face and enters the right pinna from a different angle than it enters the left pinna, the signal arriving at the right ear will have a different frequency spectrum than the signal arriving at the left ear due to ‘acoustic shadowing’ of the head, hair absorption, skin reflections, diffraction across the face, and the numerous comb filters and cavity resonances introduced by the pinna. All of these result in an Interaural Spectral Difference (ISD).</span></p>
<p class="p1"><span class="s1">Collectively, the ITDs, IADs and ISDs are referred to as ‘HRTFs’ (Head Related Transfer Functions), because they represent the changes imposed on the signal as it passes around the listener’s head and into their ears. The ear/brain system uses the differences between the left and right HRTFs to determine where a sound is coming from, i.e. to localise it.</span></p>
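The time-difference mechanism lends itself to a quick calculation. Woodworth's classic spherical-head approximation (a standard model, not something from this article) estimates the ITD for a distant source at a given azimuth; assuming an average head radius of about 8.75cm and the speed of sound at 343m/s:

```python
import math

def itd_seconds(azimuth_deg: float, head_radius_m: float = 0.0875,
                speed_of_sound_ms: float = 343.0) -> float:
    """Woodworth's spherical-head approximation of the Interaural Time
    Difference: ITD = (r/c) * (theta + sin(theta)), distant source."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound_ms) * (theta + math.sin(theta))

# ITD grows from zero straight ahead to its maximum at 90 degrees:
for azimuth in (0, 30, 90):
    print(azimuth, round(itd_seconds(azimuth) * 1e6), "microseconds")
```

The maximum of roughly 0.66ms at 90 degrees agrees with commonly quoted figures for human listeners, and it is these sub-millisecond differences that the ear/brain system resolves when localising a source.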
<h4 class="p1"><span class="s1"><b>Loudness vs Frequency</b></span></h4>
<p class="p1"><span class="s1">An important quirk of human hearing is that its sensitivity to individual frequencies changes with the SPL. At lower SPLs we are less sensitive to low and high frequencies than we are to midrange frequencies, while at higher SPLs our sensitivity to the low and high frequencies increases significantly.</span></p>
<p class="p1"><span class="s1">This behaviour is shown in the graph below, which contains a number of ‘Equal Loudness Contours’. Each contour uses a 1kHz tone at a stated SPL as a reference, and shows how much SPL is required for other frequencies to be perceived as being ‘equally as loud’ as the 1kHz reference.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="672" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎02-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi-600x594.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Each contour is labelled with a Phon value, which represents the SPL of the 1kHz reference tone. For example, the 80 Phon contour shows the SPLs required for different frequencies to be perceived as being ‘equally as loud’ as a 1kHz tone that is being reproduced at 80dB SPL. As shown in the graph below, 125Hz will need an SPL of approximately 89dB to be perceived as being ‘equally as loud’ as 1kHz at 80dB SPL. Similarly, 8kHz would need an SPL of approximately 92dB to be perceived as being ‘equally as loud’ as 1kHz at 80dB SPL.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>To put it another way, let’s say we had three separate sine wave oscillators: one generating a 1kHz tone, one generating a 125Hz tone and one generating an 8kHz tone. If the 1kHz oscillator’s output was adjusted to provide an SPL of 80dB, the 125Hz oscillator’s output would need to be 9dB higher than the 1kHz oscillator to sound like it is ‘equally as loud’, and the 8kHz oscillator’s output would need to be 12dB higher than the 1kHz oscillator to sound like it is ‘equally as loud’. So a 125Hz tone at 89dB SPL, a 1kHz tone at 80dB SPL and an 8kHz tone at 92dB SPL will all have ‘equal loudness’ – but those differences only apply when we’re on the 80 Phons curve (i.e. 1kHz at 80dB SPL). If we change the SPL of the 1kHz tone, the <em>differences</em> required for other frequencies to sound ‘equally as loud’ will also change, as seen by the differing shapes of the Equal Loudness Contours. If they were all the same shape we wouldn’t have to think about how our mix will translate to different playback levels…</p>
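To make those decibel differences concrete, dB relates to amplitude by 20 × log10 and to power by 10 × log10, so a 9dB or 12dB boost corresponds to the following ratios:

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Amplitude (voltage) ratio corresponding to a decibel difference."""
    return 10 ** (db / 20)

def db_to_power_ratio(db: float) -> float:
    """Power ratio corresponding to a decibel difference."""
    return 10 ** (db / 10)

# The 80 Phon example above: +9dB at 125Hz, +12dB at 8kHz
for boost_db in (9, 12):
    print(boost_db, "dB:",
          round(db_to_amplitude_ratio(boost_db), 2), "x amplitude,",
          round(db_to_power_ratio(boost_db), 2), "x power")
```

A 9dB boost means roughly 2.8 times the signal amplitude and almost 8 times the power; 12dB means about 4 times the amplitude and nearly 16 times the power. Those power multiples hint at why the low frequency content of a mix places such heavy demands on amplifiers.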

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="724" height="567" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎04-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi.jpg 724w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi-600x470.jpg 600w" sizes="(max-width: 724px) 100vw, 724px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The Equal Loudness Contours show us that as the SPL gets higher, the ear becomes more sensitive to low and high frequencies. This means that a mix made at a high monitoring level will sound lacking in low and high frequency energy when heard at a low monitoring level, and a mix made at a low monitoring level will have excessive low and high frequency energy when heard at a higher monitoring level. In other words, the frequency spectrum and tonal balance of our mixes is affected by the monitoring level used when mixing.</span></p>
<p class="p1"><span class="s1">The illustration below shows what happens if we mix at a high monitoring level but play back at a low monitoring level. The blue contour represents the balance of frequencies in a mix made at a high monitoring level of 100 Phons, and the red contour represents how much energy is needed for that mix to have the same perceived frequency balance (i.e. sound the same) when heard at a low monitoring level of 40 Phons. </span>Any frequencies on the blue contour that are <em>below</em> the red contour will be <em>quieter</em> than intended in the mix, and any frequencies on the blue contour that are above the red contour will be <em>louder</em> than intended in the mix.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>From the graph above we can see that a mix made at a high monitoring level will be seriously lacking in low and high frequency energy if heard at a low monitoring level. As a matter of interest, this is <em>exactly</em> why some hi-fi systems have a &#8216;loudness&#8217; button: it boosts the low and high frequencies in a way that looks very similar to the differences between the blue and red contours shown above, allowing the music to be heard at a very low level (to avoid waking up the family, for example) while still having a sufficient <em>perceived</em> balance of low and high frequencies.</p>
<p class="p1"><span class="s1">The same problem occurs the other way around, as shown below. </span><span class="s1">The blue contour represents the balance of frequencies in a mix made at a low monitoring level of 40 Phons, and the red contour represents how much energy is needed for that mix to have the same perceived frequency balance (i.e. sound the same) when replayed at a high monitoring level of 100 Phons. </span>Any frequencies on the blue contour that are <em>above</em> the red contour will be <em>louder</em> than intended in the mix, and any frequencies on the blue contour that are below the red contour will be <em>quieter</em> than intended in the mix. We can see that a mix made at a low monitoring level will have excessive low and high energy if replayed at a high monitoring level.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="717" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi-600x634.jpg 600w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Neither of the mixes shown above will translate well to different playback levels – even through the monitors they were mixed on – and both will require corrective EQ in mastering or perhaps even a re-mix, meaning more time and more cost. Mixing loud is proud, mixing quiet is polite, but in both cases you’re mixing on credit: it feels good now but you’ll be paying for it later because for everything else there’s mastering card…</span></p>
<p class="p1"><span class="s1">It’s also worth noting that over a long day of mixing our hearing mechanisms become tired, or ‘fatigued’, which causes us to inadvertently turn up the monitoring level so that things continue to sound exciting. Although our hearing mechanisms suffer from fatigue, the Equal Loudness contours remain the same, so by increasing the monitoring level we are inadvertently shifting our hearing ‘baseline’ up to a higher Equal Loudness contour – incurring all the problems that come with it. If you were to mix five songs over a 15 hour day in the studio, it would not be surprising to find during playback the next day (with rested hearing) that the first mix sounds good while the last is considerably lacking in low and high frequencies. Why? Because the monitoring level was steadily increasing throughout the day the mixes were made, so the last mix was made on a very different Equal Loudness contour than the first.</span></p>
<p class="p1"><span class="s1">For these reasons, professional audio facilities calibrate all of their monitoring systems to a standard SPL, typically somewhere around 80 Phons (e.g. the monitoring volume control is adjusted so that a 1kHz tone at -20dBFS or 0dB VU on the stereo bus creates an SPL of 80dB at the monitoring position), which helps to maintain spectral consistency from mix to mix and within a range of playback levels from about 60 Phons to 100 Phons.</span></p>
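For readers who like to see the arithmetic, here is a minimal sketch of that calibration convention in Python. The 80dB SPL and -20dBFS reference figures are the ones quoted above; the function and constant names are my own, not any standard API:

```python
# Sketch of the monitor calibration convention described above: the
# volume control is set so a 1kHz tone at -20dBFS (0dB VU) on the
# stereo bus produces 80dB SPL at the monitoring position.
# Names here are illustrative, not a standard API.

REFERENCE_DBFS = -20.0   # alignment level on the stereo bus
REFERENCE_SPL = 80.0     # SPL produced by that tone at the mix position

def expected_spl(level_dbfs: float) -> float:
    """SPL a signal should produce once the monitor gain is calibrated.

    The monitor chain is linear, so every dB above or below the
    -20dBFS alignment level maps to the same dB change in SPL.
    """
    return REFERENCE_SPL + (level_dbfs - REFERENCE_DBFS)
```

On this calibration a full-scale (0dBFS) peak lands at 100dB SPL and a quiet -40dBFS passage at 60dB SPL – bracketing the 60 to 100 Phon range mentioned above.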

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="705" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi-600x623.jpg 600w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The illustration above shows how a mix made at a monitoring level of 80 Phons (blue) translates to higher monitoring levels (100 Phons, green) and lower monitoring levels (60 Phons, red). In both cases, the perceived differences above 500Hz are insignificant. Below 500Hz, when the 80 Phons mix is replayed at 100 Phons there will be a gradual increase in perceived low frequency energy (rising to +10dB at 31.5Hz), and when the 80 Phons mix is replayed at 60 Phons there will be a gradual decrease in perceived low frequency energy (falling to -10dB at 31.5Hz). These changes are not ideal, but they’re acceptable considering they occur over a range of 40dB (from 60 Phons to 100 Phons) and retain very high consistency above 500Hz throughout that range.</p>
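The low-frequency behaviour described above can be roughly sketched as a straight-line ramp on a log-frequency axis. This is a crude stand-in for the real measured contours, built only from the figures in the text (a maximum shift of 10dB at 31.5Hz, tapering to nothing at 500Hz):

```python
import math

# Illustrative only: approximates the perceived low-frequency shift
# when an 80 Phon mix is replayed 20 Phons louder or quieter, as a
# straight line on a log-frequency axis. The real Equal Loudness
# contours are measured curves, not this simple ramp.

def perceived_lf_shift_db(freq_hz: float, max_shift_db: float = 10.0) -> float:
    if freq_hz >= 500.0:
        return 0.0  # above 500Hz the perceived differences are insignificant
    lo, hi = math.log10(31.5), math.log10(500.0)
    frac = (hi - math.log10(freq_hz)) / (hi - lo)  # 0 at 500Hz, 1 at 31.5Hz
    return max_shift_db * min(frac, 1.0)
```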
<p class="p1"><span class="s1">80 Phons or thereabouts is also a good level for minimising short-term hearing fatigue and long-term hearing damage, and acts as a warning sign: if the calibrated monitoring volume is not feeling loud enough during a long session, it means either a) the engineer’s hearing is becoming fatigued and it’s time to take a break, or b) the metered level of the mix is lower than the chosen calibration level (typically averaging -20dBFS or 0dB VU) and should be adjusted or compensated for accordingly.</span></p>
<p class="p1"><span class="s1">Using a calibrated monitoring level streamlines the entire process from recording to mastering, improves ‘mix confidence’ and translation, and removes the dreaded ‘cold light of day’ disappointment, i.e. the mix that sounded amazing at the end of a long day in the studio sounds underwhelming and disappointing when heard the next morning through fresh ears and at a more civilised (i.e. lower) playback level.</span></p>
<p><span class="s1">We should always be aware of our monitoring levels, regardless of whether we’re using speakers or headphones. More about that in the final instalment of this series…</span></p>
<p class="p1"><span class="s1">As a matter of interest, turning the Equal Loudness Contours upside down, as shown below, allows them to be considered as statistically averaged frequency response graphs of the human ear. This makes it easier to see how the frequency sensitivity of human hearing changes with the SPL.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="672" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi-600x594.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><span class="s1"><b>WHAT DO WE HEAR WITH SPEAKERS?</b></span></h4>
<p class="p1"><span class="s1">The illustration below shows the correct configuration for stereo reproduction through speakers, where the acoustic centres of the monitor speakers form two points of an equilateral triangle, and the listener is aligned with the third point. The stereo image is therefore capable of extending across 60° in front of the listener (±30° either side of centre).</span></p>
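The geometry of that equilateral triangle is easy to work out for any speaker spacing. A small sketch (the function names are mine; the ±30° figure is from the text):

```python
import math

# Geometry of the equilateral stereo triangle: the listener sits at
# the apex, the same distance from each speaker, with each speaker
# 30 degrees either side of centre.

def listening_position(speaker_spacing_m: float) -> float:
    """Distance (m) from the line joining the speakers to the listener.

    Height of an equilateral triangle with side s is s * sqrt(3) / 2.
    """
    return speaker_spacing_m * math.sqrt(3) / 2

def image_width_degrees() -> float:
    """Total angle the stereo image can span in this configuration."""
    return 2 * 30.0
```

For example, speakers spaced 2m apart put the ideal listening position about 1.73m back from the line between them.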

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="634" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎09-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-800x632.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-768x606.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-600x474.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990674300"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The left speaker provides the IADs and ITDs that are embedded into the mix for the left ear, and the right speaker provides the IADs and ITDs that are embedded into the mix for the right ear. A listener in the proper monitoring position receives these signals in the correct relationships to re-construct the stereo image(s) contained within the recording.</span></p>
<h4 class="p1"><strong><span class="s1">Phantom Images</span></strong></h4>
<p class="p1"><span class="s1">Sound sources are easily localised anywhere in the space between the speakers in a process known as <em>phantom imaging</em>. In the example shown below, the sound source is perceived as coming from the left of centre but there is no sound source in that location. The localised sound is, therefore, a <em>phantom image</em>. When you can hear a sound source where you cannot see one in a stereo image (e.g. a vocal directly in the centre of a stereo system), you are hearing a phantom image. The ability to create a phantom image is the very core of creating a stereo soundstage; without it we’d just have sounds coming from hard left and hard right.</span></p>
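A pan pot positions a phantom image by setting an interchannel amplitude difference. As a sketch of the principle, here is the common constant-power (‘-3dB centre’) pan law in Python – an illustration only, not the law used by any particular console or DAW:

```python
import math

# Constant-power pan law: left^2 + right^2 is always 1, so perceived
# loudness stays constant as the phantom image moves across the arc.

def constant_power_pan(position: float):
    """position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.

    Returns (left_gain, right_gain).
    """
    angle = (position + 1.0) * math.pi / 4   # maps -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)
```

At centre both channels sit at about 0.707 (-3dB), which the ear/brain system localises as a phantom image midway between the speakers.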

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p class="p1"><span class="s1">[With the right microphone techniques and/or processing, we can make a sound source appear right in front of our noses, a long way back behind the speakers, or even extend out beyond the sides of the speakers. This type of illusion is also easily created with binaural recordings, but only when they are heard through headphones.]</span></p>
<h4 class="p1"><span class="s1"><b>Interaural Crosstalk &amp; More…</b></span></h4>
<p class="p1"><span class="s1">Because there is no acoustic isolation between the two speakers, some of the sound intended for the left ear will reach the right ear, and vice versa. This creates a form of interaural crosstalk.</span></p>
<p class="p1"><span class="s1">Every stereo mix will contain IADs due to the use of the pan pot and/or panning effects, and it will also have IADs if it has any stereo tracks that were recorded by a coincident (e.g. XY) or near-coincident (e.g. ORTF) pair of microphones. Likewise, every stereo mix will contain ITDs due to the use of stereo time-based effects processors (reverb, delay, etc.), and it will also have ITDs if it has any stereo tracks that were recorded by near-coincident (e.g. ORTF) and/or widely spaced microphone pairs (e.g. AB, drum overheads).</span></p>
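To see where those microphone-derived ITDs come from, a simple path-length sketch helps: a source off to one side reaches the far microphone of a spaced pair slightly later. This assumes a distant source (plane-wave approximation); the numbers and names are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def itd_seconds(mic_spacing_m: float, source_angle_deg: float) -> float:
    """Time-of-arrival difference between two spaced microphones.

    A source at source_angle_deg off-axis adds a path difference of
    spacing * sin(angle) to the far microphone.
    """
    path_difference = mic_spacing_m * math.sin(math.radians(source_angle_deg))
    return path_difference / SPEED_OF_SOUND
```

For an ORTF-style 17cm spacing, a source 30° off-axis arrives at the far capsule roughly 0.25ms late – a small difference, but enough for the ear/brain system to localise with.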

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Interaural crosstalk allows the left and right channel IADs and ITDs to blend in the air, reducing the audibility of the differences between them and thereby reducing the perceived width of the stereo image. It can also cause perceived comb filtering if the mix itself contains any delays of 25ms (0.025s) duration or less between the channels.</span></p>
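The comb filtering mentioned above follows a simple relationship: when a signal is summed with a delayed copy of itself, notches appear at odd multiples of 1/(2 × delay), where the copy arrives exactly out of phase. A sketch (illustrative names):

```python
def comb_notch_frequencies(delay_s: float, max_freq_hz: float = 20000.0):
    """Notch frequencies (Hz) produced by summing a signal with a copy
    delayed by delay_s seconds, up to the top of the audio band."""
    notches = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)  # first notch at 1/(2*delay)
        if f > max_freq_hz:
            return notches
        notches.append(f)
        k += 1
```

A 1ms interchannel delay, for example, puts notches at 500Hz, 1.5kHz, 2.5kHz and so on through the audio band – which is why short delays between the channels are audible as tonal colouration rather than as an echo.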
<h4 class="p1"><span class="s1"><b>The Speakers &amp; The Room</b></span></h4>
<p class="p1"><span class="s1">Another challenge arises because each speaker remains a distinct sound source and therefore creates its own ITDs and IADs – ultimately telling the ear/brain system that there are really only two sound sources – and these conflict with the ITDs and IADs embedded into the mix. These side-effects, and the imaging changes caused by interaural crosstalk described above, are inherently compensated for when mixing through speakers because each signal’s level and panning is adjusted as required to sound right.</span></p>
<p class="p1"><span class="s1">Listening through speakers also introduces the possibility of first order reflections from nearby surfaces in the listening space that will be superimposed over the playback and ultimately confuse any spatial information from the recording itself – such problems can greatly interfere with panning decisions. Most listening environments will also have some reverberation of their own, which ultimately affects the levels of reverberation we add to the mix (more about that below). </span><span class="s1">Room reflections and reverberation are addressed with acoustic treatment in a studio control room or in a mixing room, but can be a problem with general listening through speakers outside of the studio environment.</span></p>
<p class="p1"><span class="s1">If we introduce enough spatial information (reverberation, etc.) into our mixes the sonic presence of the speakers and the room becomes insignificant – assuming the room is acoustically acceptable to begin with. </span><span class="s1">A good mix transcends the speakers and the room, hopefully invoking a ‘willing suspension of disbelief’ – that feeling when a mix somehow transports you to another place, dimension or world where you cannot see the man behind the curtain.</span></p>
<h4 class="p1"><span class="s1"><b>Visceral Impact</b></span></h4>
<p class="p1"><span class="s1">The word ‘viscera’ refers to the soft internal organs of the human body: the lungs, heart, digestive organs, reproductive organs and so on. Therefore, ‘visceral impact’ refers to the impact the sound or mix has on the soft internal organs of our bodies; in other words, how we physically ‘feel’ the sound. Low frequency sounds have the longest wavelengths and generally the highest energy of all sounds in a mix, and therefore provide the most visceral impact.</span></p>
<p class="p1"><span class="s1">It is often said that low frequencies stimulate the adrenal glands (located above the kidneys), causing them to generate the hormone ‘adrenaline’ which is responsible for making us want to move and dance when listening to music. However, there is little research to substantiate this. If adrenaline due to visceral impact was a factor required for dancing, then silent discos, silent raves and similar events – which are all based on people dancing to music heard through headphones – would not exist.</span></p>
<h4 class="p1"><span class="s1"><b>WHAT DON’T WE HEAR WITH HEADPHONES?</b></span></h4>
<p class="p1"><span class="s1">Headphone listening differs from speaker reproduction much more significantly than most people assume. There are no listening room acoustics to alter the frequency response, there is no interaural crosstalk to mess with the stereo imaging and introduce acoustic comb filtering in the space between the speakers, and there is no visceral impact to add an enhanced/exaggerated sense of excitement.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">What we hear when mixing with headphones is the stereo mix directly from our DAW, without any external influences other than the frequency response and distortions of the headphones and the amplifier that is driving them. A good pair of headphones can consistently and reliably deliver frequencies that extend considerably above and below the range of human hearing. It therefore seems logical that a pair of low distortion headphones with a perfectly flat frequency response would provide the ultimate audio reference. Right? The answer is ‘yes’ for listening, but ‘no’ for mixing. Why not?</span></p>
<p class="p1"><span class="s1">A mix done on speakers contains compensations for those external influences as they existed in the mixing room (frequency response, room acoustics, interaural crosstalk, visceral impact, etc.), and those compensations ultimately make the mix more <em>resilient</em> – giving it better translation through a wider range of playback systems. Headphone mixing does not have those external influences and therefore our headphone mixes do not compensate or allow for them, resulting in less resilient and more ‘headphone specific’ mixes that do not translate as well to reproduction through speakers and sometimes even through other types of headphones.</span></p>
<h4 class="p1"><span class="s1"><b>ADDING RESILIENCE</b></span></h4>
<p class="p1"><span class="s1">What can we do to incorporate those valuable ‘speaker mixing’ compensations into our headphone mixing process and thereby make our headphone mixes more resilient? Let’s start by looking at what headphone manufacturers are doing with frequency responses, then we’ll look at trickier ‘hands on’ mixing problems like making sense of panning in headphones, establishing a reverberation reference when there is no mixing room, and anticipating problems that might be introduced by interaural crosstalk that doesn&#8217;t occur when monitoring in headphones.</span></p>
<h4 class="p1"><span class="s1"><b>Frequency Response &amp; Voicing</b></span></h4>
<p class="p1"><span class="s1">One of the goals of speaker manufacturers, regardless of whether their products are intended for professional or consumer use, is to create speakers with a relatively flat frequency response from 20Hz to 20kHz. Most studio monitors include their frequency response graph in the documentation that comes with them; it’s rarely a perfectly flat line but if the deviations are gradual and remain within about ±2dB throughout the intended bandwidth the monitors are considered to be acceptable and we can learn to work with them. The illustration below shows the theoretical flat response (from 20Hz to 20kHz) that most speaker manufacturers aspire to (dark red), and the ±2dB window of deviation that is generally considered acceptable (light red).</span></p>
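That ±2dB acceptance window is easy to express as a check over measured response points. A sketch in Python – the function name and data layout are illustrative, and the measurement figures are invented purely to show the idea:

```python
# Acceptance check for the +/-2dB window described above: response
# points are (frequency Hz, deviation from flat in dB) pairs.

def within_tolerance(response_db, tolerance_db=2.0):
    """True if every measured deviation stays inside +/-tolerance_db."""
    return all(abs(dev) <= tolerance_db for _, dev in response_db)

# Invented measurement for illustration: gentle deviations within +/-2dB
measured = [(20, -1.8), (100, -0.5), (1000, 0.0), (10000, 0.7), (20000, 1.9)]
```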

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >A good pair of headphones can consistently and reliably deliver frequencies that extend above and below the range of human hearing.</h2></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p class="p1"><span class="s1">The dominance of speaker listening prior to the ascent of headphone listening means that our impression of what a flat frequency response <em>sounds like</em> has been skewed by the contribution of the listening room acoustics – to the point that, in comparison, a pair of headphones with a flat frequency response will sound excessively bright while also lacking in low frequency energy. Unlike the sound from speakers, the sound from headphones does not get the high frequency attenuation caused by passage through the air and absorption by soft furnishings in the room (hence the excessive brightness), and it does not get the low frequency enhancement from the listening room’s resonant modes (hence the lack of low frequencies).</span></p>
<p class="p1"><span class="s1">Due to these differences, contemporary headphones are not designed to have a flat frequency response. Rather, they’re ‘voiced’ (i.e. their frequency response has been moved away from the theoretical ideal of ‘flat’) so that they sound like speakers with a flat frequency response. Hence we see headphone marketeers using descriptive phrases like ‘neutral tonality’ and ‘voiced to sound natural’, rather than showing frequency response graphs – because such graphs would alarm anybody who expected to see a perfectly straight line.</span></p>
<p class="p1"><span class="s1">Here’s the concept: start with a speaker with a flat frequency response, place a measurement microphone in front of it at the distance a typical listener would be, run a frequency sweep through the speaker, and capture it with the microphone. The result is the ‘flat’ frequency response as it is reproduced by the speaker and captured at the listening position. Build that frequency response into the headphones and they <em>should</em> sound like speakers with a flat frequency response.</span></p>
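As a numerical sketch of that voicing step (all frequencies and dB values below are invented for illustration, and no manufacturer's actual tooling is implied):

```python
# Invented example: the response of a 'flat' speaker as captured at the
# listening position, in dB relative to the anechoic flat line
# (frequencies in Hz). Real measured curves are far denser than this.
in_room_response_db = {100: 3.0, 1000: 0.0, 4000: -2.0, 10000: -4.0}

def voicing_eq(target_db, headphone_db):
    """EQ offsets that move a headphone's measured response onto the
    target curve: apply (target - headphone) dB at each frequency."""
    return {f: target_db[f] - headphone_db.get(f, 0.0) for f in target_db}

# A theoretically 'flat' headphone (0 dB everywhere) needs the whole
# in-room curve applied as its voicing:
flat_headphone = {f: 0.0 for f in in_room_response_db}
eq = voicing_eq(in_room_response_db, flat_headphone)
```

Built this way, the headphones reproduce the measured in-room curve rather than a ruler-flat line, which is the whole point of the measurement.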
<p class="p1"><span class="s1">This seems simple enough, but it raises questions about the kind of room that should be used for such measurements because the room acoustics influence the sound captured by the microphone.</span></p>
<p class="p1"><span class="s1">Throughout the 1970s it was standard practice to use an anechoic chamber, thereby creating a ‘free-field’ environment where the only sound to reach the microphone was the direct sound from the speaker with no contribution from the room itself (i.e. no resonances, no reflections and no reverberation) other than the loss of high frequencies over distance through the air. This was known as ‘free-field equalisation’ and, not surprisingly, headphones that use ‘free-field equalisation’ sound rather like listening to a speaker with a flat frequency response placed in an anechoic chamber. It was an improvement over the sound of headphones with the theoretically perfect flat response, but it still did not correlate well with speaker listening because nobody listens to speakers in an anechoic environment. The headphone designers had the right idea, but there was more work to be done…</span></p>
<p class="p1"><span class="s1">In an attempt to create something that correlated better with speakers, the 1980s saw the introduction of ‘diffuse-field equalisation’ – a method that is still popular. A ‘point-source’ loudspeaker (i.e. a speaker that radiates frequencies equally well in all directions), with a flat frequency response, is placed in a reverberation chamber rather than an anechoic chamber. A frequency sweep is reproduced by the speaker and captured by a dummy head placed at a sufficient distance to ensure it is in the diffuse field (i.e. where the room’s reverberation is the dominant sound). This measurement provides the frequency response the headphones are voiced to reproduce. Many critically-acclaimed and widely-adopted headphones conform to the diffuse-field equalisation curve.</span></p>
<p class="p1"><span class="s1">More recently, tests by Dr Sean Olive and others working for Harman International (parent company of AKG, Crown, dbx, JBL, Lexicon, Soundcraft, Studer et al) replaced the free-field environment and the diffuse-field environment with what was generally considered to be a good sounding listening room. The results were then combined with the results of tests in which numerous listeners were asked to audition and rate their preferences for numerous headphones with different frequency responses. These tests and measurements resulted in the Harman target curve (aka the ‘Harman Curve’), as shown below:</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎13-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The Harman Curve is hardly ‘flat’, but it significantly closes the tonality gap between headphone and speaker reproduction, and provides very good tonal translation between them. Neumann seem to have taken this approach one step further by using their KH series monitors as the reference speakers for voicing their NDH headphones, resulting in headphones that correlate remarkably well with their monitors and minimise the ‘line-of-best-fit’ compromising that occurs when an instrument in the mix is <em>too</em> loud on one set of monitors but <em>too</em> soft on another set of monitors. This level of correlation between monitor speakers and headphones is something that very few other manufacturers can offer because most headphone manufacturers don’t make studio monitors, and most studio monitor manufacturers don’t make headphones.</span></p>
<p class="p1"><span class="s1">Mixing on headphones that are voiced this way (i.e. to sound like speakers with a flat response in a good room) will usually result in better translation to speaker playback in terms of tonality, and solves one of the major differences between speaker mixes and headphone mixes. However, it does not resolve spatial disparities such as panning and reverberation levels, and it doesn&#8217;t counter the effects of interaural crosstalk. Solving and/or compensating for those problems requires a more strategic approach…</span></p>
<h4 class="p1"><span class="s1"><b>Panning Compensation</b></span></h4>
<p class="p1"><span class="s1">As discussed earlier, speaker listening creates a stereo image that can be up to 60° wide (±30°, with 0° being directly in front of the listener). A sound panned hard left should appear at 30° to the left of centre (i.e. coming directly from the left studio monitor), and a sound panned hard right should appear at 30° to the right of centre (i.e. coming directly from the right studio monitor).</span></p>
<p class="p1"><span class="s1">In comparison, headphone listening creates a stereo image that can be up to 180° wide (±90°), depending marginally on the placement of the drivers within the ear cups. A sound that is panned hard left will be 90° to the left of the centre (coming directly from the left ear cup) and a sound that is panned hard right will be 90° to the right of centre (coming directly from the right ear cup).</span></p>
<p class="p1"><span class="s1">The difference between the widths of their stereo soundstages can be represented as a ratio of 180:60, or 3:1, meaning the soundstage of a headphone mix needs to be approximately 3x wider than it is intended to appear when heard through speakers. This is an important consideration when mixing on headphones: a sound panned 45° to the left of centre in headphones will be heard at 45°/3 = 15° to the left through speakers.</span></p>
<p class="p1"><span class="s1">Although headphone monitoring exaggerates panning when compared to speaker monitoring, both monitoring systems downplay panning positions when compared to the visual placement indicated by the pan pot – which rotates through a range of 270° (±135°). The panning ratios between the pan pot, headphones and studio monitors are therefore 270:180:60, or 4.5:3:1. From the point of view of a mixing engineer sitting in the stereo sweet spot, a hard left pan will be seen at 135° to the left on the pan pot, but will be heard at 90° to the left on headphones and at 30° to the left through studio monitors. (To add to the confusion, it will be shown at 45° to the left on a <em>goniometer</em>, but more about that later…)</span></p>
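The pan-position arithmetic above can be captured in a pair of hypothetical helper functions (the angles are from the text; the function names and code are purely illustrative):

```python
# Soundstage half-widths implied by the ratios above: pan pot +/-135 deg,
# headphones +/-90 deg, studio monitors +/-30 deg (i.e. 4.5 : 3 : 1).
POT_MAX, PHONES_MAX, SPEAKERS_MAX = 135.0, 90.0, 30.0

def phones_to_speakers(angle_deg):
    """Where a headphone pan position will appear through speakers (3:1)."""
    return angle_deg * SPEAKERS_MAX / PHONES_MAX

def speakers_to_phones(angle_deg):
    """Headphone pan position needed for a desired speaker position."""
    return angle_deg * PHONES_MAX / SPEAKERS_MAX
```

A sound panned 45° left in headphones lands at `phones_to_speakers(45.0)` = 15° left through speakers, and a desired mid-left 15° on speakers requires `speakers_to_phones(15.0)` = 45° in headphones, matching the 3:1 ratio above.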

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎14-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p class="p1"><span class="s1">For any given headphones, it is always good practice to start a mix by establishing the location of five reference points across the stereo soundstage – hard left, mid-left, centre, mid-right, hard right – and reminding ourselves of where those locations appear on the pan pot. This is especially important if we’re new to headphone mixing and intuitively pan sounds <em>by ear</em> to the same locations we’re used to hearing them when mixing through speakers. This will result in a very narrow soundstage when heard through speakers, because speaker playback reduces the width of a headphone mix by a factor of 3:1, as mentioned earlier.</span></p>
<p class="p1"><span class="s1">Remember, the stereo soundstage on headphones is approximately 3x wider than it is on speakers. So, if you want a sound to appear 15° to the left (i.e. mid-left) when heard through speakers, you have to pan it 45° to the left (i.e. 3 x 15°) if mixing on headphones.</span></p>
<p class="p1"><span class="s1">One interesting aspect of headphone mixing related to panning is the location of ‘centre’. Depending on the headphones and the listener, ‘centre’ often appears to be inside the listener’s head or directly above it. To overcome this problem, many contemporary headphone designs use angled drivers and/or angled ear pads to place the drivers slightly forward of the ear canal. This allows the pinnae to create subtle ISDs that place the soundstage in front of the listener, at the possible expense of a minor reduction in the width of the stereo soundstage.</span></p>
<h4 class="p1"><span class="s1"><b>Reverberation Compensation</b></span></h4>
<p class="p1"><span class="s1">Every well-designed mixing room conforms to a reverberation curve that ensures a level of background reverberation representing an idealised real-world listening environment. Among other things, this creates a ‘reverberation reference’ to balance the levels and times of our reverberation effects against, ensuring they are not significantly lower, higher, longer or shorter than intended when the mix is taken out of the room and played in the real world.</span></p>
<p class="p1"><span class="s1">It is commonly believed that if we mix in a room that does not have enough reverberation of its own, we will add too much reverberation to our mixes to compensate. The same thinking implies that if we mix in a room that has too much reverberation of its own we won’t add enough reverberation to our mixes. Although this appears to make sense, it ignores the ear/brain’s remarkable ability to distinguish between the reverberation of the mixing room and the reverberation added to the mix. The mixing room’s reverberation is not necessarily heard as part of the mix’s reverberation, but it does provide a masking effect that the reverberation in our mixes needs to overcome. The result is as follows…</span></p>
<p class="p1"><span class="s1">If we mix through speakers in a room that has a particularly low reverberation reference, the levels of the reverberation effects we add to the mix might not be high enough because they are easily heard over the room’s reverberation reference. Likewise, the reverberation times we choose might be too short because the room’s low reverberation reference makes it easier to hear the added reverberation tails for longer.</span></p>
<p class="p1"><span class="s1">Similarly, if we mix through speakers in a room that has a particularly high reverberation reference, the levels of the reverberation effects we add to our mix might be too high in order to be heard over the room’s high reverberation reference. Likewise, the reverberation times we choose might be too long because the room’s high reverberation reference makes it harder to hear the added reverberation tails for the desired time.</span></p>
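A deliberately crude numerical model of this masking idea (the dB figures are invented, and real reverberation perception is far more complex than a simple subtraction):

```python
def perceived_reverb_db(mix_reverb_db, room_reference_db):
    """Level of the mix reverberation heard above the room's own
    reverberation reference (0.0 when the room masks it entirely)."""
    return max(0.0, mix_reverb_db - room_reference_db)

# The same mix reverberation level reads very differently against three
# different room references (dry, good, live):
dry_room, good_room, live_room = 10.0, 20.0, 30.0
mix_reverb = 26.0
levels = [perceived_reverb_db(mix_reverb, r)
          for r in (dry_room, good_room, live_room)]
```

In the dry room the added reverb towers 16 dB above the reference and sounds plentiful; in the live room the same setting is masked entirely, tempting the engineer to push it higher than intended.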
<p class="p1"><span class="s1">The illustration below demonstrates this problem. The upper graph shows reverberation (green) being added to a mix in three different mixing rooms: one with a very low reverberation reference, one with a good reverberation reference, and one with a very high reverberation reference. </span><span class="s1">Each room&#8217;s reverberation reference level is shown in grey, and in each case the mix reverberation (green) has been added at an appropriate level and duration to achieve the same perceived level and duration in each room – represented by the green area above the grey areas.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="877" height="636" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎15-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi.jpg 877w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-800x580.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-768x557.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-600x435.jpg 600w" sizes="(max-width: 877px) 100vw, 877px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The lower graph shows what happens when each mix is replayed in a room with a good reverberation reference. The first mix&#8217;s reverberation is inaudible, resulting in a very &#8216;dry&#8217; mix. The second mix is consistent with the upper graph. The third mix&#8217;s reverberation is too high, resulting in a very &#8216;wet&#8217; mix.</p>
<p class="p1"><span class="s1">Headphone listening is not affected by the mixing room’s acoustics and therefore has no reverberation reference, making it harder to judge and set reverberation levels and times in a way that translates well to speakers. A mix made entirely through speakers in an acoustically-designed mixing room translates to headphones with no surprises in reverberation levels or times, because the reverberation effects have been balanced against the room’s reverberation reference. However, a mix made entirely through headphones can contain surprising changes in reverberation levels when heard through speakers, because it was made with no reverberation reference. What sounds ‘just right’ when mixed in headphones is often too low when heard through speakers.</span></p>
<p class="p1"><span class="s1">We can solve the reverberation reference problem when mixing with headphones by using a reference track as a reality check, which we’ll talk more about in the next instalment.</span></p>
<h4 class="p1"><span class="s1"><b>Interaural Crosstalk Compensation</b></span></h4>
<p class="p1"><span class="s1">When mixing with headphones we cannot predict what changes will happen to our mix when the left and right signals combine together in the air and at the ears. As mentioned earlier, this is known as ‘interaural crosstalk’ and is an unavoidable part of speaker monitoring: some of the left channel’s signal <em>will</em> enter the right ear, and some of the right channel’s signal <em>will</em> enter the left ear.</span></p>
<p class="p1"><span class="s1">Interaural crosstalk can affect the perceived levels and panning of individual instruments in our stereo mix, it can introduce comb filtering, and it can alter the perceived level of reverberation and similar stereo time-based effects.</span></p>
<p class="p1"><span class="s1">The easiest way to check for the effects of interaural crosstalk when mixing with headphones is to check the mix in mono. This creates a ‘worst case’ crosstalk scenario (i.e. both channels are completely added together) that will exaggerate any level changes or comb filtering issues that might occur when the mix is heard through speakers. Subtle changes in individual signal levels within the balance are to be expected, but can also be indicators of hidden weaknesses in the mix that are worth addressing and fine-tuning. For example, sounds in the stereo mix that become too loud or too soft when monitored in mono are probably not at the right level in the stereo mix, and should be adjusted accordingly. A headphone mix that sounds acceptable when monitored in stereo <em>and</em> acceptable when monitored in mono stands a good chance of sounding acceptable when heard through speakers, too.</span></p>
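A minimal sketch of the mono fold-down check described above (the signals and helper names are illustrative, not from the article):

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mono_fold(left, right):
    """'Worst case' interaural crosstalk: both channels fully summed."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

# An in-phase sound survives the fold-down at full level, while an
# out-of-phase sound cancels completely - exactly the kind of hidden
# weakness a mono check exposes:
in_phase_l,  in_phase_r  = [1.0, -1.0] * 4, [1.0, -1.0] * 4
out_phase_l, out_phase_r = [1.0, -1.0] * 4, [-1.0, 1.0] * 4
```

Comparing `rms(mono_fold(...))` against each channel's own RMS flags sounds whose level shifts drastically in mono, which (as noted above) are probably not at the right level in the stereo mix either.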
<p class="p1"><span class="s1">Another useful tool for revealing potential interaural crosstalk problems in a headphone mix is the <em>goniometer</em>, also known as a vectorscope or phase scope – a popular prop in old science fiction movies. More about that useful tool in the next instalment…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #ddb41c;text-align: left;font-family:Source Sans Pro;font-weight:900;font-style:italic" class="vc_custom_heading" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Next instalment: Useful tools for mixing on headphones.</a></h2></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Mixing With Headphones 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2/feed</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 1</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-1</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-1#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Wed, 16 Aug 2023 01:55:12 +0000</pubDate>
				<category><![CDATA[Issue 89]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 1]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=76892</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">Mixing With Headphones 1</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">A dozen lifetimes ago – when sound was only as malleable as plastic tape – I spent my days working with resistors and capacitors, and my evenings playing with tape recorders and analogue synths. One of the technicians I was apprenticed to, a proto hi-fi buff aware of my sonic inclinations, handed me a cassette. “Listen in headphones, with your eyes closed.” That night I donned my Pioneer SE305s, pressed play and closed my eyes. A box of matches, shaken like a maraca, circled around me before passing over my head, under my chin, and stopping at the tip of my nose. I knew there wasn’t <i>really</i> a box of matches in front of me, but with my eyes closed this ‘advanced’ audio technology was indistinguishable from magic.</span></p>
<p class="p1"><span class="s1">Eager to add this illusory effect to my electronic soundscapes, I soon learnt that it was a ‘binaural recording’ made with a ‘dummy head’: a life-sized model of a human head with a microphone mounted in each ear to capture the left and right channel signals specifically as they are heard by each ear. The illusion relied on two things. First, capturing the signal received by each ear along with its embedded <i>Head Related Transfer Functions</i> (HRTFs) – which are the changes imposed upon the sound as it passes across the face, around the head, and navigates the pinna (aka ‘ear flap’ or ‘auricle’) before entering the ear canal. Second, the signal from the left side microphone must go to the left ear <i>only</i>, and the signal from the right side microphone must go to the right ear <i>only</i>. The ear/brain system uses the differences between each ear’s HRTFs to determine the location of the sound, therefore keeping the two channels isolated is necessary for the binaural effect to work.</span></p>
<p class="p1"><span class="s1">HRTFs vary from person to person depending on the size and shape of their head and their pinnae, meaning binaural recordings are more immersive to some people than others. The dummy head’s dimensions were averaged over a lot of different head and pinnae sizes and shapes, and captured left and right channel HRTFs with sufficient differences between them to fool most people – including me. However, playing the matchbox illusion through speakers was beyond disappointing. The illusion collapsed into the space in front of me, there were numerous instances of comb-filtering as the matchbox moved around within that collapsed space, and there were various imaging anomalies as the left and right channel signals and their HRTFs combined in the air – minimising the differences between them and confusing the ears rather than fooling them. In other words, the matchbox illusion <i>only</i> worked in headphones – they were the smoke and mirrors, and without them I could not ignore the man behind the curtain.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">In those days nobody listened to headphones except audio pros, musicians in studios, Dads enjoying their stereograms without waking up the kids, and old men listening to race calls through mono earbuds jacked into pocket radios. Oh, and that weirdo standing in the train door fingering an air guitar while wearing one of those newfangled ‘Walkmans’ (a fad <i>we all knew</i> would never last), completely oblivious to the silent audience hiding behind walls of newspapers and magazines. Headphones were definitely not fashion accessories, and anyone wearing them in public looked ridiculous.</span></p>
<p class="p1"><span class="s1">Disillusioned, I abandoned my plan of imposing artificial HRTFs onto my electronic soundscapes to create immersive binaural illusions. Those illusions only worked in headphones, and <em>nobody</em> listened in headphones…</span></p>
<p class="p1"><span class="s1">A dozen lifetimes later and, thanks to the proof-of-concept provided by Sony’s Walkman and refined by Apple’s double-whammy iPod/iTunes combo, <i>everybody</i> is listening in headphones. Oh, except for that pencil-clutching weirdo in the train door immersed in the pages of one of those newfangled ‘journals’ (a diary by any other name is still a diary), device-less and oblivious to the rows of headphoned performers spot-lit by tiny screens while silently fingering air guitars, conducting orchestras with finger batons, striking out at knee drums and air cymbals, or navigating app-worlds that have artificial HRTFs imposed onto their electronic soundscapes to create immersive binaural illusions.</span></p>
<p class="p1"><span class="s1">The cynical nostalgia evoked by the ‘nobody talks to anybody any more’ memes and tropes would have you believe that in the days before mobile devices, every train carriage, every bus and every waiting room was filled with strangers striking up genial conversations and filling the air with chatter. Atavistic nonsense! Before mobile devices people isolated themselves with newspapers, magazines and window seats, intentionally filling the air with the same lack of chatter as they do now. So slip on your headphones and forget about the negative-calorie small talk that Luddites cling to like Replicants embracing implanted memories – because it never really happened. Air boom, air tish…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-1200" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-1200 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >In those days nobody listened to headphones…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-7647" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7647 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990674300"><div class="wpb_wrapper"><div id="bsa-block-970--450" class="bsaProContainerNew bsaProContainer-86 bsa-block-970--450 bsa-pro-col-1" style="display: block !important"><div class="bsaProItems bsaGridNoGutter " style="background-color:"><div class="bsaProItem bsaReset" data-animation="fadeIn" style=""><div class="bsaProItemInner" style="background-color:"><div class="bsaProItemInner__thumb"><div class="bsaProAnimateThumb" style="display: block;margin: auto;"><a class="bsaProItem__url" href="https://www.audiotechnology.com/advertise?sid=86&bsa_pro_id=871&bsa_pro_url=1" target="_blank"><div class="bsaProItemInner__img" style="background-image: url(&#39;https://www.audiotechnology.com/wp-content/uploads/bsa-pro-upload/1700101434-Ableton_Live12_DA-pichi.jpg&#39;)"></div></a></div></div></div></div></div></div><script>
			(function($){
				function bsaProResize() {
					var sid = "86";
					var object = $(".bsaProContainer-" + sid);
					var imageThumb = $(".bsaProContainer-" + sid + " .bsaProItemInner__img");
					var animateThumb = $(".bsaProContainer-" + sid + " .bsaProAnimateThumb");
					var innerThumb = $(".bsaProContainer-" + sid + " .bsaProItemInner__thumb");
					var parentWidth = 970;
					var parentHeight = 450;
					var objectWidth = object.parent().outerWidth();
					if ( objectWidth <= parentWidth ) {
						var scale = objectWidth / parentWidth;
						if ( objectWidth > 0 && objectWidth !== 100 && scale > 0 ) {
							animateThumb.height(parentHeight * scale);
							innerThumb.height(parentHeight * scale);
							imageThumb.height(parentHeight * scale);
							object.height(parentHeight * scale);
						} else {
							animateThumb.height(parentHeight);
							innerThumb.height(parentHeight);
							imageThumb.height(parentHeight);
							object.height(parentHeight);
						}
					} else {
						animateThumb.height(parentHeight);
						innerThumb.height(parentHeight);
						imageThumb.height(parentHeight);
						object.height(parentHeight);
					}
				}
				$(document).ready(function(){
					bsaProResize();
					$(window).resize(function(){
						bsaProResize();
					});
				});
			})(jQuery);
		</script>						<script>
							(function ($) {
								var bsaProContainer = $('.bsaProContainer-86');
								var number_show_ads = 0;
								var number_hide_ads = 0;
								if ( number_show_ads > 0 ) {
									setTimeout(function () { bsaProContainer.fadeIn(); }, number_show_ads * 1000);
								}
								if ( number_hide_ads > 0 ) {
									setTimeout(function () { bsaProContainer.fadeOut(); }, number_hide_ads * 1000);
								}
							})(jQuery);
						</script>
						</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Headphones have become fashionable, and therefore headphones have become status symbols. If I had a dollar for every time my Neumann NDH20s have been selfied by passers-by and diners at the night markets of South East Asia, I’d have enough to buy a Lamy clutch for every Yuccie arguing the difference between a journal and a diary.</span></p>
<p class="p1"><span class="s1">Headphones have democratised high fidelity; for a few hundred dollars you can get a pair of high-status headphones that shrug “hold my beer” when pitted against thousands of dollars worth of speakers with their obligatory acoustic treatments and tightly-defined ‘sweet spots’. Headphones allow you to sit, stand, lie down or move about wherever you like because <i>you</i> are the sweet spot; the room’s acoustics don’t even know you’re there.</span></p>
<p class="p1"><span class="s1">Every major shopping mall has at least one store dedicated to mobile audiophilia and ‘head-fi’, i.e. high fidelity audio through headphones. If the stuff they sell seems expensive, you’d better reassess your priorities, because it’s chicken-feed compared to what it costs to get similar performance from speakers and their obligatory acoustic treatments, <i>and</i> you can take it with you anywhere.</span></p>
<p class="p1"><span class="s1">Most significantly, headphones have supplanted speakers for the vast majority of music purchasing decisions and <i>active</i> music consumption (i.e. listening with <i>intent</i> rather than plastering sonic wallpaper over the background noise). Simultaneously, there has been a resurgence of interest in binaural recording and the immersive possibilities it offers <em>without</em> requiring a room full of speakers and the hope that the playback system adheres to the same format adopted during recording and mixing. Microphone manufacturers like Sennheiser and DPA have added binaural microphone systems to their product lines, and Neumann’s dummy head (aka ‘Fritz’) has reached new levels of celebrity. Popular music artists are now inserting binaural elements into their multitrack recordings, taking the headphone listener by surprise with sounds and voices appearing from beyond the musical soundstage.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">As audio professionals, we would be foolish to underrate the significance of headphones in our mixing and monitoring decisions. Rather, we should be celebrating their popularity and exploiting their advantages. </span>It is little wonder that many of the younger generation of producers do most of their mixing work on headphones and consider big monitor speakers to be primarily for show. However, the key word in that sentence is &#8216;most&#8217;. Speakers bear little relevance to their world or their market, but they&#8217;re still cross-referencing on speakers, and they&#8217;ve got mastering engineers downstream ironing out the kinks while listening through big monitor speakers.</p>
<p class="p1"><span class="s1">What was once ridiculous is now mainstream. What is now ridiculous is placing <i>too much</i> significance on big monitor speakers and their obligatory acoustic treatments and inflexible sweet spots when mixing, because we’re living in a world where <i>most</i> people’s exposure to speaker reproduction is background music in cafés and shopping malls, platform announcements at train stations, and – topping the list – device notifications.</span></p>
<p class="p1"><span class="s1">Ding!</span></p>
<p class="p1"><span class="s1">I have hate mail…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-2456" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2456 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >we would be foolish to underrate the significance of headphones</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-5188" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5188 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #2885f8;text-align: left;font-family:Source Sans Pro;font-weight:900;font-style:italic" class="vc_custom_heading wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Read on for differences between speaker mixing and headphone mixing…</a></h2></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">Mixing With Headphones 1</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-1/feed</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
	</channel>
</rss>
