<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Greg Simmons, Author at AudioTechnology</title>
	<atom:link href="https://www.audiotechnology.com/author/gregsimmons/feed" rel="self" type="application/rss+xml" />
	<link>https://musoscorner.audiotechnology.com/author/gregsimmons</link>
	<description>Everything for the audio engineer, producer &#38; recording musician.</description>
	<lastBuildDate>Tue, 12 Dec 2023 04:29:43 +0000</lastBuildDate>
	<language>en-AU</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.3.2</generator>

<image>
	<url>https://www.audiotechnology.com/wp-content/uploads/2023/12/cropped-AT_Favicon_2024-1-32x32.jpg</url>
	<title>Greg Simmons, Author at AudioTechnology</title>
	<link>https://musoscorner.audiotechnology.com/author/gregsimmons</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Mixing With Headphones 4</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4#respond</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Wed, 29 Nov 2023 23:30:36 +0000</pubDate>
				<category><![CDATA[Issue 91]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 4]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=77249</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Mixing With Headphones 4</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></span></strong> we discussed useful tools for mixing with headphones, with a focus on identifying and replicating the problems that only occur when mixing with speakers. Why replicate those problems? Because compensating for them ultimately gives our speaker mixes more <i>resilience</i> (i.e. they translate better through different playback systems), and we want to build that same resilience into our headphone mixes.</span></p>
<p class="p3"><span class="s1">We also exposed the oft-repeated ‘just trust your ears’ advice for the flexing nonsense it is. Any question that triggers this unhelpful response is obviously coming from someone who cannot or does not know how to ‘trust their ears’, either through inexperience or lack of facilities. Brushing their question off with ‘just trust your ears’ is pro-level masturbation at its best. The ‘trust your ears’ advice is <i>especially</i> invalid when mixing with headphones. In that situation we cannot ‘trust our ears’ because, as we’ve established in previous instalments, headphones don’t give our ears all of the information needed to build resilience into our headphone mixes.</span></p>
<p class="p3">

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="585" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="01-pichi" fetchpriority="high" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/01-pichi-600x521.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The previous instalment ended with a ‘Mixing With Headphones’ session template, set up and ready for mixing. In this instalment we’ll start putting that template into practice using the EQ tools it contains; in the fifth instalment we’ll look at dynamic processing (compression and limiting), and in the sixth and final instalment we’ll look at spatial processing (reverberation, delays, etc.). But first a word about ‘visceral impact’ as defined in the <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">second instalment</a></span></strong> of this series, and some basic mixing rules to get your mix started…</span></p>
<h4 class="p3"><strong><span class="s1">Visceral Elusion</span></strong></h4>
<p class="p3"><span class="s1">We know that with headphone monitoring/mixing there is no room acoustic, no interaural crosstalk, and no <i>visceral impact</i> to add an enhanced (and perhaps exaggerated) sense of excitement. In the <a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2"><span style="color: #333399;"><strong>second</strong></span></a> and <strong><span style="color: #333399;"><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">third</a></span></strong> instalments of this series we discussed ways of working around the frequency response and spatial issues, but the lack of visceral impact is a trap we need to be constantly aware of – especially when first transitioning from speaker mixing to headphone mixing.</span></p>
<p class="p3"><span class="s1">A good pair of headphones can effortlessly reproduce the accurate and extended low frequency response that acousticians and studio owners dream of achieving with big monitors installed in expertly designed rooms and costing vast sums of money. However, when mixing with headphones we have to remember that those low frequencies are being reproduced directly into our ears via acoustic pressure coupling – which means we do not experience them <i>viscerally</i> (i.e. we do not feel them with our internal organs aka our <em>viscera</em>) as we do when listening through big monitors. There is no <i>visceral impact</i>, which means we must be very careful about how much low frequency energy we put into our mixes. Increasing the low frequencies until we can <i>feel</i> them is not a good idea when mixing with headphones…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="25-pichi" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/25-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">This is where the musical reference track and the spectrum analyser in our ‘Mixing With Headphones’ template are particularly valuable. If the low frequencies heard in our mix are pumping 10 times harder than the low frequencies heard in our reference track and seen on the spectrum analyser, they’re probably not right. It might be tempting to succumb to premature congratulation and declare that our mix is better than the reference because it pumps harder, but it is <i>almost certainly wrong</i>. That’s the point of using a carefully chosen reference track that represents the sonic aesthetic we’re aiming for: if our mix strays too far from the reference in terms of balance, tonality and spatiality then it is probably wrong and we need to rein it in before it costs us more time and/or money in re-mixing and mastering.</span></p>
<p class="p3"><span class="s1">How do we avoid such problems when mixing with headphones? Read on…</span></p>
<h4 class="p3"><strong><span class="s1">BASIC RULES FOR HEADPHONE MIXING</span></strong></h4>
<p class="p3"><span class="s1">The following is a methodical approach to mixing with headphones based on prioritising each sound’s role within the mix, introducing the individual sounds to the mix in order of priority, and routinely checking the affect that each newly introduced sound is having on the evolving mix by using the tools described earlier: mono switch, goniometer, spectrum analyser with 6dB guide, and a small pair of desktop monitors. This methodical approach allows us to catch problems as they occur, before they’re built into our mix and are harder to undo. The intention is to create a mix that, tonally at least, should only require five minutes of mastering to be considered sonically <i>acceptable</i>.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Note that the methodical approach described here is not suitable for a mix that needs to be pulled together in a hurry; for example, mixing a live gig that has to start without a soundcheck or rehearsal, or dealing with advertising agency clients</span><span class="s1"> who don’t understand why a 30 second jingle takes more than 30 seconds to record and mix. In those situations ‘massage mixing’ is more appropriate, i.e. pushing all the faders up to about -3dB, getting the mix together roughly with faders and panning, focus on keeping the most important sounds in the mix clearly audible, and continue refining the mix with each pass until the gig is finished or the session time runs out. In these situations, Michael Stavrou’s sculpting analogy [as explained in his book ‘Mixing With Your Mind’] is very applicable when he advises us to “start rough and work smooth”. Get the basic shape of the mix in place before smoothing out the little details, because nobody cares about the perfectly polished snare sound if they can’t hear the vocal.</span></p>
<h4 class="p3"><strong><span class="s1">Establishing The Foundation</span></strong></h4>
<p class="p3"><span class="s1">For the strategic and methodical approach described here, start by establishing the <em>foundation sounds</em> that the mix must be built around. For most forms of popular music those foundation sounds are the kick, the snare, the bass and the vocal. Each of the foundation sounds should have what Sherman Keene [author of ‘Practical Techniques for the Recording Engineer’] refers to as ‘equal authority’ in the mix – meaning each foundation sound should have the appropriate ‘impact’ on the listener when we switch between them one at a time, and they should work together as a cohesive musical whole rather than one sound dominating the others. A <em>solid stomp</em> on the kick pedal should hit us with the same impact as a <em>solid hit</em> on the snare, a <em>firm pluck</em> of the bass guitar, and a <em>full-chested line</em> from the vocalist. Those moments should <i>feel</i> like they hit us with the same impact, and they should <em>feel</em> like they belong together in the same performance. That <i>feeling</i> is harder to sense without the visceral impact of speakers, but with a little practice and cross-referencing against our reference track we can get there.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-3951" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-3951 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >This methodical approach allows us to catch problems as they occur…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-8216" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-8216 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="917" height="645" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="02-pichi" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi.jpg 917w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-800x563.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-768x540.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/02-pichi-600x422.jpg 600w" sizes="(max-width: 917px) 100vw, 917px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Start with the most important sound in the foundation, and get that right to begin with. For most forms of popular music that will be the vocal, so start the mix with the vocal <i>only</i> and get it sounding as good as possible <i>on its own</i>, where it is not competing with any other sound sources. You may need to add one or two other tracks to the monitoring – one for timing and one for tuning – to provide a musical context for editing and autotuning but don’t spend any time on those tracks yet. Focus on getting the vocal’s EQ and compression appropriate for the performance and the genre. Aim to create a vocal track that can carry the song on its own without <i>any</i> musical backing. Create different effects and processing for different parts of the vocal performance to suit different moods or moments within the music – for example, changing reverberation and delay times between verses and choruses, using delays or echoes to repeat catch lines or hooks, and similar. Use basic automation to <i>orchestrate</i> those effects, bringing them in and out of the mix when required as shown below. Note that placing the mutes <em>before</em> the effects simplifies timing the mute automation moves and also allows each effect (delay, reverb, etc.) to play itself out appropriately instead ending abruptly halfway through – the classic rookie error.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="788" height="393" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="26-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi.jpg 788w, https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi-768x383.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/26-pichi-600x299.jpg 600w" sizes="(max-width: 788px) 100vw, 788px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Once we have the vocal (or whatever is the most important sound in the mix) ready, check it in mono, check it on the goniometer and check it on the desktop monitors to make sure that the stereo effects and processors are behaving themselves. Cross reference it with the reference track in the Mixing With Headphones template to make sure it is sounding appropriate for the genre.</span></p>
<p class="p3"><span class="s1">Introduce the other foundation sounds one at a time; in this example they will be the kick, the snare and the bass. Use EQ, compression and spatial effects (reverberation, delay, etc.) to get each of these sounds working together in the same tonal perspective and dynamic perspective as the vocal, and in the desired spatial perspective against each other and against the vocal. Toggle each plug-in and effect off and on repeatedly to make sure it is making a positive difference. If not, fix it or remove it because processors that are not making a positive difference are like vloggers at a car crash: they’re ultimately part of the problem. </span><span class="s1">Check each foundation sound and its processing in mono, check it on the goniometer and check it on the desktop monitors to make sure that its stereo effects and processors are behaving themselves.</span></p>
<p class="p3"><span class="s1">With all of the foundation sounds in place we may need to tweak the levels of any spatial effects on the vocal that have become perceived differently after introducing the other foundation sounds.</span></p>
<p class="p3"><span class="s1">Orchestrate the effects for the foundation sounds (as described above for the vocal) to help each sound stand out when it’s supposed to stand out and stand back when it’s supposed to stand back, thereby enhancing its ability to serve the music.</span></p>
<p class="p3"><span class="s1">Always consider the impact each newly-introduced sound is having on the clarity and intelligibility of the existing sounds in the mix and, particularly, its impact on the most important sound in the mix – which in this example is the vocal. We should not modify the vocal to compete with the other sounds, rather, we should modify the other sounds to fit around or alongside the vocal. After all, in this example the vocal is the most important sound in the mix <i>and</i> we had it sounding right on its own to begin with. If adding another sound to the mix affects the sound of the vocal (or whatever the most important sound is), we need to make changes to the level, tonality and spatiality of the added sound. That is why we prioritised the sounds to begin with: to make sure the most important sounds have the least tonal, dynamic and spatial compromises, and therefore have the most room to move and feature in the mix.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">Note that the goal here is to fit the other sounds <i>around</i> and <i>alongside</i> the vocal, not simply <i>under</i> the vocal. Putting foundation sounds <i>under</i> the vocal is the first step towards creating a <i>karaoke mix</i> or a <i>layer cake mix;</i> more about those in the last instalment of this series…</span></p>
<p class="p3"><span class="s1">After introducing each new foundation sound to the mix be sure to check it in mono, check it on the goniometer (as described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></strong></span>) and check it on the desktop monitors to make sure it is not misbehaving in ways we cannot identify in headphones but will become apparent if heard through speakers.</span></p>
<p class="p3"><span class="s1">With the foundation mix done, save a copy that can be returned to in case things spiral out of control. Thank me later when/if that happens…</span></p>
<h4 class="p3"><strong><span class="s1">Beyond The Foundation</span></strong></h4>
<p class="p3"><span class="s1">Introduce the other sounds one at a time, weaving each of them <i>among</i> and <i>around</i> the foundation sounds while ensuring <i>all</i> sounds remain in the desired <i>tonal perspective</i>, <i>dynamic perspective</i> and <i>spatial perspective</i> with each other. The following text describes strategies for achieving <em>tonal perspective</em>; strategies for achieving <em>dynamic perspective</em> and <em>spatial perspective</em> are discussed in the forthcoming instalments.</span></p>
<p class="p3"><span class="s1">Every new sound introduced to the mix has the potential to change our perception of the existing sounds in the mix, so check for this and process accordingly without messing with the foundation sounds. Pay careful attention to how each new sound impacts the audibility of spatial effects (reverbs, delays, etc.) that have been applied to existing sounds, and adjust as necessary.</span></p>
<h4 class="p3"><strong><span class="s1">Loud Enough vs Clear Enough</span></strong></h4>
<p class="p3"><span class="s1">When balancing sounds together in the mix, always be aware of the difference between “not loud enough” and “not clear enough”. Novice engineers assume that if they cannot hear something properly it is <i>not loud enough</i> and will therefore reach for the fader. More experienced sound engineers know that often the sound described as “not loud enough” is in fact <em>loud enough</em> but is not <i>clear enough</i> due to some other issue with how it fits into the mix (e.g. its tonality, its dynamics or its spatial properties). And in some cases we realise that the sound deemed <i>not loud enough</i> is actually being buried or <em>masked</em> by another sound that is <i>too loud</i> in the mix and needs to be fixed.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-5520" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5520 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >…fit the other sounds around and alongside the vocal, not simply under it…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-6657" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6657 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/03-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Bring each sound up to the level it <i>feels</i> like it is supposed to be at from a performance point of view, regardless of how clear it is. We can determine if its level is right by soloing it against other sounds that are meant to have similar authority in the mix. If the sound is at the right performance level when solo’d against sounds of similar authority but is hard to hear properly in the mix, the problem is not the sound’s fader level but rather its <em>clarity</em> and/or <em>separation</em> within the mix.</span></p>
<p class="p3"><span class="s1">In most cases this means the sound’s <em>overall</em> level is correct but <em>some parts of its frequency spectrum</em> are either not loud enough or are too loud, and we need to use EQ to boost or cut <em>just those parts</em> of the sound’s frequency spectrum to make it clear enough and bring it into the correct <i>tonal perspective</i> for the mix. We’ll discuss this process later in this instalment.</span></p>
<p class="p3"><span class="s1">If it is hard to find the right level for a sound that is too loud at some times and too soft at others, that sound is probably not in the same <em>dynamic perspective</em> as the other sounds and will require careful compression to rein it in. (See ‘Dynamic Processing’ in the next instalment.) </span>Sometimes a sound is in the correct <em>tonal perspective</em> and <em>dynamic perspective</em> for the mix but gets easily lost behind the other sounds, or continually dominates them, due to having an incorrect <em>spatial perspective</em> (e.g. too much reverb). We use spatial processing to create, increase or decrease the sound’s spatial properties and thereby assist with separation. (See ‘Spatial Processing’ in the sixth instalment of this series.)</p>
<p class="p3"><span class="s1">It’s also possible that the problem is unsolvable at the mixing level due to ridiculous compositional ideas that have since become audio engineering problems. </span><span class="s1">For the remainder of this series let’s remove that variable by assuming we’re working with professional composers who know how to build musical clarity and separation into their compositions.</span></p>
<p class="p3"><span class="s1">To solve these ‘loud enough but not clear enough’ problems we use tonal processing to adjust the balance of individual frequencies within a sound, dynamic processing to solve problems with sounds that alternate between too loud and too soft, and spatial effects to provide separation from competing sounds. Let’s start with tonal processing, or, as it is generally referred to, ‘EQ’ and ‘filtering’…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">EQUALISATION &amp; FILTERING</span></strong></h4>
<p class="p3"><span class="s1">The use of equalisation and filtering serves three purposes in a mix: <em>correcting</em> sounds, <em>enhancing</em> sounds, and <em>integrating</em> sounds. In our ‘Mixing With Headphones’ template described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">previous instalment</a></strong></span> we added three EQ plug-ins to each channel strip. Here’s what they’re for…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="487" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="04-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">Corrective EQ</span></strong></h4>
<p class="p3"><span class="s1">This is used to fix fundamental problems in individual sounds and clean them up before putting them in the mix, which means we should choose a clean EQ plug-in that is not designed to impart any tonality or character of its own into the sound. The emphasis here is to use something <i>capable</i> rather than <i>euphonic</i>. A six-band fully parametric EQ with at least ±12dB of boost/cut, along with high and low pass filtering and the option to switch the lowest and highest bands to shelving, is a good choice.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/05-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Corrective EQ is used to make an excessively dull sound brighter or to make an excessively bright sound duller, to fix sounds that have too much or too little midrange, and to fix sounds that have too much or too little low frequency energy. It is also used to remove or reduce the audibility of any unwanted elements within the sound such as low frequency rumble (high pass filter aka low cut filter, low frequency shelving cuts), hiss and noise (low pass filter aka high cut filter, high frequency shelving cuts), and unwanted ringing and resonances (notch filters, dips).</span></p>
<p class="p3"><span class="s1">The goal of corrective EQ is to create <em>objectively good</em> sounds. What is an <em>objectively good</em> sound? It’s a sound that does not contain any <em>objectively bad</em> sounds, of course. </span><span class="s1">It is hard to define what sounds are objectively ‘good’, but it’s easy to define what sounds are objectively ‘bad’.</span></p>
<p class="p3"><span class="s1">Objectively ‘bad’ sounds are resonances and rings, low frequency booms and rumbles, unwanted performance noises and sounds, hiss and noise, and similar unmusical and/or distracting elements that don’t belong in the sound <i>as we intend to use it</i>.</span></p>
<p>One of the most common applications of corrective EQ is removing unwanted low frequency energy. Most sounds contain unwanted low frequency energy <em>below</em> the fundamental frequency of the lowest musical note in the performance. It may not seem like much on any individual track but the unwanted low frequency content on each track accumulates throughout the mix, with two results. Firstly, it reduces the impact and clarity of kick drums, bass lines, low frequency drones and other sounds that are legitimately occupying that part of the frequency spectrum. Secondly, most monitoring systems are not capable of reproducing this unwanted low frequency information reliably (particularly below 70Hz), and forcing them to reproduce it affects their ability to reproduce other frequencies that are within their range – which thereby affects their ability to reproduce the mix. It<span class="s1">’</span>s like forcing one horse to pull a cart that requires two horses.</p>
<p>Strategically removing unwanted low frequency information from individual sounds brings clarity and definition to our mixes while also allowing a broader range of monitoring systems to reproduce our mixes properly. With these benefits in mind, it is always worthwhile starting any EQ process by viewing the sound on the spectrum analyser (built into the Mixing With Headphones template) and looking for activity in the very low frequencies that has no musical value. This will be low frequency activity that remains visible, whether audible or not, and can be seen bobbing up and down at the far left side of the spectrum analyser regardless of what musical parts are being played. Removing or reducing this unwanted low frequency information with a carefully-tuned high pass filter or low frequency shelving EQ (in either case pay attention to the cut-off frequency and the slope) will clean up the individual sounds <em>and</em> the mix considerably.</p>
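The effect of a carefully-tuned high pass filter is easy to sketch in code. Below is a minimal pure-Python second-order (12dB/octave) high pass using the widely-published RBJ ‘Audio EQ Cookbook’ coefficient formulas; the 100Hz cut-off, the 30Hz ‘rumble’ and the 440Hz ‘note’ are arbitrary illustration values, not recommendations from the article.

```python
import math

def highpass_biquad(samples, fs, f_cut, q=0.707):
    """Second-order (12dB/octave) high pass filter, RBJ cookbook coefficients.
    fs = sample rate in Hz, f_cut = cut-off frequency in Hz."""
    w0 = 2 * math.pi * f_cut / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0 = (1 + cosw) / 2
    b1 = -(1 + cosw)
    b2 = (1 + cosw) / 2
    a0 = 1 + alpha
    a1 = -2 * cosw
    a2 = 1 - alpha
    # Normalise by a0, then run the Direct Form I difference equation.
    b0, b1, b2, a1, a2 = b0/a0, b1/a0, b2/a0, a1/a0, a2/a0
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = b0*x + b1*x1 + b2*x2 - a1*y1 - a2*y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

def rms(xs):
    return math.sqrt(sum(x*x for x in xs) / len(xs))

fs = 48000
t = [n / fs for n in range(fs)]                      # one second of audio
rumble = [math.sin(2*math.pi*30*ti) for ti in t]     # 30Hz rumble, below the music
note   = [math.sin(2*math.pi*440*ti) for ti in t]    # 440Hz musical content

# With a 100Hz cut-off, the 30Hz rumble is strongly attenuated
# while the 440Hz content passes almost untouched.
filtered_rumble = highpass_biquad(rumble, fs, f_cut=100)
filtered_note   = highpass_biquad(note, fs, f_cut=100)
```

The slope matters here: a second-order filter attenuates 30Hz by roughly 20dB with a 100Hz cut-off, whereas a first-order (6dB/octave) filter would only manage about half of that.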
<p class="p3"><span class="s1">We use corrective EQ to remove or significantly reduce the audibility of the objectively ‘bad’ parts of the sound, thereby leaving us with only the objectively ‘good’ parts of the sound for <i>enhancing</i> and <i>integrating</i> into our mix. As always, after applying corrective EQ we should check the results against the original sound to make sure we have made an <em>improvement</em> and not just a difference.</span></p>
<h4 class="p3"><strong><span class="s1">Enhancing EQ</span></strong></h4>
<p class="p3"><span class="s1">This is used to create <i>subjectively</i> ‘good’ sounds from the <i>objectively</i> ‘good’ sounds we made with corrective EQ as described above. What are <em>subjectively</em> ‘good’ sounds? They are sounds that contain no <em>objectively</em> bad sounds (which we removed with corrective EQ), <em>and</em> are good to listen to <em>while also</em> bringing musical value or feeling to the mix. </span><span class="s1">We can do whatever we like with the <em>objectively</em> ‘good’ sounds to turn them into <em>subjectively</em> ‘good’ sounds, as long as we don’t inadvertently re-introduce the <em>objectively</em> ‘bad’ sounds we removed with the corrective EQ.</span></p>
<p class="p3"><span class="s1">For this enhancing purpose we can use an EQ plug-in with character to introduce some euphonics into the sound. This could be a software model of a vintage tube EQ that imparts a warm or musical tonality, and/or something with unique tone shaping curves like the early Pultecs, and/or gentle Baxandall curves for high and low frequency shelving. Unlike the <em>corrective EQ</em>, the <em>enhancing EQ</em> doesn’</span><span class="s1">t need corrective capabilities (a lot of vintage EQs did not have comprehensive features), and w</span><span class="s1">e can make up for any shortcomings here by using the <em>corrective EQ</em> and the <em>integrating EQ.</em></span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/06-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The <em>enhancing EQ</em> is where the creative aspect of mixing begins: crafting a collection of sounds that might be individually desirable but, more importantly, collectively help to serve the meaning, message or feeling of the music. One of the goals here is to bring out the musical character of each individual sound while giving it the desired amount of clarity so we can hear ‘into’ the sound and appreciate all of its harmonics and overtones, along with the expression and performance noises that help to bring meaning to the mood of the music. In other words, to <em>enhance</em> its musicality.</span></p>
<p class="p3"><span class="s1">When applying enhancing EQ try to use frequencies that are musically and/or harmonically related to the music itself. Most Western music is based around the A440 tuning reference of 440Hz, so that forms a good point of reference. </span><span class="s1">The table below shows the frequencies of the notes used for Western music based on the tuning reference of A440, from C<span class="s3"><sub>0</sub></span> to B<span class="s3"><sub>8</sub></span>. The decimal fraction part of each frequency has been greyed out for clarity and also because we don’t need <em>that much</em> precision when tuning an enhancing EQ. Integer values are accurate enough&#8230;</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="969" height="634" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi.jpg 969w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-800x523.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-768x502.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/07-pichi-600x393.jpg 600w" sizes="(max-width: 969px) 100vw, 969px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
	<p><span class="s1">If we jot down the frequencies of the notes that exist within the scale(s) of the piece of music we’re mixing, we can lean into those frequencies when fine-tuning our enhancing EQ. For example, let’s say we had an enhancing EQ that provided a small boost at 850Hz. That boost sits at a frequency that does not exist in any Western musical scale based on the A440 tuning reference; 850Hz falls in between G</span><span class="s3"><sup>#</sup><sub>5</sub></span><span class="s1"> (830.61Hz) and A</span><span class="s3"><sub>5</sub></span><span class="s1"> (880Hz), and is therefore not a particularly musical choice. Nudging that enhancing boost <em>down</em> towards 830Hz (G</span><span class="s3"><sup>#</sup><sub>5</sub></span><span class="s1">) or <em>up</em> towards 880Hz (A</span><span class="s3"><sub>5</sub></span><span class="s1">) will <i>probably</i> sound more musical and is, therefore, definitely worth trying.</span></p>
<p class="p3"><span class="s1">We should always nudge our enhancing EQ boosts towards frequencies that <em>do</em> exist within the scale(s) of the music we’re mixing – we wouldn’t let a musician play out of tune, so why let an enhancing EQ boost be out of tune? Likewise, we should always </span><span class="s1">nudge our enhancing EQ dips towards frequencies that <em>don’t</em> exist within the scale(s) of the music we’re mixing – if we’re going to dip some frequencies out of a sound, try to focus on frequencies that aren’t contributing any musical value. Less <em>non-musicality</em> means more <em>musicality</em>, right?</span></p>
<p>It’s also worth noting that when a sound responds particularly well to a boost or a cut at a certain frequency (let’s call that frequency <em>f</em>), it will probably also respond well to a boost or a cut an octave higher (<em>f</em> x 2) and/or an octave lower (<em>f</em> / 2). More about that shortly…</p>
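The relationship behind the note-frequency table (and the 850Hz example above) is the standard equal-temperament formula: each semitone multiplies the frequency by the twelfth root of two, referenced to A4 = 440Hz. A minimal sketch, with octave numbering matching the table’s C0…B8 convention:

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_frequency(name, octave, a4=440.0):
    """Equal-tempered frequency of a note, referenced to A4 = 440Hz."""
    semitones_from_a4 = NOTE_NAMES.index(name) - NOTE_NAMES.index('A') + 12 * (octave - 4)
    return a4 * 2 ** (semitones_from_a4 / 12)

print(round(note_frequency('G#', 5), 2))     # 830.61 - just below the 850Hz boost
print(round(note_frequency('A', 5), 2))      # 880.0  - just above it
print(round(note_frequency('A', 4) * 2, 2))  # 880.0  - an octave above A4 is A5
```

The last line also illustrates the octave relationship mentioned above: doubling any note’s frequency (<em>f</em> x 2) lands exactly one octave higher, and halving it (<em>f</em> / 2) lands one octave lower.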
<h4 class="p3"><strong><span class="s1">Integrating EQ</span></strong></h4>
<p>While <em>corrective EQ</em> and <em>enhancing EQ</em> are used for cleaning up and creating sounds, <em>integrating EQ</em> is used for combining sounds together, i.e. integrating them into a mix.</p>
<p class="p3"><span class="s1">Creating good musical sounds with <em>enhancing EQ</em> is fun and satisfying, and might even be inspiring, but we must be constantly aware of how those individual sounds will interact when combined together in the mix. It’s common to have, for example, a piano and a strummed acoustic guitar that sound great individually but create <em>sonic mud</em> when mixed together because there is <em>too much overlapping harmonic similarity</em> between them. They are both using vibrating strings to create their sounds and therefore both have the same harmonic series, which makes it harder for the ear/brain system to differentiate between them if they’re </span><span class="s1">playing similar notes and chords.</span></p>
<p class="p3"><span class="s1">Another form of <em>sonic mud</em> occurs when composers create music using sounds from different sample libraries and ‘fader mix’ them together. Because each individual sample sounds great in isolation, the assumption is that simply fader mixing them together will sound even greater. That is like pouring a dozen of our favourite colour paints into a bucket and giving it a stir on the assumption it will create our ‘ultimate’ favourite colour. What do we get? A swirling grey mess, <i>every single time</i>, and it’s the same when mixing a collection of individually enhanced sounds.</span></p>
<p class="p3"><span class="s1">That’s what <i>integrating EQ</i> is for: helping us to integrate – or ‘fit’ – the individually enhanced sounds together into a mix or soundscape, ensuring they all work together while remaining clear and audible. As with our choice of <em>corrective EQ</em>, the <em>integrating EQ</em> should be a clean plug-in that does not impart any tonality or character of its own. A six-band fully parametric EQ with high and low pass filtering and the option to switch the lowest and highest bands to shelving is a good choice here.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="816" height="653" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi.jpg 816w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-800x640.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-768x615.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/08-pichi-600x480.jpg 600w" sizes="(max-width: 816px) 100vw, 816px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
	<p><span class="s1">We use <em>integrating EQ</em> to <i>maintain</i> clarity and tonal separation within a mix. We listen to how our <em>enhanced</em> sounds affect each other when introduced to the mix, and we make appropriate tweaks with <em>integrating EQ</em> to fix any conflicts and restore the preferred elements of each sound. How?</span></p>
<h4><strong>INTEGRATING EQ EXAMPLE</strong></h4>
<p class="p3"><span class="s1">Let’s go back to the earlier example of the piano and the strummed acoustic guitar, where each instrument sounded good on its own but both instruments lost clarity and tonal separation when mixed together. </span><span class="s1">Imagine the piano and the acoustic guitar have been loaded into our Mixing With Headphones template. </span><span class="s1">Using the individual channel solo buttons along with the spectrum analyser on the mix bus allows us to examine the frequency spectrums of the piano and the acoustic guitar individually. Conflicts between their frequency spectrums can be identified by temporarily adjusting both sounds to the same perceived loudness,</span><span class="s1"> then alternating between soloing each sound individually and soloing both simultaneously.</span></p>
<p class="p3"><span class="s1">For this example let’s say that, due to our clever use of <em>corrective EQ</em> and <em>enhancing EQ</em>, both sounds are full-bodied and rich but therein lies the first problem: </span><span class="s1">they’re both competing for our attention in the midrange. </span><span class="s1">That means we have to apply <em>integrating EQ</em> with the goal of making them <em>work together</em> in the midrange.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="09-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/09-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">We start by prioritising each competing sound based on its musical and/or textural role in the mix. </span><span class="s1">We want to use minimal <em>integrating EQ</em> on foundation sounds and featured sounds that play musically significant parts, preserving the musicality and tonality that we’ve already highlighted in those sounds with the <em>enhancing EQ</em>. Textural sounds and background sounds are more forgiving of tonal changes so it is smarter to apply any significant <em>integrating EQ</em> changes to those sounds. </span><span class="s1">Let’s examine the roles of the acoustic guitar and the piano in this particular piece of music to prioritise them accordingly.</span></p>
<p class="p3"><span class="s1">Although the acoustic guitar’s rhythmic playing helps the drums and bass guitar to propel the music forward, the guitar is never actually featured in the mix of this piece of music. Therefore its primary purpose is <em>textural</em>; it provides a gap-filling layer of musical texture in the background. </span><span class="s1">We can use a lot of <em>integrating EQ</em> here if we have to, as long as it doesn’t interfere with the acoustic guitar’s <em>textural</em> role.</span></p>
<p class="p3"><span class="s1">What about the piano? In this piece of music, the left hand is playing a <em>textural</em> role with gentle low chords that complement the bass guitar and thicken the acoustic guitar. The right hand, however, is playing a <em>musically significant role</em> by adding sharply punctuating chords along with short melodies that fill the spaces between vocal lines, and those melodies often conflict with the acoustic guitar. </span><span class="s1">These observations tell us that we can manipulate the piano’s lower frequencies (left hand, textural) as required to make it work in ensemble with the bass guitar and the guitar, but we need to be very conservative with any EQ applied to the midrange (right hand, musically significant) to avoid altering the tonality of the punctuating chords and short melodies.</span></p>
<p><span class="s1">Having established that the acoustic guitar’s tonality has a lower priority than the piano’s tonality in this piece of music, the acoustic guitar is an appropriate starting place for applying integrating EQ.</span></p>
<p><span class="s1">Let’s make a clarifying dip in the acoustic guitar’s spectrum, right where the two sounds share overlapping peaks in their spectrums – which is almost certainly the cause of the problem.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/10-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">This upper midrange dip will change the tonality of the acoustic guitar, of course. Fortunately, in this case it will make it more subdued and appropriate for the background textural role it plays in the mix. More importantly, however, it will contribute to the overall clarity of the mix by creating room for the piano without altering the piano sound itself.</span></p>
<p><span class="s1">To fine-tune the depth of the dip (i.e. how many dB to cut) and its bandwidth (Q), we should switch the applied EQ in and out while soloing the guitar and piano separately and together, and also check the results on the spectrum analyser. We want to dip just enough out of the acoustic guitar to leave room for the right hand parts of the piano to be heard clearly, but no more.</span></p>
<p><span class="s1">We might find that the required depth and bandwidth of the dip has improved the clarity of the piano within the mix, but the acoustic guitar has become less interesting. </span><span class="s1">We can musically compensate for this change in the acoustic guitar’s tonality by adding small boosts an octave above and below the dipped frequency in the acoustic guitar’s spectrum as shown below.</span></p>
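<p>As a concrete sketch of such a dip plus octave boosts, the snippet below implements a parametric peaking-EQ band using the standard biquad formulas from Robert Bristow-Johnson’s Audio EQ Cookbook. It is not from this article’s workflow – the sample rate, frequency, gain and Q values are illustrative placeholders that would ultimately be set by ear.</p>

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook).
    Negative gain_db makes a dip, positive gain_db makes a boost."""
    a_lin = 10 ** (gain_db / 40)           # sqrt of the linear gain at f0
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(samples, b, a):
    """Run one biquad band over a mono sample list (Direct Form I)."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Illustrative values only: a -3dB dip at 880Hz (Q = 1.5) plus +1dB
# complementary boosts an octave below and above, all at 48kHz.
bands = [(880.0, -3.0, 1.5), (440.0, 1.0, 1.5), (1760.0, 1.0, 1.5)]
```

Running <code>biquad</code> once per entry in <code>bands</code> chains the whole set onto one track, which is how most channel EQs apply multiple bands internally.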

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/11-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Again, switching the EQ on and off while soloing the instruments individually and together, and watching the spectrum analyser, will help us get the settings just right. [The <em>complementary EQ</em> technique shown above can be applied whenever a sound has been given a necessary integrating EQ peak or dip that subtracts some of that sound’s musicality: add small boosts an octave either side of any significant dips, and small cuts an octave either side of any significant boosts.]</p>
<p>The process of applying integrating EQ and complementary EQ to the acoustic guitar might reveal other areas worth working on. For example, let’s say this ‘soloing with spectrum analysis’ process revealed some upper harmonics in the piano sound that were worth bringing out. Applying a small <em>integrating EQ</em> dip in the acoustic guitar’s spectrum will create room for those upper harmonics of the piano to shine through, and applying a small <em>complementary EQ</em> boost in the acoustic guitar’s spectrum an octave higher will do the same for the acoustic guitar’s upper harmonics.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="235" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/12-pichi-600x197.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">All of the integrating and complementary EQ changes detailed above have improved the clarity of, and separation between, the acoustic guitar and the <em>right hand parts</em> of the piano by focusing on their respective textural and musical roles. We’ve already established that the <em>left hand parts</em> of the piano are playing a textural role, as is the acoustic guitar, so let’s see how they sit alongside one of the mix’s foundation sounds that shares some of the same spaces within the frequency spectrum: the bass guitar.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="27-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/27-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Adding the bass to the piano and acoustic guitar, and switching/soloing between them, shows that there is some worthwhile upper harmonic detail in the bass sound that fits nicely into a natural dip in the piano’s spectrum but is being masked by one of the complementary EQ boosts we added previously to the acoustic guitar. Because the bass is a foundation sound that we’ve already got sounding right within the foundation mix, we want to avoid altering it if possible; it is one of the internal references for our mix. Rather than boosting the upper harmonics of the bass, we’ll reduce the complementary boost added previously to the acoustic guitar just enough to allow those upper harmonics of the bass to be audible again – as shown below.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/13-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>This process has also revealed that between the bass, the left hand of the piano and the acoustic guitar there is more low frequency <em>bloom</em> in the mix than we’d like. It’s not necessarily boomy or wrong, but it is bordering on sounding <em>bloated</em> and <em>muddy</em> in the low frequencies – especially when compared to the low frequencies in the reference track we added to the Mixing With Headphones template before starting this mix. We don’t want to change the low frequencies in the bass because we got them right when establishing the foundation mix, and we know that any alterations to the foundation mix are likely to result in a ripple of changes throughout the mix. In this example, reducing the low frequencies of the bass to minimise the risk of the mix sounding ‘bloated’ will make the kick sound as if it has too much low frequency energy <em>or</em> is generally too loud. This will lead us to make changes to the kick, and the domino effect will topple through the mix from there.</p>
<p>Because we’ve been working with the acoustic guitar so far, we’ll start there by adding a subtle low frequency shelf or a gentle high pass filter to pull down its low frequencies just enough to clarify what is happening between the bass guitar and the left hand parts of the piano.</p>
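<p>For reference, a ‘gentle high pass filter’ of the kind mentioned above can be as simple as a first-order (6dB/octave) design. The sketch below is a textbook one-pole high-pass, not anything specific to this mix; the 100Hz cutoff in the comment is an arbitrary placeholder.</p>

```python
import math

def one_pole_highpass(samples, fs, cutoff_hz):
    """First-order (6dB/octave) high-pass: gently rolls off energy below
    cutoff_hz rather than removing it abruptly."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    k = rc / (rc + dt)                 # smoothing coefficient, 0..1
    out = []
    prev_x = prev_y = 0.0
    for x in samples:
        y = k * (prev_y + x - prev_x)  # passes changes, forgets DC
        out.append(y)
        prev_x, prev_y = x, y
    return out

# e.g. one_pole_highpass(guitar_samples, 48000, 100.0)
```

A steeper filter (12 or 24dB/octave) would remove more low end, but the gentler slope is usually the safer starting point for a textural sound that still needs some body.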

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="14-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/14-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>From here we can see that the left hand parts of the piano, which are playing a textural role, are the remaining cause of the excessive bloom. We can wind them back with some low frequency shelving or a gentle high pass filter on the piano’s <em>integrating EQ</em>.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="15-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/15-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>While we’re working on the piano, let’s bring out those upper harmonics we revealed earlier by adding a small boost to the piano in the same area where we previously made a small dip in the acoustic guitar.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="234" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="16-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/16-pichi-600x196.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>All of the <em>integrating EQ</em> and <em>complementary EQ</em> changes detailed above have resulted in the bass, piano and acoustic guitar sitting together clearly and musically in the mix, and yet most of that improvement was achieved by making changes to the lowest priority sound of the three: the acoustic guitar. Its entirely textural role in the mix made it the most sensible place to apply <em>integrating EQ</em> changes. Two very subtle changes were made to the piano to improve its placement in the mix, and no changes were made to the bass guitar – which is in keeping with our goal of using the foundation sounds as a point of reference to build the mix around.</p>
<p class="p3"><span class="s1">We should not get too hung up on soloing the acoustic guitar and worrying about how the EQ has changed its sound when heard in isolation. It doesn’t matter what the acoustic guitar sounds like <em>in isolation</em> (i.e. when solo’d) because the listener <em>is never going to hear it in isolation</em> – it is never featured in the music. It remains a background textural sound. Therefore the only thing that really matters beyond its musicality is how it affects other sounds in the mix. In this example, the applied integrating EQ has allowed the guitar to sit nicely <em>behind</em> the piano rather than <em>under</em> it. As we will see in the following illustrations, the acoustic guitar’s spectrum (and therefore its tonality) has been altered to allow it to fulfil its spectral role in the mix: filling in the spaces between the other instruments.</span></p>
<p>The illustration below adds the kick drum’s spectrum (shown in orange) to the image so we can see how it works with the bass and the piano.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="17-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/17-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>As shown above, we have used the strategic application of integrating EQ to alter the perceived volume of each sound in the mix by boosting important parts of a given sound’s spectrum and/or cutting parts out of competing sounds’ spectrums, rather than making global ‘brute force’ fader changes. Each of these sounds was already <em>loud enough</em> in the mix; it just wasn’t <em>clear enough</em>, and we’ve used integrating EQ to clarify it.</p>
<p>The illustration below shows the spectrums before any integrating EQ was applied. There is too much overlap in significant parts of each sound’s spectrum, resulting in a poor mix that is lacking in clarity and tonal separation.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="261" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="18-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/18-pichi-600x218.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4><strong>Complementary EQ, Shared EQ &amp; Opposite EQ</strong></h4>
<p>In the <em>integrating EQ</em> example given above we introduced the concept of <em>complementary EQ</em>, where an EQ cut was accompanied by complementary boosts applied to the same sound, typically an octave (or other harmonically valid interval) above and/or below the centre frequency of the cut. If the cut was, say, -2dB at 880Hz, the complementary boosts would be placed at 440Hz (an octave below 880Hz) and 1.76kHz (an octave above 880Hz).</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="295" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="19-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/19-pichi-600x247.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Each boost would have the same bandwidth (Q) as the cut, and each boost would start at +1dB with the intention of collectively returning the 2dB that was lost to the cut (conceptually aiming to maintain the same overall energy in the signal but redistributing it within the spectrum). However, the amount of boost and the choice of frequencies will ultimately be decided by ear, because nobody cares about the theory if the end results don’t sound good.</p>
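<p>The halve-and-invert starting point described above can be sketched in a few lines of Python. This is only the arithmetic for the starting values, not a rule – the function name and the returned (frequency, gain, Q) tuples are illustrative, and the final settings are always decided by ear.</p>

```python
def complementary_bands(freq_hz, gain_db, q):
    """Given one integrating-EQ band (a cut if gain_db < 0, a boost if
    gain_db > 0), return starting points for two complementary bands an
    octave below and above: opposite in sign, half the magnitude each,
    same bandwidth (Q), aiming to return the energy the first band removed."""
    comp_gain_db = -gain_db / 2
    return [(freq_hz / 2, comp_gain_db, q), (freq_hz * 2, comp_gain_db, q)]
```

For the -2dB cut at 880Hz in the example above, this yields +1dB at 440Hz and +1dB at 1.76kHz.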
<p>Sometimes an <em>integrating EQ</em> dip has an adverse effect on the sound it is applied to, and in those situations we can resort to <em>shared EQ</em>. The <em>integrating EQ</em> example shown above started by placing a dip in the acoustic guitar’s frequency spectrum to clarify the piano sound. Let’s say the dip needed to be -3dB at 880Hz with a bandwidth (Q) of 1.5 in order to do its job, but the change to the acoustic guitar’s tonality was more than we were willing to accept. In this situation we can copy that same EQ on to the piano, and <em>share</em> the 3dB difference between the two instruments. For example, perhaps a dip of -2dB is acceptable on the acoustic guitar, and we can make up the difference with a +1dB boost in the same part of the spectrum on the piano without adversely affecting its tonality. Now we have created the same 3dB difference at 880Hz required between the piano and the acoustic guitar, but have changed it from one large EQ change on one instrument to two smaller EQ changes shared between two instruments.</p>
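<p>The shared EQ arithmetic above is simple enough to write down. A hedged sketch (the function name and sign convention are my own, not the article’s): given the relative difference required between two sounds and the largest cut the lower-priority sound can tolerate, split the difference into a cut on one sound and a boost on the other.</p>

```python
def shared_eq(required_db, max_cut_db):
    """Split a required relative level difference (in dB, both arguments
    positive) between two sounds: cut as much as is tolerable on the
    lower-priority sound, then make up the remainder with a boost in the
    same part of the spectrum on the other sound.
    Returns (cut_db, boost_db), the cut expressed as a negative value."""
    cut_db = min(required_db, max_cut_db)
    boost_db = required_db - cut_db
    return -cut_db, boost_db
```

<p>So <code>shared_eq(3.0, 2.0)</code> reproduces the example above: a -2dB dip on the acoustic guitar and a +1dB boost on the piano at the same frequency and Q.</p>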

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="717" height="295" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="20-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi.jpg 717w, https://www.audiotechnology.com/wp-content/uploads/2023/11/20-pichi-600x247.jpg 600w" sizes="(max-width: 717px) 100vw, 717px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When using <em>integrating EQ</em> with sounds that are competing with each other but not in any obvious or significant manner, it’s worth taking advantage of differences between each sound’s frequency spectrum by using <em>opposite EQ</em>. For the <em>integrating EQ</em> example used earlier we saw that the piano sound had a peak in the upper range of its spectrum where the acoustic guitar did not, and we put a subtle dip of matching bandwidth (Q) at that place in the acoustic guitar’s spectrum to increase the separation between the two sounds. This use of <em>integrating EQ</em> might have no significant effect on the acoustic guitar’s sound (perhaps the acoustic guitar doesn’t contain much musical value in that area) but creating the dip will further separate the two sounds while bringing the piano forward in a way that sounds better than boosting the peak on the piano’s spectrum – which might sound unnatural or perhaps even make certain notes ‘ping’ out (for which the piano tuner ultimately takes the blame).</span></p>
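<p>Conceptually, finding opposite EQ opportunities means scanning for regions where one sound’s spectrum sits well above the other’s. As a toy sketch (the function name and 6dB threshold are hypothetical, and real per-bin dB spectra would come from the analyser):</p>

```python
def opposite_eq_candidates(spec_a_db, spec_b_db, threshold_db=6.0):
    """Return the bin indices where sound A's magnitude spectrum (in dB)
    sits at least threshold_db above sound B's: candidate regions for a
    clarifying dip on B, or for leaving A untouched entirely."""
    return [i for i, (a, b) in enumerate(zip(spec_a_db, spec_b_db))
            if a - b >= threshold_db]
```

The returned bins are only candidates; whether a dip there actually improves separation is still judged by soloing and listening, as described above.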
<h4 class="p3"><strong><span class="s1">TONAL PERSPECTIVE</span></strong></h4>
<p class="p3"><span class="s1">After applying <em>corrective EQ</em>, <em>enhancing EQ</em> and <em>integrating EQ</em>, it is always important to check the <i>tonal perspective</i> of each sound. Does the tonality of each individual instrument sound as if it belongs in the same mix as the other instruments?</span></p>
<p class="p3"><span class="s1">It is easy to lose track of tonal perspective and end up with one or two sounds that sound very good in isolation and maintain clarity and separation in the mix, yet <em>don’t sound as if they belong in the same mix</em>, e.g. they’re considerably brighter or duller than the other sounds. They are not in the mix’s <em>tonal perspective</em>.</span></p>
<p class="p3"><span class="s1">This is the same problem that happens when we start combining sounds from different sample libraries, as mentioned earlier. Each sample library brand has their own sound engineers, producers and mastering engineers, therefore each sample library brand evolves its own ‘sound’ in the same way that some sound engineers, producers and boutique record labels evolve their own ‘sound’. The samples might <i>all</i> sound good individually, but there’s no guarantee (or likelihood) that samples from different sample library brands will work together without some kind of <em>integrating EQ</em>. It’s like sending all of the drum tracks but nothing else to one engineer to mix and master, all of the guitar tracks but nothing else to another engineer to mix and master, and all of the vocal tracks but nothing else to another engineer to mix and master – each engineer might do a great job on their parts, but there is no guarantee <i>or</i> likelihood that the individually mixed and/or mastered stems will automagically work together when combined in a mix. The individual sounds need to be tailored to fit together using <em>integrating EQ</em>, not simply layered on top of each other using fader levels.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="674" height="588" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="21-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi.jpg 674w, https://www.audiotechnology.com/wp-content/uploads/2023/11/21-pichi-600x523.jpg 600w" sizes="(max-width: 674px) 100vw, 674px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When EQing individual sounds within a mix we must ensure they all sound as if they belong in the same mix, i.e. they have the same <i>tonal perspective</i>. If one sound proves to be overly bright or overly dull within the mix it should be fixed in the mix, because fixing it later is going to take more time in re-mixing and/or more cost in mastering.</span></p>
<p class="p3"><span class="s1">After introducing any significant EQ changes to a sound – whether they’re <em>corrective</em>, <em>enhancing</em> or <em>integrating</em> – always solo the sound and switch the EQ in and out while checking on the spectrum analyser and the 6dB guide to make sure the sound’s tonality is behaving itself and not steering the mix towards being too bright or too dull.</span></p>
<p>By following the strategic step-by-step process demonstrated in this instalment, i.e. introducing one instrument at a time to our mix and checking it with the tools built into the Mixing With Headphones template, we can make high quality mixes in headphones that, <em>tonally</em> at least, should need no more than five minutes of mastering to sound acceptable.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #d56d2e;text-align: left;font-family:Source Sans Pro;font-weight:700;font-style:italic" class="vc_custom_heading" >Next instalment: Dynamic Perspective. Coming soon…</h2><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1699314534051 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">EQUAL LOUDNESS COMPENSATION</span></strong></h4>
<p class="p3"><span class="s1">In the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">second instalment</a></strong></span> of this six-part series we looked at the Equal Loudness Contours and saw how our hearing’s sensitivity to different frequencies changes with loudness. Here are those Equal Loudness Contours again…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="22-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/11/22-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>

	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">As we learnt in the second instalment, reducing the sound’s SPL means our hearing becomes less sensitive to the lower and higher frequencies compared to the mid frequencies of that sound. This doesn’t only affect our perception of the overall mix’s tonality, it also affects our perception of individual sounds <i>within</i> a mix. If a sound has been put into the correct tonal perspective with the other sounds but <i>then</i> turned down significantly in the mix to play an atmospheric background role, it has been shifted down to a lower Equal Loudness Contour than the other sounds and will therefore sound duller and lacking in low frequencies compared to the other sounds in the mix; it is no longer in the same <i>tonal perspective</i> and will easily get lost in the mix at times. A small EQ boost in the very high frequencies (above 8kHz) and the low frequencies (below 250Hz) can help these lost sounds remain clear and audible within the mix while retaining their tonal perspective. If an individual sound in the mix is intended to fade out to silence, consider automating a small high and low frequency boost that subtly <em>increases</em> as the sound’s level <em>decreases</em> in order to maintain its clarity as it fades out.</span></p>
<h4><strong>Radiant Fade Away</strong></h4>
<p>If we are working on a mix that has a long fade out – the kind where the music has been recorded beyond the intended fade out – we can take Equal Loudness Compensation one step further by applying a subtly increasing boost of high and low frequencies (i.e. fractions of a dB below 250Hz and above 8kHz) over the mix bus for the duration of the fade out. This maintains the mix<span class="s1">’</span>s tonal perspective and clarity all the way down to silence, and can have an excellent effect when the intention is to <em>fade out</em> the mix rather than <em>dull out</em> the mix.</p>
<p>The EQ curve shown below, based on the Equal Loudness Contours shown throughout this series, is the compensation curve required for a mix made at 80 Phons (the recommended monitoring level for mixing) to sound tonally correct if replayed at the Threshold of Audibility (0 Phons, or silence).</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="746" height="270" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="23-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi.jpg 746w, https://www.audiotechnology.com/wp-content/uploads/2023/11/23-pichi-600x217.jpg 600w" sizes="(max-width: 746px) 100vw, 746px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The concept is simple: we apply an EQ curve like this over the mix bus, and automate it through the duration of the fade out (automating the EQ<span class="s1">’</span>s <em>blend</em> or <em>mix</em> control) so that all of the EQ settings are at 0dB at the start of the fade but have reached the levels shown on the curve by the end of the fade – as shown in the illustration below. This maintains a more consistent tonal balance as the mix <em>fades out</em> rather than <em>dulls out</em>. The mix continues to shine, all the way down to silence.</p>
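A minimal sketch of that automation, assuming a linear fade and reducing the end-of-fade curve to a single low-frequency boost and a single high-frequency boost (the boost amounts are hypothetical placeholders, not read from the illustrated curve):

```python
def radiant_fade_eq(progress, low_boost_db=3.0, high_boost_db=4.0):
    """EQ compensation at a given point in the fade out.
    progress runs from 0.0 (fade start, all bands at 0dB) to 1.0
    (silence, full compensation curve) -- equivalent to automating
    the EQ's blend/mix control. Boost amounts here are hypothetical."""
    progress = min(max(progress, 0.0), 1.0)
    return (progress * low_boost_db,    # boost below ~250Hz
            progress * high_boost_db)   # boost above ~8kHz

# Halfway through the fade the compensation is half applied:
low, high = radiant_fade_eq(0.5)
```

Mapping the automation to the fade's progress rather than to time keeps the tonal compensation locked to the level reduction, whatever shape or length the fade takes.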

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="826" height="650" src="https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="24-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi.jpg 826w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-800x630.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-768x604.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/11/24-pichi-600x472.jpg 600w" sizes="(max-width: 826px) 100vw, 826px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The <span class="s1">‘</span>radiant fade away<span class="s1">’</span> technique keeps hooks and choruses audible for longer throughout the fade out, maintaining the listener<span class="s1">’s attention and keeping the music alive in their mind long after the end because the mix faded out but never <em>dulled out</em> as most mixes do; it doesn’t follow the traditional ‘end of song’ tonal trajectory.</span></p>
<p><span class="s1">As with all long fade outs, we must always keep it musically timed – meaning the last <em>clearly audible and identifiable note</em> at the end of the fade-out is also the last note of a measure, and the fade out reaches silence just before the first note of the next measure begins.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1699314690493 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">THE MOST IMPORTANT SOUND IN THE MIX</span></strong></h4>
<p class="p3"><span class="s1">There is often a difference between what the composers, musicians, producers and sound engineers think is the most important sound in a mix, and what the listeners think is the most important sound in the mix.</span></p>
<h4 class="p3"><strong><span class="s1">Significance</span></strong></h4>
<p class="p3"><span class="s1">Ask someone who isn’t a composer or musician to sing their favourite song. They will sing the vocal lines, and in between they will mimic drum fills, instrumental solos, echo effects or <i>whatever</i> grabs and holds their attention in between the vocal lines. However, they will always jump straight back to the next vocal line <i>without missing a word</i>, meaning the vocal takes a higher priority in their perception than anything else in the mix. They’re the people <i>buying</i> the music, and they don’t care what the composer, musician, producer or engineer thought was the most important sound when starting the mix. All that matters to the music consumer is what sticks in their mind, what they look forward to hearing again, and what ultimately pushes them to a purchasing decision. This ‘sing your favourite song’ exercise tells us a lot about which parts of a mix are the most important to the listener. If there are vocals, it is invariably the vocals…</span></p>
<h4 class="p3"><strong><span class="s1">Insignificance</span></strong></h4>
<p class="p3"><span class="s1">Always remember that only guitarists, drummers and sound engineers <i>actually care</i> about how great the guitar or the snare sound is, and only guitarists, drummers and sound engineers buy recordings simply because they have a great guitar or snare sound. To everyone else those things are just another component of the mix with varying levels of importance. They’re not worth sacrificing the first hour of a three hour mix session for; as long as they serve their role in the music without distraction, listeners will simply assume that the sounds heard in the mix are the sounds that the artist intended. In contrast, the voice is something <i>everyone</i> can play (whether singing or talking), and <i>all</i> listeners will notice a poor vocal sound. Spending the first hour of a three hour mix getting the vocal right is a smarter use of time than spending it on the guitar or snare sound.</span></p>
<p class="p3"><span class="s1">The same logic and thinking can be applied to instrumental music; focus on what holds the listener’s attention, and make sure there is <i>always</i> something to hold the listener’s attention – if there’s nothing in the music at a given time, fill the space with an echo or similar effect. It is up to the composer and musicians to provide the notes, and the engineer to deliver those notes with clarity and separation while also using the gaps between the notes as required and/or appropriate.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Mixing With Headphones 4</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 3</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Sun, 29 Oct 2023 22:23:52 +0000</pubDate>
				<category><![CDATA[Issue 91]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 3]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=77130</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Mixing With Headphones 3</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the previous instalment of this six-part series we looked at the fundamental differences between mixing with headphones and mixing with speakers. We saw that speaker monitoring introduces a lot of variables to our mix because what we are hearing from the speakers is <i>not</i> what is coming out of the mixing console or DAW. The frequency response and distortion of our speakers has been embedded into it; the acoustics of our listening room have been superimposed upon it; and there might be comb-filtering issues due to reflections off nearby surfaces that have influenced our tonal decisions. Compensating for those variables during the course of a mixing session adds resilience to our speaker mixes and thereby improves how well they translate to other listening environments.</span></p>
<p class="p3"><span class="s1">None of those variables occur when monitoring with headphones, and therefore our headphone mixes don’t get the same resilience built into them – meaning they don’t translate to numerous playback situations as well as speaker mixes do. There are, however, a number of tools and hacks we can use to reveal and/or emulate those variables and compensate for them.</span></p>
<h4 class="p3"><span class="s1"><b>HEADPHONE MIXING TOOLS &amp; HACKS</b></span></h4>
<p class="p3"><span class="s1">In every discussion about audio metering devices and similar tools there’s always someone offering the seemingly well-intentioned advice of “just trust your ears”. Such platitudinal </span><span class="s2">nonsense</span><span class="s1">, comforting though it might be, always needs to be taken in context <i>before</i> being summarily dismissed with the same “you don’t need all of that stuff” gusto that accompanied it. Why?</span></p>
<p class="p3"><span class="s1">It usually comes from experienced people who have already made enough expensive and/or regrettable mistakes to know what to listen for, <i>and</i> who are working in situations that provide enough information to allow informed decision-making (i.e. working in acoustically-designed control rooms fitted with big monitor speakers). They have also been receiving years of feedback from downstream mastering engineers and others, which has further refined their mixing skills. In other words, they have the right combination of equipment, experience and listening skills that allows them to trust what their ears are telling them.</span></p>
<p class="p3"><span class="s1">The same ‘feel good’ advice is often parroted by novices, wannabes and wish-casters who embraced it earlier and are diligently waiting for it to ‘kick in’ and prove true – until then, their ‘trust your ears’ mixes are deteriorating while their mastering engineer’s income is improving.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">How wonderful would it be if the entirety of audio engineering could be summed up with “just trust your ears”? There would be no need for all of the eye-glazing maths and physics; no need for the many thousands of words and illustrations in audio engineering textbooks; no need for audio courses; no need for acousticians; and no need for sound engineers. Audio engineering would be as intuitive as walking – it only gets hard if you think about how you are doing it.</span></p>
<p class="p3"><span class="s1">The people asking the questions that trigger the &#8216;just trust your ears&#8217; response don’t have the required combination of equipment, experience and listening skills to be <i>able</i> to trust what their ears are telling them – which is why they are asking such a question in the first place. Telling them to &#8216;just trust their ears&#8217; is misleading at best, and flexing at worst – especially if it is given in reference to mixing with headphones. No matter how much expertise the people offering such advice might have, they obviously don’t have the common sense required to properly contextualise the question and either provide a <em>meaningful</em> answer or STFU</span><span class="s1">. As has been repeated many times by numerous leading figures throughout history, “if your words are not better than silence, then be silent”.</span></p>
<p class="p3"><span class="s1">We’ve already established that a number of variables are missing in headphone monitoring that exist in speaker monitoring. This means we cannot simply &#8216;trust our ears&#8217; when mixing in headphones because our ears are not getting enough information to make reliable decisions. We can, however, benefit from tools that allow us to <i>see on a screen</i> what we <i>don’t hear in headphones</i> and thereby provide us with meaningful visual guidelines. Staying within those visual guidelines allows us to trust our ears for everything else, and hopefully make headphone mixes that translate well across <i>all</i> playback systems in the same way that a good speaker mix does.</span></p>
<p class="p3"><span class="s1">What do we need? Read on…</span></p>
<h4 class="p3"><span class="s1"><b>HEADPHONES &amp; FREQUENCY RESPONSE</b></span></h4>
<p class="p3"><span class="s1">The requirement for good headphones goes without saying, of course, for all of the frequency response and room acoustics reasons outlined in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>. A pair of contemporary headphones, voiced to the Harman curve or similar, should take care of the frequency response aspects of the translation problem and prevent any significant tonal surprises when a mix made on headphones is heard through speakers.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-3989" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-3989 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Audio engineering would be as intuitive as walking…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-9084" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-9084 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="01-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/01-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">There are many headphones on the market that are suitable for mixing. As a generalisation, open-back headphones provide higher fidelity, especially at low frequencies, but closed-back headphones have the advantage when isolation is required.</span></p>
<p class="p3"><span class="s1">Headphones with active noise-cancellation are not recommended for mixing, and neither are wireless headphones. Active noise-cancelling headphones use polarity inversion and equalisation to reduce (ie. cancel) the audibility of background sounds (ie. noise). Wireless headphones use data compression algorithms to reduce the signal’s bitrate so it can be transmitted wirelessly without drop-outs and buffering issues. Although each technology provides an enjoyable <em>listening</em> experience, neither can be trusted for <em>mixing</em>.</span></p>
<p class="p3"><span class="s1">If you plan on mixing through the headphone socket of a laptop or similar portable device – rather than using an audio interface or a dedicated headphone amplifier – you’re going to need headphones with <i>high sensitivity</i> and <i>low impedance</i>. Why? Because they’re easier to drive to useful SPLs from the low voltage amplifiers found in battery-powered equipment such as mobile devices. To understand why, scroll down to ‘Impedance, Power, Sensitivity &amp; SPL’.</span></p>
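The relationship between impedance, sensitivity and achievable SPL can be sketched with a little arithmetic. This is a rough illustration in Python, assuming a sensitivity rating in dB SPL at 1mW; the voltage and headphone figures are hypothetical examples, not measurements of any particular model:

```python
import math

def headphone_spl(v_rms, impedance_ohms, sensitivity_db_spl_per_mw):
    """Rough SPL estimate from amplifier output voltage.

    P = V^2 / Z gives the electrical power delivered to the headphones;
    SPL = sensitivity + 10*log10(P / 1mW) converts that to sound pressure,
    assuming the manufacturer rates sensitivity in dB SPL at 1mW.
    """
    power_mw = (v_rms ** 2 / impedance_ohms) * 1000.0
    return sensitivity_db_spl_per_mw + 10.0 * math.log10(power_mw)

# Illustrative figures only: ~0.5V RMS is typical of a phone/laptop socket.
low_z = headphone_spl(0.5, 32, 102)    # low impedance, high sensitivity
high_z = headphone_spl(0.5, 250, 96)   # high impedance studio headphones
print(round(low_z, 1), round(high_z, 1))
```

With these assumed figures, the low-impedance/high-sensitivity pair reaches roughly 15dB more SPL from the same 0.5V than the 250&#937; pair, which is the argument made in the paragraph above.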
<h4 class="p3"><span class="s1"><b>6dB GUIDE &amp; FREQUENCY BALANCE</b></span></h4>
<p class="p3"><span class="s1">Despite the voicing methods described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span> that aim to reduce tonal discrepancies between headphones and speakers, when mixing on headphones it is still easy to get sidetracked towards making a mix that is too bright or too dull – especially if the first sounds introduced to the mix are too bright or too dull and the rest of the mix is built around them. How do we keep ourselves on track? This is where the 6dB guide can be helpful…</span></p>
<p class="p3"><span class="s1">Many EQ plug-ins offer a spectrum analyser, and <i>some</i> of those spectrum analysers offer a ‘6dB guide’. This appears as a diagonal line beginning at 0dB at 1kHz and descending downwards at a rate of -6dB/octave as the frequency gets higher.</span></p>
<p class="p3"><span class="s1">If we listen to a number of well-engineered recordings while studying how their frequency spectrums compare to the 6dB guide, we’ll notice an interesting trend. Mixes that <i>sound like</i> they have a good balance of energy throughout the frequency spectrum tend to conform to the 6dB guide, as do direct-to-stereo purist recordings of acoustic music that are made with ‘accurate’ microphones (ie. those with a flat frequency response) and that are often described as sounding ‘natural’ or ‘pure’. Meanwhile, mixes that sound excessively bright will rise noticeably above the 6dB guide line, and mixes that sound excessively dull will fall noticeably below the 6dB guide line.</span></p>
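For the curious, the comparison against the guide can be approximated numerically. This is a minimal Python sketch assuming NumPy is available; `guide_deviation` is an illustrative name of my own, not a feature of any particular plug-in:

```python
import numpy as np

def guide_deviation(signal, sample_rate):
    """Compare a signal's spectrum to a -6dB/octave guide anchored at 0dB/1kHz.

    Returns (freqs, deviation_db): positive values sit above the guide
    (brighter than the reference slope), negative values sit below it (duller).
    """
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    # Drop DC, then normalise so the bin nearest 1kHz sits at 0dB.
    mag_db = 20.0 * np.log10(spectrum[1:] + 1e-12)
    freqs = freqs[1:]
    mag_db -= mag_db[np.argmin(np.abs(freqs - 1000.0))]
    guide_db = -6.0 * np.log2(freqs / 1000.0)  # -6dB per octave above 1kHz
    return freqs, mag_db - guide_db
```

A real analyser would average many overlapping windows of the mix; this single-window version is only meant to show the shape of the calculation.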

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="866" height="549" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="02-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi.jpg 866w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-800x507.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-768x487.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/02-pichi-600x380.jpg 600w" sizes="(max-width: 866px) 100vw, 866px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">Conforming to the 6dB guide does not guarantee that a mix has a good frequency balance, but it does offer a good point of reference – especially if used in conjunction with your musical reference track (see below).</span></p>
<h4 class="p3"><span class="s1"><b>Frequency Response Tools You Don’t Need</b></span></h4>
<p class="p3"><span class="s1">There are currently a number of devices and apps on the market that use DSP to ‘correct’ the frequency response and other sonic characteristics of numerous headphones. The idea is simple: enter the make and model of the headphones into the app and – assuming the manufacturer has already created a profile for those headphones – a compensating process will be inserted into the listening path to make the headphones sound ‘right’, or perhaps even make them sound like more expensive headphones.</span></p>
<p class="p3"><span class="s1">At best, such listening tools are just one more thing in the monitoring path affecting our decision making. The idea of monitoring equalisation and DSP correction has validity in the sound reinforcement world and also <em>debatably</em> in the recording studio world, which are both cases where room acoustics issues can be compensated for. As we’ve previously established, room acoustics problems don’t exist with headphones. I</span><span class="s1">t’s reasonable to assume that long-established professional headphone manufacturers like AKG, BeyerDynamic, Sennheiser (which also owns Neumann) et al know what they’re doing. Their contemporary headphones reflect decades of refinement as detailed in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span> of this series, and shouldn’t need any compensation.</span></p>
<p class="p3"><span class="s1">It’s also worth remembering the difference between <i>listening</i> with headphones and <i>mixing</i> with headphones. This important difference is often overlooked by engineers when choosing headphones. As with choosing studio monitors, it is not enough to simply listen to how well they reproduce music – the real test is how well they help us make good mixes. What they reveal about our mix decisions is more important than how much enjoyment they offer. Most of the DSP-based headphone correction tools are intended to provide an improved listening experience for audiophiles, <i>not</i> create a more revealing mixing environment. Any headphones that need equalisation to make them suitable for mixing are the wrong choice to begin with.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="690" height="590" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi.jpg 690w, https://www.audiotechnology.com/wp-content/uploads/2023/10/03-pichi-600x513.jpg 600w" sizes="(max-width: 690px) 100vw, 690px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">An interesting tool that sits somewhere between the 6dB guide mentioned earlier and the ‘correction’ equalisation mentioned above is one that performs a spectral analysis of our mix, compares it to the typical frequency spectrum of reference mixes of the same genre, and advises which parts of the mix’s spectrum need more or less energy when compared to the spectrums of the references. This is a very useful tool for people mixing on affordable nearfield monitors that don’t reliably reproduce much below 80Hz and who are working in rooms without much acoustic design or treatment, and are therefore effectively ‘flying blind’ when working with low frequencies. However, with good headphones and an appropriate reference track for the genre (as discussed in ‘Reference Tracks’, below) this type of tool shouldn’t be necessary because, from a spectral point of view, headphones voiced to the Harman target or similar provide a situation where we <i>can</i> trust what we’re hearing.</span></p>
<h4 class="p3"><span class="s1"><b>PHASE &amp; INTERAURAL CROSSTALK</b></span></h4>
<p class="p3"><span class="s1">Since the beginning of stereo headphone listening there have been <i>crosstalk generators</i>: circuits and algorithms that attempt to recreate the sensation of speaker listening by introducing interaural crosstalk to headphone listening. Their mere existence confirms what we’ve already seen throughout this series: speaker listening adds a number of variables that don’t exist with headphone listening. As we also saw in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>, compensating for those variables makes our mixes more resilient and thereby offers better translation across a wider range of playback systems. If we’re trying to re-introduce those variables to our mixes in order to compensate for them, it makes sense to use a crosstalk generator in our monitoring path.</span></p>
<p class="p3"><span class="s1"><i>Or does it?</i></span></p>
<p class="p3"><span class="s1">No. For our purposes we’re not interested in the interaural crosstalk itself – we’re interested in the <i>effect</i> it has on our mixing decisions. We can find that out by using a goniometer and a mono switch&#8230;</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="690" height="590" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="04-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi.jpg 690w, https://www.audiotechnology.com/wp-content/uploads/2023/10/04-pichi-600x513.jpg 600w" sizes="(max-width: 690px) 100vw, 690px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><span class="s1"><b>Goniometer &amp; Phase Correlation</b></span></h4>
<p class="p3"><span class="s1">The goniometer provides a visual indication of polarity and phase differences between the left and right channels of a stereo mix, hence it is often referred to as a <i>phase scope</i>, a <i>phase meter</i> or a <i>phase correlation meter</i> – although the latter term usually refers to a much simpler meter that has a linear scale from -1 to +1, as shown below:</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="837" height="284" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi.jpg 837w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-800x271.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-768x261.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/05-pichi-600x204.jpg 600w" sizes="(max-width: 837px) 100vw, 837px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">If the correlation indicator spends a lot of time between -1 and zero it means there are serious phase and/or polarity issues in the mix; those sorts of problems would probably be audible in speaker listening (due to interaural crosstalk) but can be very hard to notice in headphones.</span></p>
<p class="p3"><span class="s1">The phase correlation meter shown above is <i>almost but not quite</i> as helpful as the goniometer for our purposes: it shows the total correlation of the left and right channel signals, but its one-dimensional display and slower weighting prevent us from easily seeing into the mix and finding out which individual signals are correlating and which signals are not. So we’re back to the goniometer, which moves fast enough and in enough dimensions for us to identify individual sounds within the mix.</span></p>
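The number on a correlation meter is essentially a normalised cross-correlation of the two channels over a short window. A minimal NumPy sketch of that calculation (the function name is illustrative, and real meters add time-weighting on top):

```python
import numpy as np

def correlation(left, right):
    """Phase correlation as shown on a -1..+1 meter: +1 means the channels
    are fully in phase, 0 means unrelated, -1 means fully out of phase."""
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.sum(left * right) / denom) if denom > 0 else 0.0

t = np.arange(4800) / 48000.0
tone = np.sin(2 * np.pi * 440 * t)
print(correlation(tone, tone))    # identical channels read +1
print(correlation(tone, -tone))   # a polarity-inverted channel reads -1
```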
<p class="p3"><span class="s1">A small dot, typically green or blue (a nod to the cathode ray screens used in early goniometers), is moved around the screen using the instantaneous magnitudes and polarities of the left and right audio signals as rectangular coordinates – rather like a high speed game of Battleship but where ‘0,0’ is the centre of the board. The rapidly moving dot leaves a momentary trail of light, or <i>trace</i>, behind it that is sometimes referred to as a ‘Lissajous figure’ or ‘Lissajous curve’. It provides helpful insights into the instantaneous polarity and phase relationships of the left and right channels of our mix and how they might interact due to interaural crosstalk, but <i>only</i> if we know how to interpret it. Here’s how…</span></p>
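Before getting to the interpretation, the coordinate mapping itself can be sketched in a few lines. This assumes the common convention of rotating the L/R axes by 45° so that mono (centre-panned) material traces a vertical line; a real goniometer adds persistence and scaling on top of this:

```python
import numpy as np

def goniometer_xy(left, right):
    """Map stereo sample pairs to goniometer screen coordinates.

    The plot is the L/R plane rotated 45 degrees: a centre-panned (mono)
    signal draws a vertical line, hard left tilts towards the upper left
    and hard right towards the upper right of the display.
    """
    x = (right - left) / np.sqrt(2.0)
    y = (left + right) / np.sqrt(2.0)
    return x, y

tone = np.sin(2 * np.pi * np.arange(256) / 256.0)
x_mid, y_mid = goniometer_xy(tone, tone)    # centre pan: x stays at 0
x_oop, y_oop = goniometer_xy(tone, -tone)   # out of phase: y stays at 0
```

Plotting `y` against `x` for successive sample pairs reproduces the Lissajous-style trace described above.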

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="732" height="731" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi.jpg 732w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-300x300.jpg 300w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-600x599.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/10/06-pichi-100x100.jpg 100w" sizes="(max-width: 732px) 100vw, 732px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The goniometer’s display is divided into four equal-sized quarters, or ‘quadrants’, as shown above. </span><span class="s1">The top and bottom quadrants (shaded in green) represent points in the mix when the two channels have the same polarity and will combine <i>constructively</i> – in other words, if we added their amplitudes together the resulting magnitude would be higher than the highest of the two individual channel magnitudes at that point in time. When the trace (represented here as a blue dot in the centre) is in either of these quadrants it means both channels are simultaneously pushing the signal towards us or pulling it away from us, working together to create a very stable phantom image with better impact.</span></p>
<p><span class="s1">The side quadrants (shaded in red) represent moments in the mix when the two channels have opposing polarities and will combine <i>destructively</i> – in other words, if we added their amplitudes together the resulting magnitude would be lower than the highest of the two individual channel magnitudes at that point in time. When the trace is in either of these quadrants it means one channel of the stereo mix is pushing the signal towards us while the other channel is pulling it away from us, resulting in a vague phantom image without much impact.</span></p>
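A quick numerical sanity check of the quadrant idea: the trace sits in a side quadrant exactly when the instantaneous left and right samples have opposite signs. A minimal NumPy sketch (the function name is illustrative, not a standard metering term):

```python
import numpy as np

def side_quadrant_fraction(left, right):
    """Fraction of sample pairs landing in the goniometer's side (red)
    quadrants, i.e. moments where the channels have opposite instantaneous
    polarity and would combine destructively."""
    return float(np.mean(left * right < 0))

t = np.arange(4800) / 48000.0
tone = np.sin(2 * np.pi * 440 * t)
print(side_quadrant_fraction(tone, tone))    # identical channels: 0.0
print(side_quadrant_fraction(tone, -tone))   # inverted channel: almost 1.0
```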

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990674300"><div class="wpb_wrapper"><div id="bsa-block-970--450" class="bsaProContainerNew bsaProContainer-86 bsa-block-970--450 bsa-pro-col-1" style="display: block !important"><div class="bsaProItems bsaGridNoGutter " style="background-color:"><div class="bsaProItem bsaReset" data-animation="fadeIn" style=""><div class="bsaProItemInner" style="background-color:"><div class="bsaProItemInner__thumb"><div class="bsaProAnimateThumb" style="display: block;margin: auto;"><a class="bsaProItem__url" href="https://www.audiotechnology.com/advertise?sid=86&bsa_pro_id=872&bsa_pro_url=1" target="_blank"><div class="bsaProItemInner__img" style="background-image: url(&#39;https://www.audiotechnology.com/wp-content/uploads/bsa-pro-upload/1701057146-NAS_Fifty Line_DA-min.gif&#39;)"></div></a></div></div></div></div></div></div><script>
		</script>						<script>
						</script>
						</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When the dot ventures into either of the side quadrants it means there is an instantaneous polarity difference between the two channels due to either a polarity difference or a significant phase difference between two or more sounds within the mix. That’s <i>exactly</i> the kind of problem we need the goniometer to expose because it is difficult to identify when mixing in headphones but is readily noticeable when heard through speakers – assuming we know what to listen for. If a significant portion of the mix ventures into the side quadrants of the goniometer we should check the mix in mono through headphones or through a stereo speaker system; if there is a clearly audible problem in mono then we need to find the cause and fix it.</span></p>
<p class="p3"><span class="s1">Note that many reverberation and similar stereo time-based effects will create phase and polarity differences between channels as part of their effect, and in these cases it is up to us to decide if it’s a problem or not. If we mute and un-mute the effect repeatedly while watching the goniometer we should be able to identify what is going on and make that judgement.</span></p>
<p class="p3"><span class="s1">The illustration below shows a number of goniometer displays and what they mean…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1012" height="373" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi.jpg 1012w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-800x295.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-768x283.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/07-pichi-600x221.jpg 600w" sizes="(max-width: 1012px) 100vw, 1012px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">If we spend enough time observing different professionally mixed and mastered recordings on the goniometer while listening through speakers, we will notice an interesting trend. Mixes that stay mostly in the upper and lower quadrants will tend to sound clean and solid due to their good stereo correlation (where both channels are reinforcing each other), and will probably not change significantly when summed to mono. Mixes that have a lot of information in the side quadrants will tend to sound messy and vague due to their low stereo correlation (where the channels are diminishing each other), and will change significantly when summed to mono. [The descriptive terms given above may seem over-dramatic, but they will make sense to anyone who has spent enough time watching a goniometer while listening to many different recordings: mixes with good stereo correlation leave a different sonic fingerprint than mixes with poor stereo correlation.]</span></p>
<p class="p3"><span class="s1">The top quadrant of the goniometer also serves as a panning meter, as shown below. A single mono sound source panned hard left will appear as a diagonal line from the upper left to the lower right of the screen. Conversely, a single mono sound panned hard right will appear as a diagonal line from the lower left to the upper right of the screen. A sound panned to the centre will be a vertical line from top to bottom. If you pan a mono sound source from left to right, you should see a single straight line rotating from hard left (45° left of centre) to hard right (45° right of centre) on the goniometer.</span></p>
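Under a constant-power pan law, the trace angle follows the pan position directly. A small NumPy sketch of that relationship (the pan mapping shown is one common convention, not the only one in use):

```python
import numpy as np

def trace_angle(pan_position):
    """Goniometer trace angle for a constant-power panned mono source.

    pan_position runs from -1 (hard left) through 0 (centre) to +1 (hard
    right); the returned angle is in degrees from vertical, so -45 is hard
    left, 0 is centre and +45 is hard right, matching the display above.
    """
    theta = (pan_position + 1.0) * np.pi / 4.0  # constant-power pan angle
    left, right = np.cos(theta), np.sin(theta)
    return float(np.degrees(np.arctan2(right - left, left + right)))

print(trace_angle(-1.0))  # hard left: about -45 degrees
print(trace_angle(0.0))   # centre: about 0 degrees (vertical line)
print(trace_angle(1.0))   # hard right: about +45 degrees
```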

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="1012" height="555" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi.jpg 1012w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-800x439.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-768x421.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/08-pichi-600x329.jpg 600w" sizes="(max-width: 1012px) 100vw, 1012px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">When mixing, it is always worthwhile soloing the individual tracks/channels and checking them on the goniometer. If all of the individual sounds (mono or stereo) stay within the upper and lower quadrants, and the only things that enter the side quadrants are spatial effects like reverberation, room mics and similar that <em>rely</em> on phase differences or arrival time differences between channels to create their effect, your mix is probably going to translate well to speakers and other headphone systems.</span></p>
<p class="p3"><span class="s1">The goniometer is particularly helpful when setting up drum overheads using two widely spaced microphones. If you solo the overhead mics (while panned hard left and hard right) and alter the spacing of the two microphones just enough to reduce the amount of signal getting into the side quadrants, the overall drum mix will benefit when heard through speakers or summed to mono because the overheads are reinforcing the overall drum sound rather than diminishing it.</span></p>
<h4 class="p3"><span class="s1"><b>Mono Switch</b></span></h4>
<p class="p3"><span class="s1">The mono switch can be very helpful for creating a ‘worst case’ version of your mix, highlighting (if not <i>exaggerating</i>) any interaural crosstalk problems that might exist when the mix is heard through speakers.</span></p>
<p class="p3"><span class="s1">Most mixing consoles – whether hardware or software – include a mono switch that sums the stereo bus to mono. If not, it will be available on a plug-in that you can insert over the stereo mix bus and switch on and off as desired.</span></p>
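If no mono switch or plug-in is at hand, the fold-down itself is trivial: average the two channels. A minimal NumPy sketch showing why polarity-inverted material disappears in mono:

```python
import numpy as np

def mono_sum(left, right):
    """A basic mono switch: average the two channels so the mono level
    stays comparable to the stereo channels."""
    return 0.5 * (left + right)

t = np.arange(4800) / 48000.0
tone = np.sin(2 * np.pi * 440 * t)
# In-phase material survives the fold-down unchanged...
centred = mono_sum(tone, tone)
# ...while polarity-inverted material cancels completely.
cancelled = mono_sum(tone, -tone)
```

Some consoles sum without the 0.5 scaling; that only changes the overall level, not which sounds cancel.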
<h4 class="p3"><span class="s1"><b>DESKTOP MONITORS</b></span></h4>
<p class="p3"><span class="s1">Throughout this series we’ve discussed how to mix on headphones and thereby avoid the need for big studio monitors and the acoustic treatments required to make the most of them. We’ve discussed the ‘variable compensations’ that intrinsically happen when mixing on speakers but not when mixing on headphones, and we’ve discussed ways of emulating and/or building them into our headphone mixes.</span></p>
<p class="p3"><span class="s1">If we <i>really</i> want to make ‘market relevant’ headphone mixes that also translate well to speaker playback, it makes sense to have some speakers in our monitoring chain as a cross-referencing tool. They don’t need to be expensive big monitors with a flat frequency response and good low frequency extension, and they don’t need to be super accurate – headphones easily satisfy all of those requirements at a fraction of the price of big monitors and their associated room acoustic treatments. The main things the desktop monitors need to do are to reveal how the individual sounds in our mix will interact with each other when combined in the air, to confirm panning decisions, and to help us find the right balance for reverbs and other spatial effects that are difficult to judge in headphones. This means the main requirement for the desktop monitors is to image well, and few speakers image as well as single wide-range drivers in small enclosures such as those offered by Auratone, Grover Notting et al…</span></p>
<p class="p3"><span class="s1">When configured in an equilateral triangle with the listener, as detailed in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>, and with an appropriate absorptive material on our work surface to minimise comb filtering due to first order reflections off the work surface, these small speakers can provide a remarkably useful spatial reference for checking panning, reverberation levels and other spatial decisions that are difficult to judge on headphones. In essence, they fill in the gaps between headphone mixing and speaker mixing without resorting to expensive big monitors and the room acoustic treatments that are inevitably required to make the most of them.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="593" height="779" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="09-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">It would be great if we could use the speakers built into our laptops or tablets for this purpose, but those speakers cannot be trusted for panning and spatial decisions. Contemporary mobile devices have remarkably good sound quality for their size, but their internal speaker systems often include built-in spatial processing that’s designed to ‘throw’ the stereo image wider than the device itself. This allows built-in speakers that are typically less than 30cm apart to create a stereo soundstage that spreads to about 55cm apart (ie. about ±30° wide for the listener, as required for stereo speaker listening) when the user is at a typical viewing/working distance from the screen. It does this using clever manipulations of the stereo signal to fool the listener into perceiving a wider soundstage than seems possible under the circumstances. This spatial processing provides an impressive speaker <em>listening</em> experience for music and movies, but we cannot trust it for speaker <i>mixing</i> because it is exaggerating every panning and spatial decision we make to suit the device’s specific speaker placements and its specific spatial processing, which means there is no guarantee that our panning and spatial decisions will translate well to other systems. No matter how familiar we are with the <i>tonality</i> of our portable device’s sound, things get very different when we try to make <i>spatial decisions</i> with it because some things will be exaggerated and thereby mislead us to under-compensate, and other things will be downplayed and thereby mislead us to over-compensate.</span></p>
<p class="p3"><span class="s1">This brings us back to a small pair of single-driver desktop monitors that take up little space on the desk and are not intended to be anything other than spatial cross-referencing monitors. <i>That’s</i> what we need…</span></p>
<h4 class="p3"><span class="s1"><b>REFERENCE TRACKS</b></span></h4>
<p class="p3"><span class="s1">There are two reference tracks we should have for every headphone mix.</span></p>
<p class="p3"><span class="s1">The first is a stereo imaging test, the sort that’s widely available for testing hi-fi systems and can be found on-line and on every audiophile test disc ever made [Google ‘stereo imaging test’]. Ideally it will have tone bursts or dialogue panned to specific locations within the stereo mix. Listening to this allows us to ‘settle in’ to the stereo soundstage we’re working within, identifying the locations of the five most important reference points – hard left, mid-left, centre, mid-right, and hard right – and familiarising ourselves with where those locations appear in the soundstage created by our chosen headphones.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p3"><span class="s1">As discussed in the previous instalment, we know that whatever appears hard left in our headphones will appear at 30° left-of-centre when heard on speakers, and whatever appears hard right in our headphones will appear at 30° right-of-centre when heard on speakers. This allows us to create a ‘panning map’ of where things should be panned in the headphones based on where we want them to appear when/if heard on speakers.</span></p>
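The panning map can be sketched as a simple correspondence between pan position and speaker angle. This assumes the standard ±30° equilateral layout and, for illustration only, a linear mapping; the precise relationship depends on the pan-pot law in use.

```python
def pan_to_speaker_angle(pan):
    """Map a pan position (-1.0 = hard left ... +1.0 = hard right) to the
    approximate angle, in degrees, where that image will appear on a
    standard equilateral speaker setup (+/-30 degrees either side of
    centre). Linear mapping assumed for illustration.
    """
    if not -1.0 <= pan <= 1.0:
        raise ValueError("pan must be between -1.0 and +1.0")
    return pan * 30.0

print(pan_to_speaker_angle(-1.0))  # -30.0 (hard left)
print(pan_to_speaker_angle(0.0))   # 0.0 (centre)
print(pan_to_speaker_angle(0.5))   # 15.0 (mid-right)
```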
<p class="p3"><span class="s1">The second reference track is a musical reference for perspective. This should be a well-engineered recording of a similar style, genre, balance or production as the mix we’re preparing to make. Note here that ‘well-engineered’ <i>actually means</i> ‘well-engineered’ – in other words, something that has been well-recorded, well-mixed and well-mastered. Just because you like it doesn’t mean it is well-engineered; neither does its commercial success or how many awards it has won. If you can hear all of the sounds in the mix clearly at all times, it has probably been recorded, mixed and mastered well. If the vocal and solo performances are the only sounds that can be clearly heard at the times they occur during the mix, you’re listening to a poorly engineered mix that’s been cleverly mastered to keep the listener’s attention focused on the main instruments and away from the poor mix taking place behind them. In professional audio parlance this is known as a ‘polished turd’; mixes like this keep a lot of mastering engineers and multi-band compressor manufacturers/developers in business, but are never good references…</span></p>
<p class="p3"><span class="s1">There are recordings in every genre that are considered ‘well-engineered’, and there are recordings from similar genres that are close enough aesthetically (ie. similar tonalities and balances of individual sound sources, and similar use of effects) to serve as references. As we’ll see later, this reference is something we will be regularly comparing our mix-in-progress against to make sure we are remaining within the tonal and spatial ballpark of the genre’s aesthetic. Hopefully our finished mixes should not require too much corrective work in mastering, thereby freeing up more time for the creative aspects of mastering.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-5467" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5467 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >This allows us to create a panning map…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-3728" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-3728 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  
style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="917" height="645" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi.jpg 917w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-800x563.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-768x540.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/10-pichi-600x422.jpg 600w" sizes="(max-width: 917px) 100vw, 917px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><span class="s1"><b>MIXING WITH HEADPHONES TEMPLATE</b></span></h4>
<p class="p3"><span class="s1">Now that we have our headphone mixing essentials together – good headphones, a goniometer, mono switching, a spectrum analyser with 6dB guide, a stereo imaging reference track, a musical genre reference track, and hopefully a pair of small desktop monitors as described earlier – we need to create a ‘Mixing With Headphones’ template session that we can use for all of our headphone mixing.</span></p>
<p class="p3"><span class="s1">This is essentially an ‘empty’ session file with everything we need in place, so that all we have to do is load the tracks and start mixing – unless we decide to record our session directly into the template.</span></p>
<p class="p1"><span class="s1">We’ll start by setting up a channel strip that we can duplicate as often as we need. We need to configure the channel strip as shown below, with three EQ plug-ins and one compressor plug-in. We will set up the channel strip following the traditional analogue studio approach: plug-ins that create a <i>replacement of the original signal</i> (eg. EQ and compression) are inserted directly into the channel strip, while plug-ins that create something that needs to be <i>mixed with the original signal</i> (eg. delays, echoes and reverberation) are connected via auxiliary sends and brought back into the mix through their own channels where we can EQ them and/or send them to other effects if desired.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="480" height="654" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">The first plug-in is a <i>corrective EQ</i> that is there to clean up any sounds before further processing them. It should be a clean EQ that is not intended to impart any tonality or character of its own on the sound. The emphasis here is on an EQ that is capable and versatile rather than euphonic. A four-band fully parametric EQ with high and low pass filtering and the option to switch the lowest and highest bands to shelving is a good choice here.</span></p>
<p class="p1"><span class="s1">The second plug-in is an <i>enhancing EQ</i> that we will use to ‘create’ the sound we want. This can be an EQ with character to introduce some euphonics into the sound if desired, and it is absolutely okay to start off with the same ‘character’ EQ in every channel strip. Remember, all of the famous analogue mixing consoles throughout history offered their own EQ and it was <em>the same in every channel strip</em>. That didn’t stop anyone from making great records that are still revered today, so don’t get too hung up about having lots of different EQ plug-in options. Leave that distracting bullshit on YouTube where it belongs and get the mix started. You can change the EQ plug-in later if desired, just as we did in the analogue studio world where we would track on a Neve to get that warm musical Neve sound and then mix on an SSL to add that big and macho SSL sound: the best of both worlds, but with only two EQs overall (Neve for tracking, SSL for mixing). </span><span class="s1">“I love the sound of that combination of different EQs, that’s why I bought this record”, said nobody ever — except for sound engineers, recording musicians, and their too-old-for-trainsets YouTubey ilk.</span></p>
<p class="p1"><span class="s1">The third plug-in is a <i>corrective compressor</i>, the sort that has controls for threshold, ratio, attack and release times, and an output level control. As with the <em>corrective EQ</em>, we don’t want something that’s going to add any particular character. We can swap it for something different during the mix if necessary, but to get the mix started we just need something to get the track’s dynamics under control in a predictable manner.</span></p>
<p class="p1"><span class="s1">The fourth plug-in is an <i>integrating EQ</i>. Its job is to help us integrate the sound from the channel strip into the mix’s tonal perspective, and it should be a similar choice to the first EQ because its job is corrective. The detailed application of these four plug-ins will be explained in the following instalment.</span></p>
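The channel strip convention described above – in-place inserts for processes that replace the signal, auxiliary sends for effects that are mixed back in – can be captured as data. A hypothetical sketch (the names are illustrative labels, not any DAW's API):

```python
# Hypothetical 'Mixing With Headphones' channel-strip template.
# Inserts replace the signal in place, processed in order; aux sends feed
# effects that return to the mix through their own channels.
CHANNEL_STRIP = {
    "inserts": [
        "corrective EQ",           # clean-up before further processing
        "enhancing EQ",            # 'character' EQ to create the sound
        "corrective compressor",   # predictable dynamics control
        "integrating EQ",          # fits the sound into the mix's tonal perspective
    ],
    "aux_sends": [
        "reverb",                  # returned on its own channel, EQ-able
        "delay/echo",
    ],
}

print(len(CHANNEL_STRIP["inserts"]))  # 4
```

Duplicating this one configured strip for every track, rather than building each strip from scratch, is the whole point of the template session.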
<p class="p1"><span class="s1">Now that we’ve got the channel plug-ins sorted, we need to get the required metering and monitoring capabilities in place over the mix bus. We want to start with a mono switch, which might be available within the DAW. Ideally, all of the other metering tools will be placed <i>after</i> the mono switch so that we can <em>see</em> the effect of the mono switch in the metering, rather than just hearing it. </span><span class="s1">We also need a goniometer, a spectrum analyser with 6dB guide, and bus metering that shows levels with LUFS and dBTP.</span></p>
<p class="p3"><span class="s1">Insert the mono switch (if there isn’t already one in place on the stereo bus of your mixing console or DAW), the goniometer, the spectrum analyser with 6dB guide and the metering over the stereo mix bus where they are constantly monitoring whatever we’re hearing. They’ll show us the mix when we’re mixing, and they’ll show us individual tracks when we’re soloing.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="642" height="683" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi.jpg 642w, https://www.audiotechnology.com/wp-content/uploads/2023/10/12-pichi-600x638.jpg 600w" sizes="(max-width: 642px) 100vw, 642px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">iZotope’s Ozone has always been a good choice for this type of stereo mix bus metering/monitoring because it contains a goniometer, a spectrum analyser with a 6dB guide, mono switching, and excellent level metering capabilities. Most of these metering tools will work even if the processing is bypassed or switched off, meaning they are just metering tools and won’t have any impact on our mixes unless we want them to. Other plug-in manufacturers make goniometers, stereo/mono switches and spectrum analysers with 6dB guides, so if you don’t have Ozone – or don’t like how much screen space it consumes – rifle through your arsenal of plug-ins to see what’s there.</span></p>
<p class="p3"><span class="s1">Load your reference tracks into the top tracks of your DAW. (If they have a different sampling rate than your mix you will need to run them through a sample rate conversion before loading them into the session.) These are both stereo signals and each will therefore require a stereo track (or two mono tracks panned hard left and hard right) from your DAW. Load the stereo imaging track into the first stereo track of the mixing template, and the musical reference track into the second stereo track of the mixing template. </span><span class="s1">Using clip gain or a gain plug-in, adjust the individual levels of these reference tracks so that, when their faders are at 0dB and the track is solo’d, each track’s metered level sits at or around your mixing reference level on the stereo mix bus (typically -20dBFS or 0dBVU) and therefore plays back at your calibrated monitoring level of around 80dB SPL (assuming you are monitoring at your calibrated level as described in the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">previous instalment</a></strong></span>).</span></p>
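The clip-gain adjustment described above can be calculated rather than hunted by ear: measure the track's RMS level in dBFS and take the offset to the reference level. A minimal sketch, assuming full-scale samples of ±1.0 and the -20dBFS reference mentioned above:

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of samples (full scale = 1.0) in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def gain_to_reference(samples, target_dbfs=-20.0):
    """Clip gain (in dB) needed to bring the block to the target level."""
    return target_dbfs - rms_dbfs(samples)

# A full-scale square wave measures 0dBFS, so it needs -20dB of clip gain
# to sit at the -20dBFS mixing reference level.
print(gain_to_reference([1.0, -1.0] * 100))  # -20.0
```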
<p class="p3"><span class="s1">These tracks should be the first things you listen to before starting the mix – one after the other – and will acclimatise your listening to the imaging of your headphones, how they reproduce the desired tonality of the mix, and how loud you should be working. After those initial listens, these tracks will stay muted during your mixing session but will always be ready to cross-reference with a press of the return key, a touch of the solo button and perhaps a bit of fiddling with the mute key.</span></p>
<h4 class="p3"><span class="s1"><b>BRING IT ON…</b></span></h4>
<p class="p3"><span class="s1">With the ‘Mixing With Headphones’ template we have ready access to a stereo imaging reference track for determining where panned images should appear in our headphones, and a musical reference track for checking how our mix decisions compare to a known and relevant reference. We also have the goniometer to show which parts of our mix might sound weird when heard through speakers, a mono switch to check if problems seen on the goniometer will result in any audible effect, and the 6dB guide to keep us from wandering too far from the acceptable mix tonality track. We can now load all of our audio tracks into the session template – if they’re not already there – and start mixing.</span></p>
<p class="p3"><span class="s1">In the next instalment of this series we’ll look at some important considerations for mixing with headphones, along with mixing procedures and techniques that will help to land our mixes within five minutes of mastering…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #cb0c5a;text-align: left;font-family:Source Sans Pro;font-weight:700;font-style:italic" class="vc_custom_heading" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Next instalment: Mixing With Headphones 4</a></h2><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1698098589251 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p3"><strong><span class="s1">IMPEDANCE, POWER, SENSITIVITY &amp; SPL</span></strong></h4>
<p class="p3"><span class="s1">The headphones’ <i>sensitivity</i> tells us how much SPL they will generate for a given amount of power from the amplifier; more power <i>into</i> the headphones means more SPL <i>out of</i> the headphones. However, contrary to popular assumption, we cannot <i>force</i> power into headphones (or any other electrical circuit, for that matter). An amplifier’s power rating only tells us how much power it is able to <i>provide</i>; it is up to the <i>load</i> (speaker, headphones, whatever) to take the power it needs from the amplifier – up to the maximum the amplifier can provide. [Things start going wrong when the load tries to take more power than the amplifier can provide, which is like forcing one horse to pull a cart that requires two horses. More about that later…]</span></p>
<p class="p3"><span class="s1">When it comes to headphones, the power provided by the amplifier into the headphones is the product of the <i>voltage</i> at the output of the amplifier and the <i>current</i> drawn from the amplifier by the headphones – which is determined by their <i>impedance</i>. The relationship between <i>current</i>, <i>voltage</i> and <i>impedance</i> is shown in the formula below, which has been adapted from Ohm’s Law and modified to apply to headphones.</span></p>
<p class="p3"><span class="s1">I = V / Z</span></p>
<p class="p3"><span class="s1">Where V is the signal voltage at the output of the amplifier in Volts RMS, Z is the impedance of the headphones in Ohms, and I is the current that the headphones will draw from the headphone amplifier in Amps RMS.</span></p>
<p class="p3"><span class="s1">From this formula we can see that for any given voltage (V), reducing the impedance (Z) increases the current (I).</span></p>
<p class="p3"><span class="s1">The following formula shows how the voltage presented by the amplifier, and the resulting current drawn from the amplifier by the headphones, collectively determine the electrical power used by the headphones:</span></p>
<p class="p3"><span class="s1">P = V x I</span></p>
<p class="p3"><span class="s1">Where P is the power consumed by the headphones in Watts Continuous, V is the voltage at the output of the amplifier in Volts RMS, and I is the current drawn by the headphones in Amps RMS.</span></p>
<p class="p3"><span class="s1">From this formula we can see that there are two ways to increase the power consumed by the headphones: one is to increase the voltage, the other is to increase the current. With low voltage battery-powered devices there is a limit to how high we can increase the voltage (ie. the battery voltage is the maximum available without resorting to voltage multiplier circuits); beyond that, we have to increase the current. The only way we can increase the current under these circumstances is to lower the impedance of the headphones, because I = V / Z.</span></p>
<p class="p3"><span class="s1">As the formulae above show, for any given voltage, a lower headphone impedance draws more current and therefore takes more power from the amplifier. With a bit of mathematical substitution and transposition, we can summarise the above formulae and explanations with the following formula:</span></p>
<p class="p3"><span class="s1">P = V</span><span class="s3"><sup>2</sup></span><span class="s1"> / Z</span></p>
<p class="p3"><span class="s1">Where P is the power in Watts Continuous, V is the voltage in Volts RMS, and Z is the impedance in Ohms. This formula makes it clear that, for any given voltage (V) coming out of the amplifier, lowering the impedance of the headphones (Z) results in more power (P).</span></p>
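The formulae above can be checked numerically. A minimal sketch using the sidebar's own symbols (the 32Ω and 250Ω figures are illustrative values, not from the article):

```python
def current_amps(v_rms, z_ohms):
    """I = V / Z: current drawn by the headphones from the amplifier."""
    return v_rms / z_ohms

def power_watts(v_rms, z_ohms):
    """P = V x I, which substitutes to P = V^2 / Z."""
    return v_rms ** 2 / z_ohms

# 1V RMS into low-impedance versus high-impedance headphones: for the same
# voltage, the lower impedance draws more current and takes more power.
print(power_watts(1.0, 32))   # 0.03125 W (31.25mW)
print(power_watts(1.0, 250))  # 0.004 W (4mW)
```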
<p class="p3"><span class="s1">The headphones’ <em>sensitivity</em> tells us how efficiently they will convert the power they take from the amplifier into SPL. There are two ways a headphone manufacturer can specify sensitivity. One way is to express it as SPL for a given power, such as 100dB/mW, which means 1mW (0.001W) of power will produce an SPL of 100dB. The other way is to express it as SPL for a given voltage, such as 100dB/V, which means if 1V RMS was applied to the headphones they would produce an SPL of 100dB (assuming the amplifier can provide sufficient current). If we know the appropriate electrical and decibel formulae we can easily convert between the two different types of sensitivity ratings; thankfully we don’t need to do that for the purposes of this discussion.</span></p>
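Although the article sidesteps the conversion, estimating SPL from a dB/mW rating follows directly from P = V²/Z: the power in milliwatts is 1000·V²/Z, and SPL scales with 10·log₁₀ of the power ratio relative to 1mW. A sketch under those standard assumptions (and assuming the amplifier can supply the current):

```python
import math

def spl_from_dbmw(sens_db_per_mw, v_rms, z_ohms):
    """Estimated SPL for headphones rated in dB/mW, driven at v_rms volts.

    Power in milliwatts is 1000 * V^2 / Z (from P = V^2 / Z), and SPL
    rises by 10*log10 of the power ratio relative to the 1mW reference.
    """
    p_mw = 1000.0 * v_rms ** 2 / z_ohms
    return sens_db_per_mw + 10 * math.log10(p_mw)

# 100dB/mW headphones at 250 ohms driven from a 1V source: 4mW of power,
# which is 10*log10(4) = ~6dB above the 1mW reference, so ~106dB SPL.
print(round(spl_from_dbmw(100.0, 1.0, 250.0), 1))  # 106.0
```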
<p class="p3"><span class="s1">In low voltage situations such as the headphone sockets in battery-powered devices, lower impedance and higher sensitivity are both desirable traits for headphones. The lower impedance results in more electrical power going <em>into</em> the headphones, and the higher sensitivity results in more SPL coming <em>out of</em> the headphones.</span></p>
<p class="p3"><span class="s1">Headphones with low sensitivity <em>and</em> high impedance are the most difficult to drive to useful SPLs when working with low voltage battery-powered devices. The result is, at best, insufficient SPL. It is also common in this situation to experience reduced low frequency reproduction (low frequencies contain the most energy and therefore require the most power, and the low-voltage headphone amplifier cannot provide it), causing us to compensate by adding too much low frequency energy to the mix. In extreme situations the sound from the headphones will feel ‘restrained’ and ‘compressed’, particularly in the low frequencies, and in worst-case scenarios it will be distorted. If you’re experiencing these situations when using a laptop’s headphone socket it means your headphones’ impedance is too high and/or their sensitivity is too low; in either case, the headphones require more power than the amplifier is able to provide. You’re going to need an external amplifier (eg. one that is built into an interface, or a dedicated headphone amplifier) or switch to headphones with higher sensitivity and/or lower impedance.</span></p>
<p class="p3"><span class="s1">Although there is no clearly defined threshold between low impedance and high impedance values for headphones, Apple (the most used brand of headphones in the USA at the time of this writing) provides a useful <span style="color: #333399;"><strong><a style="color: #333399;" href="https://support.apple.com/en-us/HT212856">reference</a></strong></span> based around a threshold of 150 ohms. They have been addressing the ‘high impedance headphone problem’ in their laptops and desktops since 2021, using an adaptive headphone amplifier circuit that senses the impedance of the connected headphones and adjusts the signal voltage accordingly (up to 1.25V RMS for impedances lower than 150 ohms, and up to 3V RMS for impedances above 150 ohms). Among other things, this <i>should</i> remove the need for an external headphone amplifier or interface when mixing on-the-go using high impedance headphones with MacBook Pro and MacBook Air laptops. That’s one less thing to carry around, connect, and balance on our laps. Winner, winner, chicken dinner&#8230;</span></p>
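<p class="p3"><span class="s1">Working from a dB/V rating, the SPL available from a given supply voltage is simply the rating plus 20 × log<sub>10</sub>(V). The sketch below uses the two supply voltages quoted above; the 97dB/V, 300 ohm headphones are hypothetical:</span></p>

```python
import math

def max_spl(sens_db_v, volts_rms):
    """SPL produced by a given RMS voltage, from a dB/V sensitivity rating."""
    return sens_db_v + 20 * math.log10(volts_rms)

# Hypothetical 300-ohm headphones rated at 97dB/V:
print(round(max_spl(97, 1.25), 1))  # 98.9  (from a 1.25V RMS supply)
print(round(max_spl(97, 3.0), 1))   # 106.5 (from a 3V RMS supply)
```

The extra voltage buys roughly 7.5dB of headroom for these hypothetical headphones, which is the difference between ‘restrained’ and comfortable.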

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="987" height="619" src="https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13b-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi.jpg 987w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-800x502.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-768x482.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/10/13b-pichi-600x376.jpg 600w" sizes="(max-width: 987px) 100vw, 987px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_inner vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p><span class="s1">In a strange but reassuring twist, the company that led the way in removing headphone sockets from smart phones (where the physical freedom of a wireless connection makes sense for commuters) is leading the way with headphone amplifiers in their laptops and desktops (where the codec-free sound quality and zero latency of a wired connection makes sense for creators).</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Mixing With Headphones 3</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-3/feed</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 2</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Fri, 18 Aug 2023 00:26:26 +0000</pubDate>
				<category><![CDATA[Issue 89]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 2]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=76894</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Mixing With Headphones 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">In the <span style="color: #333399;"><strong><a style="color: #333399;" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">first instalment</a></strong></span> of this six-part series we explored the ascent of headphone listening, culminating in the current situation where headphone listening has supplanted speaker listening for the vast majority of music purchasing decisions and active music consumption. As audio professionals we would be foolish to underrate the significance of headphones in our mixing and monitoring decisions, but how do we reduce our reliance on an institutionalised technology – speakers – that has ultimately become irrelevant to the majority of the music consuming market? We can’t simply announce that we’re abandoning speakers for headphones, because there are significant differences between mixing through speakers and mixing through headphones.</span></p>
<p><span class="s1">Speaker reproduction brings a lot of changes to our mix; what we hear from the speakers is <em>not</em> what is coming out of the mixing console or DAW. The sound we hear at our monitoring position has had the frequency response and distortion of our speakers embedded into it, the acoustics of our listening room superimposed upon it, and possibly has comb-filtering introduced to it due to reflections off our work surfaces.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>If the tonality of your mixes does not translate acceptably to other speakers outside of your mixing room, it means your mixing decisions are being influenced by frequency response issues coming from your monitors and/or your mixing room’s acoustics. If your mixes have significantly different levels of reverberation and/or strange panning issues when heard on other speakers outside of your mixing room, it means your mixing decisions are being influenced by your mixing room’s reverberation and first order reflections. <span class="s1">If the tonality of some sounds within your mix (particularly the snare) changes significantly when you lean forward or backward from your normal mixing position it means you’ve got comb filtering off your work surface, and you’re wasting your time trying to find the right mixing position because it probably doesn’t exist…</span></p>
<p><span class="s1">Most contemporary monitor speakers provide acceptable performance within their intended bandwidth, which means the problems described above are <em>not</em> caused by your monitors, and therefore buying new monitors is <em>not</em> the solution (unless you’re living in that acoustic fantasy world where gut-shaking dance-floor subsonics come from shoebox-sized desktop speakers). T</span><span class="s1">he smart solution is to seek the advice of an acoustician. </span><span class="s1">Alternatively, you could stop relying on big monitor speakers and the acoustic treatments they require, and switch to mixing on headphones – which is, coincidentally, what this series is all about. So read on…</span></p>
<h4 class="p1"><strong><span class="s1">VARIABLE COMPENSATIONS</span></strong></h4>
<p class="p1"><span class="s1">Speaker listening introduces a lot of variables that don’t exist with headphone listening. Compensating for those variables with the tiny on-going tweaks and refinements that take place during the course of a mix – in response to cross-referencing with other speakers, changing seating posture, feedback from others inside the room but outside of the sweet spot, returning to the mix after a break, and so on – tends to make our mixes more resilient and thereby improves their translation across numerous playback systems.</span></p>
<p class="p1"><span class="s1">A mix made <em>only</em> on speakers will usually need very little tweaking to sound ‘right’ when heard through headphones, even though it might not take advantage of all that headphones have to offer. A mix made <em>only</em> on headphones can take advantage of all that headphones have to offer, but will often need considerable tweaking to sound ‘right’ when heard through speakers.</span></p>
<p class="p1"><span class="s1">How can we make ‘market relevant’ mixes that exploit headphones’ strengths without losing the ‘tweaking-for-the-variables’ benefits that speaker mixing introduces? We can start by understanding a) how human hearing works, b) what we hear and feel when listening to speakers, and c) what we <em>don’t</em> hear and feel when listening to headphones…</span></p>
<h4 class="p1"><span class="s1"><b>HOW DOES HUMAN HEARING WORK?</b></span></h4>
<p class="p1"><span class="s1">Human beings have two ears, one on either side of the head, to capture two slightly different versions of the same sound. The ear/brain system uses the differences between these two versions of the same sound to determine where that sound is coming from in a process called ‘localisation’.</span></p>
<p class="p1"><span class="s1">The illustration below shows a listener receiving sound information from a sound source located to the left of centre. There are three ‘difference’ mechanisms the ear/brain system uses to localise the sound source, and, for this example, they all occur because the right ear is further from the sound source than the left ear.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-6273" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6273 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >…what we hear from the speakers is not what is coming out of the mixing console or DAW.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-9816" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-9816 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="806" height="627" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎01-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi.jpg 806w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-800x622.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-768x597.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎01-pichi-600x467.jpg 600w" sizes="(max-width: 806px) 100vw, 806px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Firstly, the sound will arrive at the right ear a short time after it arrives at the left ear, creating an Interaural Time Difference (ITD) – which is sometimes referred to as an Interaural Phase Difference (IPD), particularly at lower frequencies where the wavelength is longer than the width of the average human head and therefore the time difference occurs within one cycle.</span></p>
<p class="p1"><span class="s1">Secondly, the signal arriving at the right ear has travelled further than the signal arriving at the left ear and will therefore have a lower SPL due to the Inverse Square Law. This creates an Interaural Amplitude Difference (IAD) – which is sometimes referred to as an Interaural Level Difference (ILD).</span></p>
<p class="p1"><span class="s1">Thirdly, because the signal at the right ear travels across the listener’s face and enters the right pinna from a different angle than it enters the left pinna, the signal arriving at the right ear will have a different frequency spectrum than the signal arriving at the left ear due to ‘acoustic shadowing’ of the head, hair absorption, skin reflections, diffraction across the face, and the numerous comb filters and cavity resonances introduced by the pinna. All of these result in an Interaural Spectral Difference (ISD).</span></p>
<p class="p1"><span class="s1">Collectively, the ITDs, IADs and ISDs are referred to as ‘HRTFs’ (Head Related Transfer Functions), because they represent the changes imposed on the signal as it passes around the listener’s head and into their ears. The ear/brain system uses the differences between the left and right HRTFs to determine where a sound is coming from, i.e. to localise it.</span></p>
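<p class="p1"><span class="s1">As a rough illustration of the ITD, the classic Woodworth approximation models the head as a rigid sphere: ITD ≈ (r/c) × (θ + sin θ), where r is the head radius, c is the speed of sound and θ is the source azimuth. This model and the 8.75cm head radius below are common textbook assumptions, not figures from this article:</span></p>

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c_m_per_s=343.0):
    """Woodworth approximation of the Interaural Time Difference for a
    distant source: ITD = (r/c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_per_s) * (theta + math.sin(theta))

print(round(itd_seconds(90) * 1e6))  # 656 (microseconds, fully lateral source)
print(itd_seconds(0))                # 0.0 (dead centre: no time difference)
```

Sub-millisecond differences like these are all the ear/brain system needs to localise a source left or right.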
<h4 class="p1"><span class="s1"><b>Loudness vs Frequency</b></span></h4>
<p class="p1"><span class="s1">An important quirk of human hearing is that its sensitivity to individual frequencies changes with the SPL. At lower SPLs we are less sensitive to low and high frequencies than we are to midrange frequencies, while at higher SPLs our sensitivity to the low and high frequencies increases significantly.</span></p>
<p class="p1"><span class="s1">This behaviour is shown in the graph below, which contains a number of ‘Equal Loudness Contours’. Each contour uses a 1kHz tone at a stated SPL as a reference, and shows how much SPL is required for other frequencies to be perceived as being ‘equally as loud’ as the 1kHz reference.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="672" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎02-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi-600x594.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎02-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Each contour is labelled with a Phon value, which represents the SPL of the 1kHz reference tone. For example, the 80 Phon contour shows the SPLs required for different frequencies to be perceived as being ‘equally as loud’ as a 1kHz tone that is being reproduced at 80dB SPL. As shown in the graph below, 125Hz will need an SPL of approximately 89dB to be perceived as being ‘equally as loud’ as 1kHz at 80dB SPL. Similarly, 8kHz would need an SPL of approximately 92dB to be perceived as being ‘equally as loud’ as 1kHz at 80dB SPL.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎03-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>To put it another way, let’s say we had three separate sine wave oscillators: one generating a 1kHz tone, one generating a 125Hz tone and one generating an 8kHz tone. If the 1kHz oscillator’s output was adjusted to provide an SPL of 80dB, the 125Hz oscillator’s output would need to be 9dB higher than the 1kHz oscillator to sound like it is ‘equally as loud’, and the 8kHz oscillator’s output would need to be 12dB higher than the 1kHz oscillator to sound like it is ‘equally as loud’. So a 125Hz tone at 89dB SPL, a 1kHz tone at 80dB SPL and an 8kHz tone at 92dB SPL will all have ‘equal loudness’ – but those differences only apply when we’re on the 80 Phons curve (i.e. 1kHz at 80dB SPL). If we change the SPL of the 1kHz tone, the <em>differences</em> required for other frequencies to sound ‘equally as loud’ will also change, as seen by the differing shapes of the Equal Loudness Contours. If they were all the same shape we wouldn’t have to think about how our mix will translate to different playback levels…</p>
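<p>The arithmetic of that three-oscillator example can be captured in a few lines. The offsets are the approximate values read from the 80 Phon contour, and they are only valid on that contour:</p>

```python
# Approximate SPL offsets (dB relative to 1kHz) read from the 80 Phon contour:
OFFSETS_80_PHON = {125: 9, 1000: 0, 8000: 12}

def spl_for_equal_loudness(freq_hz, ref_spl_1khz=80):
    """SPL at which freq_hz sounds 'equally as loud' as 1kHz at ref_spl_1khz.
    Only valid on the contour the offsets were read from (80 Phons here)."""
    return ref_spl_1khz + OFFSETS_80_PHON[freq_hz]

print(spl_for_equal_loudness(125))   # 89
print(spl_for_equal_loudness(8000))  # 92
```

Change the reference level and the whole offset table changes with it, which is precisely why mixes don't translate across playback levels.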

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="724" height="567" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎04-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi.jpg 724w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎04-pichi-600x470.jpg 600w" sizes="(max-width: 724px) 100vw, 724px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p class="p1"><span class="s1">The Equal Loudness Contours show us that as the SPL gets higher, the ear becomes more sensitive to low and high frequencies. This means that a mix made at a high monitoring level will sound lacking in low and high frequency energy when heard at a low monitoring level, and a mix made at a low monitoring level will have excessive low and high frequency energy when heard at a higher monitoring level. In other words, the frequency spectrum and tonal balance of our mixes are affected by the monitoring level used when mixing.</span></p>
<p class="p1"><span class="s1">The illustration below shows what happens if we mix at a high monitoring level but play back at a low monitoring level. The blue contour represents the balance of frequencies in a mix made at a high monitoring level of 100 Phons, and the red contour represents how much energy is needed for that mix to have the same perceived frequency balance (i.e. sound the same) when heard at a low monitoring level of 40 Phons. </span>Any frequencies on the blue contour that are <em>below</em> the red contour will be <em>quieter</em> than intended in the mix, and any frequencies on the blue contour that are <em>above</em> the red contour will be <em>louder</em> than intended in the mix.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="671" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi-600x593.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/05-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>From the graph above we can see that a mix made at a high monitoring level will be seriously lacking in low and high frequency energy if heard at a low monitoring level. As a matter of interest, this is <em>exactly</em> why some hi-fi systems have a &#8216;loudness&#8217; button: it boosts the low and high frequencies in a way that looks very similar to the differences between the blue and red contours shown above, allowing the music to be heard at a very low level (to avoid waking up the family, for example) while still having a sufficient <em>perceived</em> balance of low and high frequencies.</p>
<p class="p1"><span class="s1">The same problem occurs the other way around, as shown below. </span><span class="s1">The blue contour represents the balance of frequencies in a mix made at a low monitoring level of 40 Phons, and the red contour represents how much energy is needed for that mix to have the same perceived frequency balance (i.e. sound the same) when replayed at a high monitoring level of 100 Phons. </span>Any frequencies on the blue contour that are <em>above</em> the red contour will be <em>louder</em> than intended in the mix, and any frequencies on the blue contour that are <em>below</em> the red contour will be <em>quieter</em> than intended in the mix. We can see that a mix made at a low monitoring level will have excessive low and high energy if replayed at a high monitoring level.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="717" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎06-pichi-600x634.jpg 600w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Neither of the mixes shown above will translate well to different playback levels – even through the monitors they were mixed on – and both will require corrective EQ in mastering or perhaps even a re-mix, meaning more time and more cost. Mixing loud is proud, mixing quiet is polite, but in both cases you’re mixing on credit: it feels good now but you’ll be paying for it later because for everything else there’s mastering card…</span></p>
<p class="p1"><span class="s1">It’s also worth noting that over a long day of mixing our hearing mechanisms become tired, or ‘fatigued’, and this causes us to inadvertently turn up the monitoring level so that things continue to sound exciting. Although our hearing mechanisms suffer from fatigue, the equal loudness contours remain the same, so by increasing the monitoring level we are inadvertently shifting our hearing ‘baseline’ up to a higher equal loudness contour – incurring all the problems that come with that. If you were to mix five songs over a 15-hour day in the studio, it would not be surprising to find that during playback the next day (with rested hearing) the first mix made the day before sounds good, but the last mix is considerably lacking in low and high frequencies. Why? Because the monitoring level crept up throughout the day the mixes were made, so the last mix was made on a very different equal loudness contour than the first.</span></p>
<p class="p1"><span class="s1">For these reasons, professional audio facilities calibrate all of their monitoring systems to a standard SPL, typically somewhere around 80 Phons (e.g. the monitoring volume control is adjusted so that a 1kHz tone at -20dBFS or 0dB VU on the stereo bus creates an SPL of 80dB at the monitoring position), which helps to maintain spectral consistency from mix to mix and within a range of playback levels from about 60 Phons to 100 Phons.</span></p>
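The calibration arithmetic described above can be sketched in a few lines of Python (a minimal illustration; the function name is mine, while the 80dB SPL at -20dBFS figures come from the text):

```python
# Expected SPL for a given metered level under an 80dB SPL @ -20dBFS
# monitoring calibration. A sketch, not a metering implementation.
CAL_SPL = 80.0      # SPL produced by the reference tone at the mix position
CAL_DBFS = -20.0    # metered level of the reference tone (approx. 0dB VU)

def expected_spl(level_dbfs, cal_spl=CAL_SPL, cal_dbfs=CAL_DBFS):
    """Return the SPL a signal metered at level_dbfs should produce."""
    return cal_spl + (level_dbfs - cal_dbfs)

print(expected_spl(-20.0))  # reference tone -> 80.0 dB SPL
print(expected_spl(0.0))    # full-scale peak -> 100.0 dB SPL
```

If the calibrated level doesn't feel loud enough, this relationship is what tells you whether the mix is metering low or your ears are fatigued.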

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="705" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎07-pichi-600x623.jpg 600w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The illustration above shows how a mix made at a monitoring level of 80 Phons (blue) translates to higher monitoring levels (100 Phons, green) and lower monitoring levels (60 Phons, red). In both cases, the perceived differences above 500Hz are insignificant. Below 500Hz, when the 80 Phons mix is replayed at 100 Phons there will be a gradual increase in perceived low frequency energy (rising to +10dB at 31.5Hz), and when the 80 Phons mix is replayed at 60 Phons there will be a gradual decrease in perceived low frequency energy (falling to -10dB at 31.5Hz). These changes are not ideal, but they’re acceptable considering they occur over a range of 40dB (from 60 Phons to 100 Phons) and retain very high consistency above 500Hz throughout that range.</p>
<p class="p1"><span class="s1">80 Phons or thereabouts is also a good level for minimising short-term hearing fatigue and long-term hearing damage, and acts as a warning sign: if the calibrated monitoring volume is not feeling loud enough during a long session, it means either a) the engineer’s hearing is becoming fatigued and it’s time to take a break, or b) the metered level of the mix is lower than the chosen calibration level (typically averaging -20dBFS or 0dB VU) and should be adjusted or compensated for accordingly.</span></p>
<p class="p1"><span class="s1">Using a calibrated monitoring level streamlines the entire process from recording to mastering, improves ‘mix confidence’ and translation, and removes the dreaded ‘cold light of day’ disappointment, i.e. the mix that sounded amazing at the end of a long day in the studio sounds underwhelming and disappointing when heard the next morning through fresh ears and at a more civilised (i.e. lower) playback level.</span></p>
<p><span class="s1">We should always be aware of our monitoring levels, regardless of whether we’re using speakers or headphones. More about that in the final instalment of this series…</span></p>
<p class="p1"><span class="s1">As a matter of interest, turning the Equal Loudness Contours upside down, as shown below, allows them to be considered as statistically averaged frequency response graphs of the human ear. This makes it easier to see how the frequency sensitivity of human hearing changes with the SPL.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="679" height="672" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi.jpg 679w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi-600x594.jpg 600w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎08-pichi-100x100.jpg 100w" sizes="(max-width: 679px) 100vw, 679px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4 class="p1"><span class="s1"><b>WHAT DO WE HEAR WITH SPEAKERS?</b></span></h4>
<p class="p1"><span class="s1">The illustration below shows the correct configuration for stereo reproduction through speakers, where the acoustic centres of the monitor speakers form two points of an equilateral triangle, and the listener is aligned with the third point. The stereo image is therefore capable of extending across 60° in front of the listener (±30° either side of centre).</span></p>
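The geometry of that equilateral triangle is easy to verify numerically (a sketch under the article's ±30° convention; the function name and coordinate frame are mine):

```python
import math

# Listener at the origin facing +y; speakers placed at ±30° from centre
# at listening distance d (metres), per the equilateral triangle above.
def speaker_positions(d):
    angle = math.radians(30.0)
    left  = (-d * math.sin(angle), d * math.cos(angle))
    right = ( d * math.sin(angle), d * math.cos(angle))
    return left, right

left, right = speaker_positions(2.0)
# Speaker-to-speaker spacing equals the listening distance
# (2 * d * sin(30°) = d), confirming the triangle is equilateral.
spacing = right[0] - left[0]
```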

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="634" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎09-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-800x632.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-768x606.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎09-pichi-600x474.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The left speaker provides the IADs and ITDs that are embedded into the mix for the left ear, and the right speaker provides the IADs and ITDs that are embedded into the mix for the right ear. A listener in the proper monitoring position receives these signals in the correct relationships to re-construct the stereo image(s) contained within the recording.</span></p>
<h4 class="p1"><strong><span class="s1">Phantom Images</span></strong></h4>
<p class="p1"><span class="s1">Sound sources are easily localised anywhere in the space between the speakers in a process known as <em>phantom imaging</em>. In the example shown below, the sound source is perceived as coming from the left of centre but there is no sound source in that location. The localised sound is, therefore, a <em>phantom image</em>. When you can hear a sound source where you cannot see one in a stereo image (e.g. a vocal directly in the centre of a stereo system), you are hearing a phantom image. The ability to create a phantom image is the very core of creating a stereo soundstage; without it we’d just have sounds coming from hard left and hard right.</span></p>
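At mixing time, the pan pot is what places a source at a phantom position by splitting one mono signal between the channels. A common approach is the constant-power (sin/cos) pan law, sketched below (the function name and pan-value convention are mine):

```python
import math

# Constant-power pan law: pan = -1 (hard left), 0 (centre), +1 (hard right).
# The gain pair creates the interaural amplitude difference (IAD) that
# the ear/brain system interprets as a phantom image position.
def constant_power_pan(pan):
    theta = (pan + 1.0) * math.pi / 4.0   # map -1..+1 onto 0..pi/2
    left, right = math.cos(theta), math.sin(theta)
    return left, right

l, r = constant_power_pan(0.0)
# Centre: equal gains of cos(pi/4) ~ 0.707 (-3dB each), and the summed
# power l**2 + r**2 stays at 1.0 for every pan position.
```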

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎10-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">[With the right microphone techniques and/or processing, we can make a sound source appear right in front of our noses, a long way back behind the speakers, or even extend out beyond the sides of the speakers. This type of illusion is also easily created with binaural recordings, but only when they are heard through headphones.]</span></p>
<h4 class="p1"><span class="s1"><b>Interaural Crosstalk &amp; More…</b></span></h4>
<p class="p1"><span class="s1">Because there is no acoustic isolation between the two speakers, some of the sound intended for the left ear will reach the right ear, and vice versa. This creates a form of interaural crosstalk.</span></p>
<p class="p1"><span class="s1">Every stereo mix will contain IADs due to the use of the pan pot and/or panning effects, and it will also have IADs if it has any stereo tracks that were recorded by a coincident (e.g. XY) or near-coincident (e.g. ORTF) pair of microphones. Likewise, every stereo mix will contain ITDs due to the use of stereo time-based effects processors (reverb, delay, etc.), and it will also have ITDs if it has any stereo tracks that were recorded by near-coincident (e.g. ORTF) and/or widely spaced microphone pairs (e.g. AB, drum overheads).</span></p>
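The ITDs captured by a spaced pair can be estimated with the standard far-field approximation, delay = spacing × sin(angle) / speed of sound (a sketch; the function name is mine):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20°C

def spaced_pair_itd(spacing_m, angle_deg):
    """Arrival-time difference (seconds) between two spaced microphones
    for a distant source angle_deg off-centre (far-field approximation)."""
    return spacing_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# A 40cm AB pair with a source 30° off-centre arrives roughly 0.58ms
# earlier at the nearer microphone; a centred source gives zero ITD.
itd = spaced_pair_itd(0.4, 30.0)
```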

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/11-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Interaural crosstalk allows the left and right channel IADs and ITDs to blend in the air, reducing the audibility of the differences between them and thereby reducing the perceived width of the stereo image. It can also cause perceived comb filtering if the mix itself contains any delays of 25ms (0.025s) duration or less between the channels.</span></p>
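Where those comb-filter notches land is simple to predict: summing a signal with a delayed copy of itself cancels at odd multiples of 1/(2 × delay). A quick sketch (the helper name is mine):

```python
# Notch frequencies created when a delayed copy of a signal is summed
# with the original: first notch at 1/(2*delay), then odd multiples.
def comb_notch_frequencies(delay_s, count=3):
    return [(2 * k + 1) / (2.0 * delay_s) for k in range(count)]

# A 1ms interchannel delay puts notches at 500Hz, 1500Hz and 2500Hz:
notches = comb_notch_frequencies(0.001)
```

Shorter delays push the first notch higher in frequency, which is why only delays of a few tens of milliseconds or less produce audible comb filtering rather than a discrete echo.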
<h4 class="p1"><span class="s1"><b>The Speakers &amp; The Room</b></span></h4>
			<p class="p1"><span class="s1">Another challenge arises because each speaker remains a distinct sound source and therefore creates its own ITDs and IADs, ultimately telling the ear/brain system that there are really only two sound sources – cues that conflict with the ITDs and IADs embedded into the mix. These side-effects, and the imaging changes caused by interaural crosstalk described above, are inherently compensated for when mixing through speakers because each signal’s level and panning is adjusted as required to sound right.</span></p>
<p class="p1"><span class="s1">Listening through speakers also introduces the possibility of first-order reflections from nearby surfaces in the listening space that will be superimposed over the playback and ultimately confuse any spatial information from the recording itself – such problems can greatly interfere with panning decisions. Most listening environments will also have some reverberation of their own, which ultimately affects the levels of reverberation we add to the mix (more about that below). </span><span class="s1">Room reflections and reverberation are addressed with acoustic treatment in a studio control room or in a mixing room, but can be a problem with general listening through speakers outside of the studio environment.</span></p>
<p class="p1"><span class="s1">If we introduce enough spatial information (reverberation, etc.) into our mixes the sonic presence of the speakers and the room becomes insignificant – assuming the room is acoustically acceptable to begin with. </span><span class="s1">A good mix transcends the speakers and the room, hopefully invoking a ‘willing suspension of disbelief’ – that feeling when a mix somehow transports you to another place, dimension or world where you cannot see the man behind the curtain.</span></p>
<h4 class="p1"><span class="s1"><b>Visceral Impact</b></span></h4>
<p class="p1"><span class="s1">The word ‘viscera’ refers to the soft internal organs of the human body: the lungs, heart, digestive organs, reproductive organs and so on. Therefore, ‘visceral impact’ refers to the impact the sound or mix has on the soft internal organs of our bodies; in other words, how we physically ‘feel’ the sound. Low frequency sounds have the longest wavelengths and generally the highest energy of all sounds in a mix, and therefore provide the most visceral impact.</span></p>
<p class="p1"><span class="s1">It is often said that low frequencies stimulate the adrenal glands (located above the kidneys), causing them to generate the hormone ‘adrenaline’ which is responsible for making us want to move and dance when listening to music. However, there is little research to substantiate this. If adrenaline due to visceral impact was a factor required for dancing, then silent discos, silent raves and similar events – which are all based on people dancing to music heard through headphones – would not exist.</span></p>
<h4 class="p1"><span class="s1"><b>WHAT DON’T WE HEAR WITH HEADPHONES?</b></span></h4>
<p class="p1"><span class="s1">Headphone listening differs from speaker reproduction much more significantly than most people assume. There are no listening room acoustics to alter the frequency response, there is no interaural crosstalk to mess with the stereo imaging and introduce acoustic comb filtering in the space between the speakers, and there is no visceral impact to add an enhanced/exaggerated sense of excitement.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">What we hear when mixing with headphones is the stereo mix directly from our DAW, without any external influences other than the frequency response and distortions of the headphones and the amplifier that is driving them. A good pair of headphones can consistently and reliably deliver frequencies that extend considerably above and below the range of human hearing. It therefore seems logical that a pair of low distortion headphones with a perfectly flat frequency response would provide the ultimate audio reference. Right? The answer is ‘yes’ for listening, but ‘no’ for mixing. Why not?</span></p>
<p class="p1"><span class="s1">A mix done on speakers contains compensations for those external influences as they existed in the mixing room (frequency response, room acoustics, interaural crosstalk, visceral impact, etc.), and those compensations ultimately make the mix more <em>resilient</em> – giving it better translation through a wider range of playback systems. Headphone mixing does not have those external influences and therefore our headphone mixes do not compensate or allow for them, resulting in less resilient and more ‘headphone specific’ mixes that do not translate as well to reproduction through speakers and sometimes even through other types of headphones.</span></p>
<h4 class="p1"><span class="s1"><b>ADDING RESILIENCE</b></span></h4>
<p class="p1"><span class="s1">What can we do to incorporate those valuable ‘speaker mixing’ compensations into our headphone mixing process and thereby make our headphone mixes more resilient? Let’s start by looking at what headphone manufacturers are doing with frequency responses, then we’ll look at trickier ‘hands on’ mixing problems: making sense of panning in headphones, establishing a reverberation reference when there is no mixing room, and anticipating problems caused by interaural crosstalk – which doesn&#8217;t occur when monitoring in headphones.</span></p>
<h4 class="p1"><span class="s1"><b>Frequency Response &amp; Voicing</b></span></h4>
<p class="p1"><span class="s1">One of the goals of speaker manufacturers, regardless of whether their products are intended for professional or consumer use, is to create speakers with a relatively flat frequency response from 20Hz to 20kHz. Most studio monitors include their frequency response graph in the documentation that comes with them; it’s rarely a perfectly flat line but if the deviations are gradual and remain within about ±2dB throughout the intended bandwidth the monitors are considered to be acceptable and we can learn to work with them. The illustration below shows the theoretical flat response (from 20Hz to 20kHz) that most speaker manufacturers aspire to (dark red), and the ±2dB window of deviation that is generally considered acceptable (light red).</span></p>
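The ±2dB acceptance criterion described above is straightforward to express as a check (a sketch; the function name is mine and the sample measurement points are invented for illustration):

```python
# Does a measured response stay within a ±2dB window of flat across
# the intended bandwidth? response_db maps frequency (Hz) to the
# measured deviation from flat, in dB.
def within_tolerance(response_db, tolerance_db=2.0):
    return all(abs(dev) <= tolerance_db for dev in response_db.values())

measured = {31.5: -1.5, 100: -0.5, 1000: 0.0, 8000: 1.0, 16000: 1.8}
print(within_tolerance(measured))  # True: all deviations within ±2dB
```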

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-6856" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6856 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >A good pair of headphones can consistently and reliably deliver frequencies that extend above and below the range of human hearing.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-3621" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-3621 .icon_description_text' 
 data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎12-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎12-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The dominance of speaker listening prior to the ascent of headphone listening means that our impression of what a flat frequency response <em>sounds like</em> has been skewed by the contribution of the listening room acoustics – to the point that, in comparison, a pair of headphones with a flat frequency response will sound excessively bright while also lacking in low frequency energy. Compared to speakers, the sound from headphones does not undergo the high frequency attenuation that occurs as sound passes through the air and is absorbed by soft furnishings in the room (hence the excessive brightness), and it does not receive the low frequency enhancement from the listening room’s resonant modes (hence the lack of low frequencies).</span></p>
<p class="p1"><span class="s1">Due to these differences, contemporary headphones are not designed to have a flat frequency response. Rather, they’re ‘voiced’ (i.e. their frequency response has been moved away from the theoretical ideal of ‘flat’) so that they sound like speakers with a flat frequency response. Hence we see headphone marketeers using descriptive phrases like ‘neutral tonality’ and ‘voiced to sound natural’, rather than showing frequency response graphs – because such graphs would alarm anybody who expected to see a perfectly straight line.</span></p>
<p class="p1"><span class="s1">Here’s the concept: start with a speaker with a flat frequency response, place a measurement microphone in front of it at the distance a typical listener would be, run a frequency sweep through the speaker, and capture it with the microphone. The result is the ‘flat’ frequency response as it is reproduced by the speaker and captured at the listening position. Build that frequency response into the headphones and they <em>should</em> sound like speakers with a flat frequency response.</span></p>
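<p class="p1"><span class="s1">For readers who like to see the arithmetic, the measurement described above can be sketched in a few lines of code. This is a minimal illustration under stated assumptions – real headphone voicing involves dummy-head capture, smoothing and averaging over many measurements – and the function names are invented for the example. It simply deconvolves the microphone capture from the sweep to obtain the response at the listening position, whose magnitude becomes the voicing target:</span></p>

```python
import cmath
import math

def response_at(signal, fs, freq):
    """Single-bin DFT: complex response of `signal` (a list of samples
    at sample rate `fs`) evaluated at `freq` Hz."""
    w = -2j * cmath.pi * freq / fs
    return sum(x * cmath.exp(w * k) for k, x in enumerate(signal))

def headphone_target_db(sweep, captured, fs, freqs):
    """Deconvolve the measurement: H(f) = Y(f)/X(f), where X is the
    sweep fed to the speaker and Y is what the measurement microphone
    captured at the listening position. |H| in dB, evaluated at each
    frequency in `freqs`, is the response the headphones would be
    voiced to reproduce."""
    out = []
    for f in freqs:
        x = abs(response_at(sweep, fs, f))
        y = abs(response_at(captured, fs, f))
        out.append(20 * math.log10(max(y, 1e-12) / max(x, 1e-12)))
    return out
```

<p class="p1"><span class="s1">Feeding the same signal back at half level, for instance, yields a flat −6dB curve at every probe frequency, confirming the deconvolution behaves as expected.</span></p>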
<p class="p1"><span class="s1">This seems simple enough, but it raises questions about the kind of room that should be used for such measurements because the room acoustics influence the sound captured by the microphone.</span></p>
<p class="p1"><span class="s1">Throughout the 1970s it was standard practice to use an anechoic chamber, thereby creating a ‘free-field’ environment where the only sound to reach the microphone was the direct sound from the speaker with no contribution from the room itself (i.e. no resonances, no reflections and no reverberation) other than the loss of high frequencies over distance through the air. This was known as ‘free-field equalisation’ and, not surprisingly, headphones that use ‘free-field equalisation’ sound rather like listening to a speaker with a flat frequency response placed in an anechoic chamber. It was an improvement over the sound of headphones with the theoretically perfect flat response, but it still did not correlate well with speaker listening because nobody listens to speakers in an anechoic environment. The headphone designers had the right idea, but there was more work to be done…</span></p>
<p class="p1"><span class="s1">In an attempt to create something that correlated better with speakers, the 1980s saw the introduction of ‘diffuse-field equalisation’ – a method that is still popular. A ‘point-source’ loudspeaker (i.e. a speaker that radiates frequencies equally well in all directions), with a flat frequency response, is placed in a reverberation chamber rather than an anechoic chamber. A frequency sweep is reproduced by the speaker and captured by a dummy head placed at a sufficient distance to ensure it is in the diffuse field (i.e. where the room’s reverberation is the dominant sound). This measurement provides the frequency response the headphones are voiced to reproduce. Many critically-acclaimed and widely-adopted headphones conform to the diffuse-field equalisation curve.</span></p>
<p class="p1"><span class="s1">More recently, tests by Dr Sean Olive and others working for Harman International (parent company of AKG, Crown, dbx, JBL, Lexicon, Soundcraft, Studer et al) replaced the free-field environment and the diffuse-field environment with what was generally considered to be a good sounding listening room. The results were then combined with the results of tests in which numerous listeners were asked to audition and rate their preferences for numerous headphones with different frequency responses. These tests and measurements resulted in the Harman target curve (aka the ‘Harman Curve’), as shown below:</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="875" height="534" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎13-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi.jpg 875w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-800x488.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-768x469.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎13-pichi-600x366.jpg 600w" sizes="(max-width: 875px) 100vw, 875px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">The Harman Curve is hardly ‘flat’, but it significantly closes the tonality gap between headphone and speaker reproduction, and provides very good tonal translation between them. Neumann seem to have taken this approach one step further by using their KH series monitors as the reference speakers for voicing their NDH headphones, resulting in headphones that correlate remarkably well with their monitors and minimise the ‘line-of-best-fit’ compromising that occurs when an instrument in the mix is <em>too</em> loud on one set of monitors but <em>too</em> soft on another set of monitors. This level of correlation between monitor speakers and headphones is something that very few other manufacturers can offer because most headphone manufacturers don’t make studio monitors, and most studio monitor manufacturers don’t make headphones.</span></p>
<p class="p1"><span class="s1">Mixing on headphones that are voiced this way (i.e. to sound like speakers with a flat response in a good room) will usually result in better translation to speaker playback in terms of tonality, and solves one of the major differences between speaker mixes and headphone mixes. However, it does not resolve spatial disparities such as panning and reverberation levels, and it doesn&#8217;t counter the effects of interaural crosstalk. Solving and/or compensating for those problems requires a more strategic approach…</span></p>
<h4 class="p1"><span class="s1"><b>Panning Compensation</b></span></h4>
<p class="p1"><span class="s1">As discussed earlier, speaker listening creates a stereo image that can be up to 60° wide (±30°, with 0° being directly in front of the listener). A sound panned hard left should appear at 30° to the left of centre (i.e. coming directly from the left studio monitor), and a sound panned hard right should appear at 30° to the right of centre (i.e. coming directly from the right studio monitor).</span></p>
<p class="p1"><span class="s1">In comparison, headphone listening creates a stereo image that can be up to 180° wide (±90°), depending marginally on the placement of the drivers within the ear cups. A sound that is panned hard left will be 90° to the left of the centre (coming directly from the left ear cup) and a sound that is panned hard right will be 90° to the right of centre (coming directly from the right ear cup).</span></p>
<p class="p1"><span class="s1">The difference between the widths of their stereo soundstages can be represented as a ratio of 180:60, or 3:1, meaning the soundstage of a headphone mix is approximately 3x wider than it will be when heard through speakers. This is an important consideration when mixing on headphones, because a sound that appears 45° to the left of centre in headphones will be heard at only 45°/3 = 15° to the left through speakers.</span></p>
<p class="p1"><span class="s1">Although headphone monitoring exaggerates panning when compared to speaker monitoring, both monitoring systems downplay panning positions when compared to the visual placement indicated by the pan pot – which rotates through a range of 270° (±135°). The panning ratios between the pan pot, headphones and studio monitors are therefore 270:180:60, or 4.5:3:1. From the point of view of a mixing engineer sitting in the stereo sweet spot, a hard left pan will be seen at 135° to the left on the pan pot, but will be heard at 90° to the left on headphones and heard at 30° to the left through studio monitors. (To add to the confusion, it will be shown at 45° to the left on a <em>goniometer</em>, but more about that later…)</span></p>
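<p class="p1"><span class="s1">The 4.5:3:1 relationship is simple proportional scaling, and can be captured in a few helper functions. This is a hypothetical sketch – the names and the assumption of linear angle scaling are ours for illustration, not a standard API:</span></p>

```python
# Full-width ratios from the text: pan pot +/-135 deg, headphones
# +/-90 deg, speakers +/-30 deg, i.e. 270:180:60 or 4.5:3:1.
POT_MAX, PHONES_MAX, SPEAKERS_MAX = 135.0, 90.0, 30.0

def speaker_angle(headphone_angle):
    """Where a sound heard at `headphone_angle` (degrees from centre,
    negative = left) on headphones will appear on speakers."""
    return headphone_angle * SPEAKERS_MAX / PHONES_MAX

def headphone_angle_for(target):
    """Pan position to use when mixing on headphones so the sound
    lands at `target` degrees on speakers: 3x the intended angle."""
    return target * PHONES_MAX / SPEAKERS_MAX

def pot_angle_for_speaker(target):
    """Visual pan-pot rotation corresponding to a speaker-image
    angle of `target` degrees (4.5x scaling)."""
    return target * POT_MAX / SPEAKERS_MAX
```

<p class="p1"><span class="s1">For example, a sound panned 45° left in headphones maps to 15° left on speakers, and a hard 30° speaker pan corresponds to the pot’s full 135° rotation – the numbers worked through above.</span></p>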

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="803" height="592" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎14-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi.jpg 803w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-800x590.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-768x566.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎14-pichi-600x442.jpg 600w" sizes="(max-width: 803px) 100vw, 803px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p class="p1"><span class="s1">For any given headphones, it is always good practice to start a mix by establishing the location of five reference points across the stereo soundstage – hard left, mid-left, centre, mid-right, hard right – and reminding ourselves of where those locations appear on the pan pot. This is especially important if we’re new to headphone mixing and intuitively pan sounds <em>by ear</em> to the same locations we’re used to hearing them when mixing through speakers. This will result in a very narrow soundstage when heard through speakers because speaker playback reduces the width of a headphone mix by a factor of 3:1, as mentioned earlier.</span></p>
<p class="p1"><span class="s1">Remember, the stereo soundstage on headphones is approximately 3x wider than it is on speakers. So, if you want a sound to appear 15° to the left (i.e. mid-left) when heard through speakers, you have to pan it 45° to the left (i.e. 3 x 15°) if mixing on headphones.</span></p>
<p class="p1"><span class="s1">One interesting aspect of headphone mixing related to panning is the location of ‘centre’. Depending on the headphones and the listener, ‘centre’ often appears to be inside the listener’s head or directly above it. To overcome this problem, many contemporary headphone designs use angled drivers and/or angled ear pads to place the drivers slightly forward of the ear canal. This allows the pinnae to create subtle ISDs that place the soundstage in front of the listener, at the possible expense of a minor reduction in the width of the stereo soundstage.</span></p>
<h4 class="p1"><span class="s1"><b>Reverberation Compensation</b></span></h4>
<p class="p1"><span class="s1">Every well-designed mixing room conforms to a reverberation curve that ensures a level of background reverberation representing an idealised real-world listening environment. Among other things, this creates a ‘reverberation reference’ to balance the levels and times of our reverberation effects against, ensuring they are not significantly lower, higher, longer or shorter than intended when the mix is taken out of the room and played in the real world.</span></p>
<p class="p1"><span class="s1">It is commonly believed that if we mix in a room that does not have enough reverberation of its own, we will add too much reverberation to our mixes to compensate. The same thinking implies that if we mix in a room that has too much reverberation of its own we won’t add enough reverberation to our mixes. Although this appears to make sense, it ignores the ear/brain’s remarkable ability to distinguish between the reverberation of the mixing room and the reverberation added to the mix. The mixing room’s reverberation is not necessarily heard as part of the mix’s reverberation, but it does provide a masking effect that the reverberation in our mixes needs to overcome. The result is as follows…</span></p>
<p class="p1"><span class="s1">If we mix through speakers in a room that has a particularly low reverberation reference, the levels of the reverberation effects we add to the mix might not be high enough because they are easily heard over the room’s reverberation reference. Likewise, the reverberation times we choose might be too short because the room’s low reverberation reference makes it easier to hear the added reverberation tails for longer.</span></p>
<p class="p1"><span class="s1">Similarly, if we mix through speakers in a room that has a particularly high reverberation reference, the levels of the reverberation effects we add to our mix might be set too high in order to be heard over the room’s high reverberation reference. Likewise, the reverberation times we choose might be too long because the room’s high reverberation reference makes it harder to hear the added reverberation tails for the desired time.</span></p>
<p class="p1"><span class="s1">The illustration below demonstrates this problem. The upper graph shows reverberation (green) being added to a mix in three different mixing rooms: one with a very low reverberation reference, one with a good reverberation reference, and one with a very high reverberation reference. </span><span class="s1">Each room&#8217;s reverberation reference level is shown in grey, and in each case the mix reverberation (green) has been added at an appropriate level and duration to achieve the same perceived level and duration in each room – represented by the green area above the grey areas.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<div class="vc_single_image-wrapper   vc_box_border_grey"><img width="877" height="636" src="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="‎15-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi.jpg 877w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-800x580.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-768x557.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/08/‎15-pichi-600x435.jpg 600w" sizes="(max-width: 877px) 100vw, 877px" /></div>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			
		</div>
	</div>
</div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The lower graph shows what happens when each mix is replayed in a room with a good reverberation reference. The first mix&#8217;s reverberation is inaudible, resulting in a very &#8216;dry&#8217; mix. The second mix is consistent with the upper graph. The third mix&#8217;s reverberation is too high, resulting in a very &#8216;wet&#8217; mix.</p>
<p class="p1"><span class="s1">Headphone listening is not affected by the mixing room’s acoustics and therefore has no reverberation reference, making it harder to judge and set reverberation levels and times in a way that translates well to speakers. A mix made entirely through speakers in an acoustically-designed mixing room translates to headphones with no surprises in reverberation levels or times because the reverberation effects have been balanced against the room’s reverberation reference. However, a mix made entirely through headphones could have surprising changes of reverberation levels when heard through speakers because it has been made with no reverberation reference. What sounds ‘just right’ when mixed in headphones is often too low when heard through speakers.</span></p>
<p class="p1"><span class="s1">We can solve the reverberation reference problem when mixing with headphones by using a reference track as a reality check, which we’ll talk more about in the next instalment.</span></p>
<h4 class="p1"><span class="s1"><b>Interaural Crosstalk Compensation</b></span></h4>
<p class="p1"><span class="s1">When mixing with headphones we cannot predict what changes will happen to our mix when the left and right signals combine together in the air and at the ears. As mentioned earlier, this is known as ‘interaural crosstalk’ and is an unavoidable part of speaker monitoring: some of the left channel’s signal <em>will</em> enter the right ear, and some of the right channel’s signal <em>will</em> enter the left ear.</span></p>
<p class="p1"><span class="s1">Interaural crosstalk can affect the perceived levels and panning of individual instruments in our stereo mix, it can introduce comb filtering, and it can alter the perceived level of reverberation and similar stereo time-based effects.</span></p>
<p class="p1"><span class="s1">The easiest way to check for the effects of interaural crosstalk when mixing with headphones is to check the mix in mono. This creates a ‘worst case’ crosstalk scenario (i.e. both channels are completely added together) that will exaggerate any level changes or comb filtering issues that might occur when the mix is heard through speakers. Subtle changes in individual signal levels within the balance are to be expected, but can also be indicators of hidden weaknesses in the mix that are worth addressing and fine-tuning. For example, sounds in the stereo mix that become too loud or too soft when monitored in mono are probably not at the right level in the stereo mix, and should be adjusted accordingly. A headphone mix that sounds acceptable when monitored in stereo <em>and</em> acceptable when monitored in mono stands a good chance of sounding acceptable when heard through speakers, too.</span></p>
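<p class="p1"><span class="s1">The mono ‘worst case’ check can likewise be sketched numerically. This is an illustrative fragment with invented names, not a metering standard: it folds the two channels down to mono and reports how far the overall level shifts, with values near 0dB for centred sounds and large negative values flagging phase cancellation:</span></p>

```python
import math

def rms(samples):
    """Root-mean-square level of a list of samples."""
    return math.sqrt(sum(v * v for v in samples) / len(samples)) if samples else 0.0

def mono_fold(left, right):
    """Worst-case interaural crosstalk: both channels fully summed.
    Halving keeps a centre-panned (identical L/R) signal at its
    original level; decorrelated or out-of-phase content drops."""
    return [(l + r) / 2.0 for l, r in zip(left, right)]

def mono_level_change_db(left, right):
    """How much the level shifts in the mono check, in dB: near 0 dB
    for centred sounds, about -3 dB for a hard-panned (one-channel)
    sound, and strongly negative for out-of-phase content."""
    stereo = rms(left + right)  # RMS pooled over both channels
    mono = rms(mono_fold(left, right))
    return 20 * math.log10(max(mono, 1e-12) / max(stereo, 1e-12))
```

<p class="p1"><span class="s1">A centred signal (identical left and right) folds down with no level change, while fully out-of-phase channels cancel almost completely – exactly the kind of hidden weakness the mono check is meant to expose.</span></p>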
<p class="p1"><span class="s1">Another useful tool for revealing potential interaural crosstalk problems in a headphone mix is the <em>goniometer</em>, also known as a correlation meter – a popular prop in old science fiction movies. More about that useful tool in the next instalment…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #ddb41c;text-align: left;font-family:Source Sans Pro;font-weight:900;font-style:italic" class="vc_custom_heading" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-3">Next instalment: Useful tools for mixing on headphones.</a></h2></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Mixing With Headphones 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-2/feed</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>Mixing With Headphones 1</title>
		<link>https://www.audiotechnology.com/tutorials/mixing-with-headphones-1</link>
					<comments>https://www.audiotechnology.com/tutorials/mixing-with-headphones-1#comments</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Wed, 16 Aug 2023 01:55:12 +0000</pubDate>
				<category><![CDATA[Issue 89]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[Mixing With Headphones]]></category>
		<category><![CDATA[Part 1]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=76892</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-1">Mixing With Headphones 1</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p><span class="s1">A dozen lifetimes ago – when sound was only as malleable as plastic tape – I spent my days working with resistors and capacitors, and my evenings playing with tape recorders and analogue synths. One of the technicians I was apprenticed to, a proto hi-fi buff aware of my sonic inclinations, handed me a cassette. “Listen in headphones, with your eyes closed.” That night I donned my Pioneer SE305s, pressed play and closed my eyes. A box of matches, shaken like a maraca, circled around me before passing over my head, under my chin, and stopping at the tip of my nose. I knew there wasn’t <i>really</i> a box of matches in front of me, but with my eyes closed this ‘advanced’ audio technology was indistinguishable from magic.</span></p>
<p class="p1"><span class="s1">Eager to add this illusory effect to my electronic soundscapes, I soon learnt that it was a ‘binaural recording’ made with a ‘dummy head’: a life-sized model of a human head with a microphone mounted in each ear to capture the left and right channel signals specifically as they are heard by each ear. The illusion relied on two things. First, capturing the signal received by each ear along with its embedded <i>Head Related Transfer Functions</i> (HRTFs) – which are the changes imposed upon the sound as it passes across the face, around the head, and navigates the pinna (aka ‘ear flap’ or ‘auricle’) before entering the ear canal. Second, the signal from the left side microphone must go to the left ear <i>only</i>, and the signal from the right side microphone must go to the right ear <i>only</i>. The ear/brain system uses the differences between each ear’s HRTFs to determine the location of the sound, so keeping the two channels isolated is necessary for the binaural effect to work.</span></p>
<p class="p1"><span class="s1">HRTFs vary from person to person depending on the size and shape of their head and their pinnae, meaning binaural recordings are more immersive to some people than others. The dummy head’s dimensions were averaged over a lot of different head and pinnae sizes and shapes, and captured left and right channel HRTFs with sufficient differences between them to fool most people – including me. However, playing the matchbox illusion through speakers was beyond disappointing. The illusion collapsed into the space in front of me, there were numerous instances of comb-filtering as the matchbox moved around within that collapsed space, and there were various imaging anomalies as the left and right channel signals and their HRTFs combined in the air – minimising the differences between them and confusing the ears rather than fooling them. In other words, the matchbox illusion <i>only</i> worked in headphones – they were the smoke and mirrors, and without them I could not ignore the man behind the curtain.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">In those days nobody listened to headphones except audio pros, musicians in studios, Dads enjoying their stereograms without waking up the kids, and old men listening to race calls through mono earbuds jacked into pocket radios. Oh, and that weirdo standing in the train door fingering an air guitar while wearing one of those newfangled ‘Walkmans’ (a fad <i>we all knew</i> would never last), completely oblivious to the silent audience hiding behind walls of newspapers and magazines. Headphones were definitely not fashion accessories, and anyone wearing them in public looked ridiculous.</span></p>
<p class="p1"><span class="s1">Disillusioned, I abandoned my plan of imposing artificial HRTFs onto my electronic soundscapes to create immersive binaural illusions. Those illusions only worked in headphones, and <em>nobody</em> listened in headphones…</span></p>
<p class="p1"><span class="s1">A dozen lifetimes later and, thanks to the proof-of-concept provided by Sony’s Walkman and refined by Apple’s double-whammy iPod/iTunes combo, <i>everybody</i> is listening in headphones. Oh, except for that pencil-clutching weirdo in the train door immersed in the pages of one of those newfangled ‘journals’ (a diary by any other name is still a diary), device-less and oblivious to the rows of headphoned performers spot-lit by tiny screens while silently fingering air guitars, conducting orchestras with finger batons, striking out at knee drums and air cymbals, or navigating app-worlds that have artificial HRTFs imposed onto their electronic soundscapes to create immersive binaural illusions.</span></p>
<p class="p1"><span class="s1">The cynical nostalgia evoked by the ‘nobody talks to anybody any more’ memes and tropes would have you believe that in the days before mobile devices, every train carriage, every bus and every waiting room was filled with strangers striking up genial conversations and filling the air with chatter. Atavistic nonsense! Before mobile devices people isolated themselves with newspapers, magazines and window seats, intentionally filling the air with the same lack of chatter as they do now. So slip on your headphones and forget about the negative-calorie small talk that Luddites cling to like Replicants embracing implanted memories – because it never really happened. Air boom, air tish…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-2662" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2662 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >In those days nobody listened to headphones…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-1114" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-1114 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990674300"><div class="wpb_wrapper"><div id="bsa-block-970--450" class="bsaProContainerNew bsaProContainer-86 bsa-block-970--450 bsa-pro-col-1" style="display: block !important"><div class="bsaProItems bsaGridNoGutter " style="background-color:"><div class="bsaProItem bsaReset" data-animation="fadeIn" style=""><div class="bsaProItemInner" style="background-color:"><div class="bsaProItemInner__thumb"><div class="bsaProAnimateThumb" style="display: block;margin: auto;"><a class="bsaProItem__url" href="https://www.audiotechnology.com/advertise?sid=86&bsa_pro_id=828&bsa_pro_url=1" target="_blank"><div class="bsaProItemInner__img" style="background-image: url(&#39;https://www.audiotechnology.com/wp-content/uploads/bsa-pro-upload/1691035019-Australis_LAB GRUPPEN_DA-pichi.jpg&#39;)"></div></a></div></div></div></div></div></div><script>
			(function($){
				function bsaProResize() {
					var sid = "86";
					var object = $(".bsaProContainer-" + sid);
					var imageThumb = $(".bsaProContainer-" + sid + " .bsaProItemInner__img");
					var animateThumb = $(".bsaProContainer-" + sid + " .bsaProAnimateThumb");
					var innerThumb = $(".bsaProContainer-" + sid + " .bsaProItemInner__thumb");
					var parentWidth = 970;  // banner's native width (px)
					var parentHeight = 450; // banner's native height (px)
					var objectWidth = object.parent().outerWidth();
					// Scale the banner down proportionally when its container is
					// narrower than the native width; otherwise use the full height.
					if (objectWidth > 0 && objectWidth < parentWidth) {
						var scale = objectWidth / parentWidth;
						animateThumb.height(parentHeight * scale);
						innerThumb.height(parentHeight * scale);
						imageThumb.height(parentHeight * scale);
						object.height(parentHeight * scale);
					} else {
						animateThumb.height(parentHeight);
						innerThumb.height(parentHeight);
						imageThumb.height(parentHeight);
						object.height(parentHeight);
					}
				}
				$(document).ready(function(){
					bsaProResize();
					$(window).resize(bsaProResize);
				});
			})(jQuery);
		</script>						<script>
							(function ($) {
								var bsaProContainer = $('.bsaProContainer-86');
								// Delays in seconds before showing/hiding the ad; 0 disables each timer.
								var number_show_ads = 0;
								var number_hide_ads = 0;
								if (number_show_ads > 0) {
									setTimeout(function () { bsaProContainer.fadeIn(); }, number_show_ads * 1000);
								}
								if (number_hide_ads > 0) {
									setTimeout(function () { bsaProContainer.fadeOut(); }, number_hide_ads * 1000);
								}
							})(jQuery);
						</script>
						</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">Headphones have become fashionable, and therefore headphones have become status symbols. If I had a dollar for every time my Neumann NDH20s have been selfied by passers-by and diners at the night markets of South East Asia, I’d have enough to buy a Lamy clutch for every Yuccie arguing the difference between a journal and a diary.</span></p>
<p class="p1"><span class="s1">Headphones have democratised high fidelity; for a few hundred dollars you can get a pair of high-status headphones that shrug “hold my beer” when pitted against thousands of dollars’ worth of speakers with their obligatory acoustic treatments and tightly-defined ‘sweet spots’. Headphones allow you to sit, stand, lie or move about wherever you like because <i>you</i> are the sweet spot; the room’s acoustics don’t even know you’re there.</span></p>
<p class="p1"><span class="s1">Every major shopping mall has at least one store dedicated to mobile audiophilia and ‘head-fi’, i.e. high fidelity audio through headphones. If the stuff they sell seems expensive, you’d better reassess your priorities: it’s chicken feed compared to what it costs to get similar performance from speakers and their obligatory acoustic treatments, <i>and</i> you can take it with you anywhere.</span></p>
<p class="p1"><span class="s1">Most significantly, headphones have supplanted speakers for the vast majority of music purchasing decisions and <i>active</i> music consumption (i.e. listening with <i>intent</i> rather than plastering sonic wallpaper over the background noise). Simultaneously, there has been a resurgence of interest in binaural recording and the immersive possibilities it offers <em>without</em> requiring a room full of speakers and the hope that the playback system adheres to the same format adopted during recording and mixing. Microphone manufacturers like Sennheiser and DPA have added binaural microphone systems to their product lines, and Neumann’s dummy head (aka ‘Fritz’) has reached new levels of celebrity. Popular music artists are now inserting binaural elements into their multitrack recordings, taking the headphone listener by surprise with sounds and voices appearing from beyond the musical soundstage.</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p class="p1"><span class="s1">As audio professionals, we would be foolish to underrate the significance of headphones in our mixing and monitoring decisions. Rather, we should be celebrating their popularity and exploiting their advantages. </span>It is little wonder that many of the younger generation of producers do most of their mixing work on headphones, and consider the big monitor speakers to be primarily for show. However, the key word in that sentence is &#8216;most&#8217;. Speakers bear little relevance to their world or their market, but they&#8217;re still cross-referencing on speakers, and they&#8217;ve got mastering engineers downstream ironing out the kinks while listening through big monitor speakers.</p>
<p class="p1"><span class="s1">What was once ridiculous is now mainstream. What is now ridiculous is placing <i>too much</i> significance on big monitor speakers and their obligatory acoustic treatments and inflexible sweet spots when mixing, because we’re living in a world where <i>most</i> people’s exposure to speaker reproduction is background music in cafés and shopping malls, platform announcements at train stations, and – topping the list – device notifications.</span></p>
<p class="p1"><span class="s1">Ding!</span></p>
<p class="p1"><span class="s1">I have hate mail…</span></p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-5299" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5299 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >we would be foolish to underrate the significance of headphones</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2371" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2371 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #2885f8;text-align: left;font-family:Source Sans Pro;font-weight:900;font-style:italic" class="vc_custom_heading wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft" ><a href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-2">Read on for differences between speaker mixing and headphone mixing…</a></h2></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/mixing-with-headphones-4">Mixing With Headphones 4</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/mixing-with-headphones-4/feed</wfw:commentRss>
			<slash:comments>6</slash:comments>
		
		
			</item>
		<item>
		<title>Microphones: Polar Response 2</title>
		<link>https://www.audiotechnology.com/tutorials/microphones-polar-response-2</link>
					<comments>https://www.audiotechnology.com/tutorials/microphones-polar-response-2#respond</comments>
		
		<dc:creator><![CDATA[Greg Simmons]]></dc:creator>
		<pubDate>Wed, 28 Jun 2023 23:00:16 +0000</pubDate>
				<category><![CDATA[Issue 88]]></category>
		<category><![CDATA[Microphones]]></category>
		<category><![CDATA[Tutorials]]></category>
		<category><![CDATA[2]]></category>
		<category><![CDATA[greg simmons]]></category>
		<category><![CDATA[issue]]></category>
		<category><![CDATA[microphones]]></category>
		<category><![CDATA[polar response]]></category>
		<category><![CDATA[tutorial]]></category>
		<guid isPermaLink="false">https://www.audiotechnology.com/?p=76268</guid>

					<description><![CDATA[<p> [...]</p>
<p><a class="btn btn-secondary understrap-read-more-link" href="https://www.audiotechnology.com/tutorials/microphones-polar-response-2">Read More...</a></p>
<p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/microphones-polar-response-2">Microphones: Polar Response 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></description>
										<content:encoded><![CDATA[<section class="wpb-content-wrapper"><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element  drop-cap" >
		<div class="wpb_wrapper">
			<p>In the previous instalment we defined ‘on-axis sound’ as being any sound that arrives directly <i>on-axis</i> to the microphone – a definition that, by default, means any sound that does not arrive on-axis is therefore ‘off-axis sound’. On-axis sound is almost certainly coming from whatever we’ve pointed the microphone at, so we can consider that to be desirable. What about off-axis sound? Is it good or bad?</p>
<p>In the close-miked world of popular music, whether in the studio or on stage, off-axis sound is generally considered bad – it’s usually coming from a different sound source that we don’t want to capture with the microphone we’re placing, and is especially bad if it is so far off-axis that it sounds dull and muddy. The whole point of close-miking is to focus on the required sound and capture it with as much isolation (i.e. lack of other sounds) as possible.</p>
<p>In the distant-miked worlds of nature recording and capturing atmos for film, the opposite is true: the majority of sound sources are off-axis, they might <i>all</i> be required, and the goal is to capture them all equally well.</p>
<p>The two-mic direct-to-stereo world of choral, chamber and similar acoustic music sits somewhere in between: not all of the musical sound sources are on-axis, and a large proportion of the off-axis sound (i.e. the reverberation of the performance space) must be captured at an appropriate balance with the direct sound of the music.</p>
<p>In distant-miking applications such as nature recording, atmos for film, and direct-to-stereo recording, the tonality of the off-axis sounds is vitally important.</p>
<h4><b>Distance &amp; Tonality</b></h4>
<p>Moving further from the sound source places additional demands on our mics that we rarely have to consider when close-miking. When we move a directional mic beyond approximately 30cm from the sound source we lose low frequency energy due to the proximity effect, as discussed in previous instalments of this series. Moving beyond approximately 60cm from the sound source creates challenges with our choices of polar response and mic placement, and moving even further exposes weaknesses that explain the hitherto inexplicable price difference between a small single-diaphragm cardioid condenser that costs $200 and one that costs $2000 when both <i>appear</i> to have the same basic specifications.</p>
<p>To understand these things we need to explore two important aspects of a microphone’s polar response. The first is its <i>Distance Factor</i>, which is an important part of microphone choice and placement. The second is a microphone’s <i>Off-Axis Response</i>, which is often what we’re paying for when we choose the $2000 mic over the $200 mic. To understand the relevance of Distance Factor and Off-Axis Response, we’ll also need to take a brief look at room acoustics. In this instalment we’re going to focus on room acoustics and Distance Factor. In the next instalment we’ll look at Off-Axis Response…</p>
<h4><b>DISTANCE FACTOR</b></h4>
<p>The Distance Factor is simply a number for comparing the directionality of different polar responses. A polar response with a high Distance Factor can be placed further from the sound source than a polar response with a low Distance Factor while still capturing the same balance of <em>direct sound</em> and <em>indirect sound</em>.</p>
<p>Let’s consider the <i>direct sound</i> to be the sound we want to capture on-axis from the sound source, and the <i>indirect sound</i> to be the reverberation of the room. Let’s also consider the reverberation of the room to be a true <i>diffuse field</i> – which ultimately means off-axis sound can arrive from any direction with equal probability, and the SPL is consistent throughout the room. These are the conditions in which the Distance Factor figure is accurate: it is, essentially, a mathematical indication of the level of the direct sound (on-axis) versus the level of sounds arriving from all other directions (off-axis) in a diffuse field. That seems simple to understand, and probably explains why most audio textbooks devote very few words to Distance Factor. But if we look further, so to speak, we’ll see that Distance Factor is worthy of its own instalment in this series because it ties together polar response, microphone placement and room acoustics. There’s not a lot to say <i>about</i> the Distance Factor, but there’s a lot to say <i>around</i> it. Let’s get started…</p>
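The article doesn’t give a formula, but in standard microphone texts the Distance Factor is derived as the square root of a pattern’s <i>directivity factor</i> Q (the ratio of on-axis pickup to diffuse-field pickup). A minimal sketch, using conventional textbook Q values rather than figures from this article:

```python
import math

# Directivity factors (Q) for common polar patterns -- standard textbook
# values, not taken from this article.
Q = {
    "omnidirectional": 1.0,
    "cardioid": 3.0,
    "bidirectional": 3.0,
    "supercardioid": 3.7,
    "hypercardioid": 4.0,
}

# Distance Factor = sqrt(Q): how much further the pattern can be placed
# than an omni while capturing the same direct/reverberant balance.
for pattern, q in Q.items():
    print(f"{pattern}: distance factor ≈ {math.sqrt(q):.1f}")
```

Running this reproduces the familiar figures: 1.0 for the omni, 1.7 for the cardioid and bidirectional, 1.9 for the supercardioid and 2.0 for the hypercardioid.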
<h4><strong>Direct Sound vs Reverberation</strong></h4>
			<p>The relationship between the direct sound, the room’s reverberation and the distance from the sound source is shown in the graph below, with SPL on the vertical axis and distance on the horizontal axis. Note that on this graph the distance doubles with each equal-sized increment of the horizontal axis moving to the right, just as the sound pressure doubles (i.e. +6dB) with each equal-sized increment of the vertical axis moving upwards.</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-6166" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6166 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Moving further from the sound source places additional demands on our mics that we rarely have to consider when close-miking.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2941" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2941 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2127851514]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="915" height="599" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="01-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi.jpg 915w, https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi-800x524.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi-768x503.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/01-pichi-600x393.jpg 600w" sizes="(max-width: 915px) 100vw, 915px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The diagonal green line represents the SPL of the direct sound, on-axis from the sound source and on-axis to the microphone. In this particular example the graph shows that the direct sound’s SPL was 94dB at a distance of 1m in front of the sound source. From there we can see that the direct sound’s SPL decreases by 6dB (i.e. the sound pressure halves) with every doubling of distance, in accordance with the Inverse Square Law. So every increment in distance along the horizontal axis <i>doubles</i> the distance from the sound source and <i>drops</i> the direct sound’s SPL by 6dB.</p>
<p>The horizontal blue line represents the SPL of the reverberation, which, as we’ve already stated, is a diffuse field creating an SPL that is consistent throughout the room regardless of the distance from the source. In this particular example it is 88dB SPL. Every increment along the horizontal axis <i>doubles</i> the distance from the sound source but has <i>no effect</i> on the reverberation’s SPL.</p>
<p>At 1m from the sound source we can see that the direct sound’s SPL of 94dB is 6dB higher than the reverberation’s SPL of 88dB. As we move further from the sound source we eventually reach a distance where the direct sound has the same 88dB SPL as the reverberation. This is known as the <i>Critical Distance</i>, and in this example it is 2m. At distances less than the Critical Distance the direct sound is dominant, while at distances greater than the Critical Distance the reverberation is dominant. Close mics are placed within the Critical Distance because they’re intended to capture more of the direct sound and less reverberation, while room mics are located beyond the Critical Distance because they’re intended to capture more reverberation and less direct sound. The fundamental goal of two-mic direct-to-stereo recording is to place the microphones at a distance that captures an appropriate balance of direct and reverberant sound, while also capturing a mix of the direct sounds that represents and serves the music.</p>
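The Inverse Square Law and the Critical Distance described above can be sketched in a few lines. This is a minimal illustration using the article’s example figures (94dB SPL at 1m, a diffuse reverberant field at 88dB SPL); the function names are my own:

```python
import math

def direct_spl(spl_at_1m, distance_m):
    """Direct-sound SPL at a given distance, per the Inverse Square Law
    (-6dB for every doubling of distance)."""
    return spl_at_1m - 20 * math.log10(distance_m)

def critical_distance(spl_at_1m, reverb_spl):
    """Distance at which the direct sound falls to the level of the
    (distance-independent) reverberant field."""
    return 10 ** ((spl_at_1m - reverb_spl) / 20)

# The article's example: 94dB SPL at 1m, reverberation at 88dB SPL.
print(direct_spl(94, 2))          # ≈ 88dB at 2m (down 6dB from 94dB)
print(critical_distance(94, 88))  # ≈ 2.0m, matching the graph
```

Beyond that 2m Critical Distance the reverberation dominates; inside it, the direct sound dominates, which is exactly the close-mic/room-mic split described above.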

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>All of this sheds light on something we intuitively know: as we move the mic further from the sound source we get more reverberation (aka room sound). What really happens, as confirmed by the previous graph, is that as we move further from the sound source the reverberation <i>stays the same</i> but the direct sound <i>decreases</i>. The end result is perceptually the same: with more distance the room sound becomes more apparent. When we move the mic(s) further from the sound source we increase the gain to bring the direct sound back up to the desired level, and the increase in gain brings the reverberation up with it <i>unless we change to a polar response with a higher Distance Factor</i>.</p>
<h4><b>Comparing Distance Factors</b></h4>
<p>The illustration below provides a visual explanation of the Distance Factor values. On the left there is a single sound source (a speaker) placed in a large space. The reverberation of the space, which is evenly distributed throughout the room, is represented by the opaque blue background. Extending out from the speaker is a horizontal green line indicating the direct sound, on-axis from the speaker. A series of dots appear along the green line, each representing the distance and location that a particular polar response must be placed – relative to the omnidirectional polar response – to ensure that each polar response captures the same balance of direct and reverberant sound.</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-6970" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-6970 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >...and the increase in gain brings the reverberation up with it <i>unless we change to a polar response with a higher Distance Factor</i>.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2335" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2335 
.icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2924391437]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="02-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/02-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The first dot from the left represents the location of the omnidirectional polar response, which is the reference for all of the Distance Factors. It has been placed at a certain distance from the sound source (i.e. the speaker); it could be any distance we chose that provided the desired balance of direct sound and reverberation. For this example let’s use the Critical Distance, where the direct sound and the reverberant sound are equal in level.</p>
<p>It doesn’t matter what units of measurement we use to represent the distance between the sound source and the omnidirectional microphone; all that matters is that it is the distance required to place an omnidirectional microphone in front of <i>that</i> sound source in <i>that</i> room to capture our desired balance of direct sound and reverberant sound. We’ll call it a ‘unit of distance’, abbreviated to <i>d</i>. This gives the omnidirectional polar response a Distance Factor of 1.0, because its distance is 1.0 x <i>d</i> from the sound source.</p>
<p>[Note that, unlike the previous illustration, the horizontal axis in this graph is measured in linear multiples of distance. If you were to measure them you would find that the hypercardioid polar response <i>is</i> 2x further from the sound source than the omnidirectional polar response just as the graph shows, and the lobar/shotgun polar response <i>is</i> 3x further from the sound source than the omnidirectional polar response just as the graph shows.]</p>
<p>Moving beyond the omnidirectional polar response and along the line of the direct sound, we see a dot representing the subcardioid polar response. It has a Distance Factor of 1.2, which means it must be placed 1.2x further from the sound source than the omnidirectional microphone, or 1.2 x <i>d</i>, if we want it to capture the same balance of direct sound and reverberation. If the omnidirectional was 1m from the sound source and captured an equal balance of direct sound and reverberation, the subcardioid would have to be 1.2 x 1m = 1.2m from the sound source to capture the same balance. Likewise, if the omnidirectional was 2m from the sound source, the subcardioid would have to be 1.2 x 2m = 2.4m from the sound source to capture the same balance of direct sound and reverberation.</p>
<p>As we move further along the line of direct sound we see the hemispherical polar response at 1.4 x <i>d</i>, the cardioid and bidirectional polar responses both at 1.7 x <i>d</i>, the supercardioid polar response at 1.9 x <i>d</i>, the hypercardioid polar response at 2.0 x <i>d</i>, and the lobar/shotgun polar response at 3.0 x <i>d</i>. Despite the considerable differences between them, each polar response on the illustration will deliver the same balance of direct sound and reverberation <em>if</em> it is placed on the same axis as the omnidirectional polar response but at the distance determined by its Distance Factor. If each polar response captures the same balance of direct sound and reverberation, will they all sound the same?</p>
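The placement arithmetic above is simply a multiplication by each pattern’s Distance Factor. A minimal sketch, using the values quoted in this instalment (the helper function name is my own):

```python
# Distance Factors as quoted in this instalment, relative to an
# omnidirectional microphone placed at the reference distance d.
DISTANCE_FACTOR = {
    "omnidirectional": 1.0,
    "subcardioid": 1.2,
    "hemispherical": 1.4,
    "cardioid": 1.7,
    "bidirectional": 1.7,
    "supercardioid": 1.9,
    "hypercardioid": 2.0,
    "lobar/shotgun": 3.0,
}

def equivalent_distance(omni_distance_m, pattern):
    """Distance at which `pattern` captures the same balance of direct
    sound and reverberation as an omni at omni_distance_m."""
    return omni_distance_m * DISTANCE_FACTOR[pattern]

print(equivalent_distance(1.0, "subcardioid"))   # 1.2m, as in the example above
print(equivalent_distance(2.0, "subcardioid"))   # 2.4m
print(equivalent_distance(1.0, "lobar/shotgun")) # 3.0m
```

As the article goes on to note, matching the direct/reverberant balance this way does not guarantee the patterns will <i>sound</i> the same at those distances.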
<p>No, because there’s more to acoustics than the direct sound and the reverberation. As we move further from the sound source we lose high frequencies due to air absorption, so the direct sound captured from further away will be duller than the direct sound captured from closer. Also, all of the polar responses shown here (apart from the omnidirectional) are directional and will therefore suffer from the proximity effect to some degree – which means they’ll capture less low frequency energy when placed at distances greater than approximately 30cm from the sound source.</p>
<p>Both of the problems described above are relatively easy to solve with EQ if necessary, although sometimes they can work in our favour if the sound source is too bright or too boomy. There are other problems that are not so easy to fix, and to understand those we need to take a shallow dive into…</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4><b>THE THREE Rs OF ACOUSTICS</b></h4>
<p>Acoustics is the study of sound behaviour in enclosed spaces (i.e. rooms). Most of that behaviour depends on a) the frequency of the sound, b) the dimensions and shape of the room, c) what the room’s surfaces are made from, and d) the amount of sound absorptive materials within the room. The interaction of these parameters brings us the well-known acoustic phenomena of <i>resonance</i>, <i>reflections</i> and <i>reverberation</i> – which collectively form the Three Rs of Acoustics.</p>
<p>Acousticians and architects use the term ‘cuboidal room’ to describe any room that has six surfaces (four walls, a floor and a ceiling), and in which all adjacent surfaces meet at 90° – even if the room is not actually a cube. So why do they call it <em>cuboidal</em>? If all of the room’s surfaces were adjusted to the same dimensions the room would have six surfaces of equal size, all connecting at 90° – which <em>is</em> a cube. Hence a room with six surfaces and in which all adjacent surfaces connect at 90° can be described as <em>cuboidal</em> because it shares many of the same properties as a cube.</p>
<p>For the following shallow dive into room acoustics we’re going to assume a cuboidal room of rectangular shape (as used in the preceding illustrations), which is referred to as a <em>rectangular cuboidal room</em>. We’re also going to assume the room’s surfaces have sufficient mass and rigidity to reflect frequencies below 20Hz – a theoretically convenient assumption that is actually quite difficult and expensive to achieve in practice.</p>
<p>With those qualifying remarks out of the way, let’s take a closer look at the Three Rs of Acoustics…</p>
<h4><b>R1: Resonance</b></h4>
<p>At low frequencies, where the wavelength is relatively large within the room, the sound energy behaves like huge waves moving back and forth between the room’s surfaces. At these frequencies <em>resonant behaviour</em> dominates, resulting in <em>resonating frequencies</em> or simply <em>resonance</em>.</p>
<p>Resonance occurs at any frequency that has a half-wavelength equal to one or more of the room’s dimensions (it also occurs at frequencies that have a half-wavelength equal to diagonal combinations of the room’s dimensions, but we’re not going to get Pythagorean here). Every resonance repeats at integer multiples of its fundamental frequency, creating harmonic resonances up until the wavelength becomes relatively small within the room – at which point the sound behaviour transitions from <em>resonance</em> to <em>reflections</em> (more about <em>reflections</em> shortly).</p>
<p>When a frequency is resonating the positive peaks of its waveform will always occur at one place within the room, and the negative peaks of its waveform will always occur at another place within the room. As a result, the waveform appears to be standing still – hence it is referred to as a <em>standing wave</em>. It <em>appears</em> to be standing still, but the resonating sound energy is actually moving back and forth over itself, re-tracing and overlapping the same waveform with every repetition. The overlapping waveforms reinforce, resulting in SPL increases of up to +6dB at the positive and negative peaks. Meanwhile, the points where the waveform crosses the zero point (between the positive peaks and the negative peaks) represent areas of no SPL (-∞dB) and are essentially <em>nulls</em> for that frequency.</p>
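The +6dB peaks and the nulls of a standing wave can be modelled as the superposition of the wave and its overlapping reflection: the pressure amplitude doubles at the peaks (+6dB) and cancels at the nulls. A minimal sketch, assuming an idealised axial mode along one room dimension (the function name is illustrative):

```python
import math

def standing_wave_level_db(x_m: float, room_dim_m: float, harmonic: int = 1) -> float:
    """Relative SPL of an idealised axial-mode standing wave at position
    x_m along a room dimension, referenced to the travelling wave (0dB).
    The overlapping waveforms reinforce to +6dB at the peaks; the nulls
    between them go to -infinity."""
    amplitude = 2.0 * abs(math.cos(harmonic * math.pi * x_m / room_dim_m))
    if amplitude < 1e-12:  # treat floating-point residue as a true null
        return float("-inf")
    return 20.0 * math.log10(amplitude)

# Fundamental mode along a 10m dimension:
print(standing_wave_level_db(0.0, 10.0))              # +6dB against the wall
print(standing_wave_level_db(5.0, 10.0))              # -inf: null at the centre
print(standing_wave_level_db(5.0, 10.0, harmonic=2))  # +6dB: fr2 peaks mid-room
```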
<p>If we place the microphone in a positive or negative peak of a standing wave we’ll capture a sound in which notes at or near the resonant frequency will boom out over other notes, and we’ll need to use EQ to fix them. Conversely, if we place the microphone in a null of a standing wave we’ll capture a sound in which notes at or near the resonant frequency will be too soft compared to other notes, and, again, we’ll need to use EQ to fix them. Moving the microphone to different positions within the room will capture different balances of the peaks and nulls of the standing waves.</p>
<p>A cuboidal room has three ‘modes’ of resonant behaviour, known as the <em>axial modes</em>, the <em>tangential modes</em> and the <em>oblique modes</em>. The <em>axial modes</em> are the most significant and the most problematic, and are therefore the modes we’ll be focusing on in this shallow dive into resonance…</p>
<p>The <em>axial modes</em> occur between any two opposing surfaces, therefore a cuboidal room has three <em>axial modes</em>: one along the axis of the room’s length, one along the axis of the room’s width, and one along the axis of the room’s height. Each axial mode will have a <em>fundamental resonance</em> at the frequency that has a half-wavelength equal to the room dimension it occurs within (i.e. length, width or height). If we know the room’s dimension we can calculate the fundamental resonant frequency for the axial mode with the following formula:</p>
<p><i>fr</i><i><sub>1 </sub></i>= <em>v</em> / (2 x <em>d</em>) Hz</p>
<p>Where <i>fr</i><i><sub>1</sub></i> is the fundamental resonant frequency in Hertz, <em>v</em> is the velocity of sound propagation in air in metres per second (344m/s at 21°C), and <em>d</em> is the length of the room dimension in metres.</p>
<p>Let’s say a room has a length of 10m, a width of 5.8m and a height of 5m. Its fundamental axial mode resonances are shown in the illustration below:</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-4619" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-4619 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >…there’s more to acoustics than the direct sound and the reverberant sound.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-7261" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7261 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-4207660047]" href="https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="566" src="https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="21-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1-800x449.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1-768x431.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/06/21-pichi-1-600x337.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The SPL of each resonant mode is represented with coloured shading. Darker shading represents higher SPLs (up to +6dB against the walls), while the white area through the centre of each illustration represents a null in the SPL (-∞dB). The axial mode for the room’s length is represented in red, the axial mode for the room’s width is represented in green, and the axial mode for the room’s height is represented in gold. Each of these fundamental axial mode resonances will be accompanied by harmonic resonances that extend higher into the frequency spectrum until reaching a frequency where the sound behaviour transitions from <em>resonance</em> to <em>reflections</em>.</p>
<p>Contrary to popular belief, the surfaces do not have to be strictly parallel to create resonance; if they are facing each other and have sufficient mass and rigidity to reflect the sound energy, a resonance will occur between them. Hence they are usually referred to as ‘opposing surfaces’ rather than ‘parallel surfaces’. [Parallel reflective surfaces create the familiar ‘ping’ of <i>flutter echo</i>, which is a form of <i>reflection</i> not <i>resonance</i>.]</p>
<p>What does this have to do with Distance Factor? The following examples demonstrate the combined effects of Distance Factor, polar response and resonance. For these examples we will primarily focus on the axial mode resonances for the room’s length, with occasional reminders that the same process is occurring for the room’s width and height – each with its own fundamental resonance and harmonic resonances.</p>
<p>The illustration below is the same as the earlier illustration for Distance Factor except the reverberation (opaque blue) has been replaced with shades of red representing the SPL of the fundamental axial mode for the length of the room. As with the preceding illustration, darker shades of red represent higher SPLs for <i>fr</i><i><sub>1</sub></i> (up to +6dB), lighter shades of red represent lower SPLs for <i>fr</i><i><sub>1</sub></i>, and the white areas represent nulls where <i>fr</i><i><sub>1</sub></i> is theoretically non-existent (-∞dB). The intensity of the red shading seen within each polar response shows how much of <i>fr</i><i><sub>1</sub></i> will be captured by that polar response at that position.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-826051340]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="03-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/03-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The illustration below shows the 2<sup>nd</sup> harmonic of the fundamental axial mode resonance. Because it is the 2<sup>nd</sup> harmonic its frequency is 2 x <i>fr</i><i><sub>1</sub></i>, which we’ll call <i>fr</i><i><sub>2</sub></i>. As we can see, it is simply two repetitions of the SPL behaviour seen for <i>fr</i><i><sub>1</sub></i> (above) squeezed into the same space. The SPL is still boosted by +6dB against the walls, as it is with <em>all</em> of the resonant modes, but now there is also a +6dB boost in the centre of the room – the same place where <i>fr</i><i><sub>1</sub></i> was in a null. Furthermore, at <i>fr</i><i><sub>2</sub></i> there are two nulls (represented by the two white vertical lines through the red shading).</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-3440543903]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="04-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/04-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>As with the previous illustration, the intensity of the red shading seen within each polar response shows how much of <i>fr</i><i><sub>2</sub></i> will be captured by that polar response at that position. There is not much difference between the SPL of <i>fr</i><i><sub>2</sub></i> contained within each polar response (as indicated by the darkness of the red shading), but if we compare it against the previous illustration we can see that the cardioid, bidirectional and hemispherical polar responses are capturing significantly higher levels of <i>fr</i><i><sub>2</sub></i> than they are of <i>fr</i><i><sub>1</sub></i> – even though their placements haven’t changed. Their placements (which are based on using their Distance Factors to ensure each microphone captures the same balance of direct sound and reverberation) put them all very close to a null for <i>fr</i><i><sub>1</sub></i> but a peak for <i>fr</i><i><sub>2</sub></i>, so they’ll be capturing <em>significantly less</em> of <i>fr</i><i><sub>1</sub></i> but <em>significantly more</em> of <i>fr</i><i><sub>2</sub></i>.</p>
<p>If <i>fr</i><i><sub>1</sub></i> happened to be the frequency of an important note in the music, that note’s fundamental frequency would sound as if it has been cut with EQ while its second harmonic would sound as if it has been boosted with EQ – as would the fundamental frequency of the note an octave above it. If the notes at <i>fr</i><i><sub>1</sub></i> and <i>fr</i><i><sub>2</sub></i> were <i>performed</i> at the same level, they would be <i>perceived</i> as being different levels and/or tonalities depending on where the microphone was placed in the room.</p>
<p>This resonant behaviour is not limited to <i>fr</i><i><sub>1</sub></i> and <i>fr</i><i><sub>2</sub></i>, however. It will continue up the harmonic series at 3 x <i>fr</i><i><sub>1</sub></i> (i.e. <i>fr</i><i><sub>3</sub></i>), 4 x <i>fr</i><i><sub>1</sub></i> (i.e. <i>fr</i><i><sub>4</sub></i>), 5 x <i>fr</i><i><sub>1</sub></i> (<i>fr</i><i><sub>5</sub></i>), 6 x <i>fr</i><i><sub>1</sub></i> (<i>fr</i><i><sub>6</sub></i>) and so on, with each resonant mode creating its own series of peaks and nulls across the room’s length as shown below.</p>
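The harmonic series just described can be sketched numerically: each harmonic <i>n</i> resonates at n x <i>fr</i><sub>1</sub> and places its nulls at odd multiples of <i>d</i>/(2n) across the room dimension. A hedged Python sketch (function name illustrative, idealised modes assumed):

```python
def axial_mode_series(dimension_m: float, count: int, v: float = 344.0):
    """First `count` axial-mode resonances of one room dimension, paired
    with the positions (metres from one wall) of each mode's nulls.
    Mode n resonates at n x fr1 and has n nulls at odd multiples of d/(2n)."""
    fr1 = v / (2.0 * dimension_m)
    modes = []
    for n in range(1, count + 1):
        nulls = [(2 * k + 1) * dimension_m / (2 * n) for k in range(n)]
        modes.append((n * fr1, nulls))
    return modes

# 10m room length: fr1 = 17.2Hz with one null at 5m; fr2 = 34.4Hz with
# nulls at 2.5m and 7.5m; fr3 = 51.6Hz with three nulls; and so on.
for freq, nulls in axial_mode_series(10.0, 3):
    print(f"{freq:.1f}Hz  nulls at {nulls}")
```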

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2853001962]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="605" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="05-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi-800x480.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi-768x460.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/05-pichi-600x360.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>We can see from the illustrations above that even though the Distance Factor placements ensure each mic captures the same balance of direct sound and reverberation, the levels of the resonating frequencies could vary significantly between them and thereby affect the tonality of each microphone differently.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>As a matter of interest, unwanted resonances are controlled by the strategic placement of <i>tuned absorption</i>. This typically consists of a sealed enclosure with a port or diaphragm that’s designed to resonate at the frequency that needs to be absorbed, and is typically placed in a peak of the resonance’s SPL. The enclosure contains absorptive material to dissipate any sound energy resonating within it; that energy has, of course, been taken out of the room.</p>
<p>To understand why we need to use tuned absorption rather than a sheet of foam, we have to understand the scale of the problem. For example, let’s consider a resonance occurring at 20Hz. At a room temperature of 21°C, one cycle of 20Hz is 17.2m long and is travelling through the air at a velocity of 344 metres per second (that’s the velocity of sound propagation, aka ‘the speed of sound’, at 21°C). There are 20 of them occurring every second, and they’re all joined end-to-end to form the equivalent of an acoustic freight train travelling at the speed of sound through the room. No matter how much we <i>want</i> to believe the wish-casting for the cheap and easy ‘sheet of foam’ solution, the reality is that we’re not going to stop something that big and that fast by sticking a sheet of foam on a wall. That’s as futile as trying to stop a runaway bus by putting a line of traffic cones across the road. Resonance is just one part of the physics of sound, and that is why we have acousticians – but let’s get back to microphones…</p>
<h4><b>R2: Reflections</b></h4>
<p>As the frequency gets higher, its wavelength gets shorter, eventually becoming relatively small within the room. At these frequencies the sound becomes more directional. Rather than creating resonance, it behaves like a ray of light reflecting off a mirror and therefore <i>ray theory</i> applies.</p>
<p>The basic rule for reflections is:</p>
<p><i>Angle of Reflection</i> = <i>Angle of Incidence</i></p>
<p>If the sound energy hits the wall at, say, 45° to the left side of the reflection point, it will be reflected at 45° to the right side of the reflection point. Similarly, if the sound energy hits the wall at 30° to the left side of the reflection point, it will be reflected at 30° to the right side of the reflection point. The reflected sound energy will be the mirror image of the incident sound energy.</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-5161" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-5161 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Resonance is just one part of the physics of sound, and that is why we have acousticians…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-4955" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-4955 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-3254258957]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="06-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/06-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The most problematic reflections are known as the <i>first order reflections</i>, where ‘first order’ means they have reflected off one surface only before reaching the microphone or listener. They are usually the first reflections to reach the microphone or listener after the direct sound, which means they have travelled the shortest distances of all of the possible reflections within the room. Therefore they will have the highest SPLs <i>and</i> the shortest delay times of all of the possible reflections within the room – a combination that makes them the biggest risk for audible comb filtering problems.</p>
<p>The illustration below shows the first order reflections that would occur between a sound source (a speaker) and an omnidirectional microphone placed within a room. Note that this ‘floor plan’ illustration only shows the first order reflections coming from the walls; there will also be first order reflections from the floor, the ceiling and any other large reflective surfaces within the room, but they’re not shown here. Also note that in this example the speaker has been offset from the centre line of the room to make the individual reflections easier to identify. If the speaker and the mic were both on the central horizontal axis of the image it would be harder to distinguish the first order reflections coming from the walls at the left and right sides of the illustration because those reflections would overlap each other and form a single horizontal line running through the illustration.</p>
<p>We can see that each reflection (shown in blue) behaves the same as a beam of light reflecting off a mirror, or a billiard ball bouncing off a cushion – assuming it has no spin.</p>
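This mirror-like behaviour is the basis of the <i>image-source method</i>: mirror the sound source in each wall, and the straight-line distance from each mirror image to the microphone gives that first order reflection’s path length, and hence its delay behind the direct sound. A hedged floor-plan sketch in Python (the positions, wall labels and function name are hypothetical, not taken from the figure):

```python
import math

def first_order_wall_reflections(src, mic, room_lw):
    """Path lengths (m) and delays (ms after the direct sound) of the four
    first order wall reflections in a rectangular floor plan, computed with
    the image-source method: mirror the source in each wall and measure the
    straight-line distance from the image to the microphone."""
    sx, sy = src
    mx, my = mic
    L, W = room_lw
    v = 344.0  # velocity of sound propagation in m/s at 21 degrees C
    images = {
        "left wall":  (-sx, sy),          # mirrored in the wall at x = 0
        "right wall": (2 * L - sx, sy),   # mirrored in the wall at x = L
        "front wall": (sx, -sy),          # mirrored in the wall at y = 0
        "back wall":  (sx, 2 * W - sy),   # mirrored in the wall at y = W
    }
    direct = math.hypot(mx - sx, my - sy)
    out = {}
    for wall, (ix, iy) in images.items():
        path = math.hypot(mx - ix, my - iy)
        out[wall] = (path, (path - direct) / v * 1000.0)
    return out

# Hypothetical source and mic positions in a 10m x 5.8m floor plan:
for wall, (path, delay_ms) in first_order_wall_reflections((2, 2), (6, 3), (10, 5.8)).items():
    print(f"{wall}: {path:.2f}m path, +{delay_ms:.1f}ms after the direct sound")
```

Moving either the source or the microphone changes all four path lengths at once, which is why every microphone position captures a different set of early reflections.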

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-1368738982]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="07-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/07-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Collectively, the first order reflections – and some of the reflections that follow them before the sound in the room reaches the <i>onset of steady state energy</i> (see &#8216;R3: Reverberation&#8217; below) – are known as <i>early reflections</i>, and our ear/brain system uses them to create an impression of the size of the room and where we (or our microphones) are placed within it. What has this got to do with Distance Factor?</p>
<p>As we know, the Distance Factor allows us to determine how far we can place different polar responses from the sound source to ensure that each one captures the same balance of direct sound and reverberation. However, for any given room and sound source location, every different microphone position will capture a different set of early reflections. In the illustration below, a microphone with a hypercardioid polar response has been added to the previous illustration and placed twice as far from the sound source as the omnidirectional microphone, in accordance with the hypercardioid’s Distance Factor of 2.0. This placement ensures that the omnidirectional and hypercardioid polar responses will both capture the same balance of direct sound and reverberation. However, each captures a different set of early reflections with different arrival times and different SPLs. Despite having the same balance of direct sound and reverberation, the different early reflections will give the hypercardioid a different tonality to the omnidirectional while also creating a different sense of distance from the sound source <i>and</i> a different sense of location within the space.</p>
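<p>The placements described above are easy to tabulate. In this sketch the reference distance is hypothetical, and the Distance Factors are the commonly quoted values for ideal first-order polar responses (those not discussed above are included for completeness):</p>

```python
# Sketch: where to place each polar response so it captures the same balance
# of direct sound and reverberation as an omni at a reference distance.
DISTANCE_FACTOR = {
    "omnidirectional": 1.0,
    "subcardioid": 1.3,
    "cardioid": 1.7,
    "supercardioid": 1.9,
    "hypercardioid": 2.0,
    "bidirectional": 1.7,
}

omni_distance = 0.6  # metres from the sound source (hypothetical)

# Multiply the omni's distance by each Distance Factor
for polar, df in DISTANCE_FACTOR.items():
    print(f"{polar}: place at {omni_distance * df:.2f} m")
```

<p>At 0.6m for the omni, the hypercardioid lands at 1.2m (twice as far) and the cardioid at just over 1m, as per the illustrations.</p>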

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-839708417]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="08-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/08-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The following illustration is the same as the previous illustration <em>except</em> the hypercardioid has been replaced with a cardioid placed at 1.7 x the omnidirectional’s distance from the sound source, in accordance with the cardioid’s Distance Factor. Both polar responses will capture the same balance of direct sound and reverberation, but, as with the previous example, they will capture different sets of early reflections and therefore will have different tonalities along with different impressions of distance from the sound source and location within the space.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-1256699240]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="09-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/09-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The previous two illustrations make it easy to see the differences in the early reflections that arrive at each microphone. The further a reflection travels to reach a microphone, the later its arrival time and the lower its SPL (relative to the direct sound), and therefore the less potential it has to create comb filtering or other audible problems. However, we have not yet factored in the effect of the individual polar responses on the reflections they receive.</p>
<p>The illustration below is the same as the previous illustration but with unnecessary visual items removed (including the direct sound), and with each reflection numbered for easier identification. The figures in the bottom left corner show how much reduction each reflection receives due to the angle at which it enters the cardioid polar response. The dB figures are approximate and have been rounded up or down as appropriate for the sake of clarity. We can see that reflections one and two are the strongest; both arrive within the cardioid polar response’s Acceptance Angle (±60°) and will therefore have less than 3dB of reduction. Reflection three arrives at approximately 80° off-axis and will be reduced by about 5dB, while reflection four is reduced by about 25dB and is probably insignificant – especially if we factor in its loss of SPL due to the considerably longer distance it has travelled compared to the other reflections (we’ll save those mathematical gymnastics for a later instalment).</p>
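<p>For readers who’d like to check these figures, an ideal first-order cardioid attenuates off-axis sound according to gain = 0.5 × (1 + cos θ). A minimal sketch (the arrival angles are illustrative, not measured from the illustration’s geometry):</p>

```python
import math

def cardioid_gain_db(theta_deg):
    """Relative level (dB) of a sound arriving theta degrees off-axis
    at an ideal first-order cardioid: gain = 0.5 * (1 + cos(theta))."""
    gain = 0.5 * (1 + math.cos(math.radians(theta_deg)))
    return 20 * math.log10(gain)

# On-axis, the Acceptance Angle edge, and two illustrative off-axis angles
for angle in (0, 60, 80, 150):
    print(f"{angle:>3} deg off-axis: {cardioid_gain_db(angle):+.1f} dB")
```

<p>At 60° the reduction is about 2.5dB (inside the ±60° Acceptance Angle), at 80° it’s roughly 5dB, and well behind the mic it climbs past 20dB – consistent with the rounded figures above.</p>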

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1595296124081 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner vc_custom_1595990618195"><div class="wpb_wrapper">
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2580298520]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="10-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/10-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Importantly, the cardioid and bidirectional polar responses have the same Distance Factor, so putting them at the same distance from the sound source means both will capture the same balance of direct sound and reverberation. They’ll also receive the same early reflections, but will those reflections be captured in the same balance? No.</p>
<p>The illustration below replaces the cardioid polar response from the illustration above with the bidirectional polar response. As with the previous illustration, reflections one and two arrive within the bidirectional polar response’s Acceptance Angle (±45°) and will therefore have less than 3dB of reduction. Things are different with reflections three and four, however. Reflection three will receive about 15dB of reduction, while reflection four will receive less than a dB of reduction.</p>
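<p>The same check can be done for the bidirectional response, which follows gain = |cos θ|, with full pickup at 0° and 180° and complete rejection at the ±90° side nulls. A minimal sketch with illustrative angles:</p>

```python
import math

def bidirectional_gain_db(theta_deg):
    """Relative level (dB) of a sound arriving theta degrees off-axis
    at an ideal bidirectional (figure-8) response: gain = |cos(theta)|."""
    gain = abs(math.cos(math.radians(theta_deg)))
    if gain < 1e-9:
        return float("-inf")  # complete rejection at the side nulls
    return 20 * math.log10(gain)

# Acceptance Angle edge, two illustrative off-axis angles, and a side null
for angle in (0, 45, 80, 150, 90):
    print(f"{angle:>3} deg off-axis: {bidirectional_gain_db(angle):+.1f} dB")
```

<p>At 45° the reduction is 3dB (the edge of the ±45° Acceptance Angle), at 80° it’s about 15dB, while sounds arriving towards the rear lobe lose barely a dB – which is why reflections three and four fare so differently here than with the cardioid.</p>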

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-1056763153]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="11-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/11-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Switching from cardioid to bidirectional in this example has had little effect on reflections one and two, but a significant impact on the balance of reflections three and four. The two different polar responses, in the same location, capture the same balance of direct sound and reverberation, but different balances of the same early reflections.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-792755528]" href="https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="595" src="https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="12b-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi-800x472.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi-768x453.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/06/12b-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The illustration above shows what happens when we change the angle of the microphone. In this case, we have rotated the bidirectional polar response by 90°; the kind of thing that might happen when a well-meaning non-engineer mistakes your Royer R-121 for a shotgun mic and re-adjusts its angle to aim the end of the mic at the sound source. As we can see, the direct sound (green) has been completely rejected by the bidirectional’s side null, while the balance of the reflections it captures is significantly different. We’ll delve further into these kinds of on-axis and off-axis polar response calculations in a forthcoming instalment of this series.</p>
<p>As a matter of interest, unwanted reflections can be controlled by the strategic placement of <i>broadband absorption</i> to absorb the reflection (i.e. sticking sculpted open cell foam on the reflection point), or by the use of <i>appropriately angled surfaces</i> to re-direct the reflection elsewhere (angled walls, portable baffles and gobos with a reflective surface), or by the use of <i>diffusion</i> to scatter the reflected energy in numerous directions (irregular surfaces, quadratic residue diffusors, etc.). Reflections are just part of the physics of sound, and that is why we have acousticians – but let’s get back to microphones…</p>
<h4><b>R3: Reverberation</b></h4>
<p>If the sound source is continuous (let’s say the speaker is reproducing a 1kHz sine wave at a constant SPL) the sound energy will continue reflecting around the room, creating new pathways in accordance with the ‘angle of reflection = angle of incidence’ rule, until the reflections eventually fade out. Each reflection is like a billiard ball rolling across the table and bouncing off the cushions until it eventually dissipates all of the energy that made it start moving – at which point it has rolled to a stop. [The difference between the reflection and the billiard ball in this analogy is that the reflection does not slow down as it loses energy, it maintains the same velocity but loses SPL instead. Where the billiard ball rolls to a stop, the reflection fades to inaudibility.]</p>
<p>Eventually there will come a point where the sound energy travelling around the room is being dissipated ‘out’ of the room (by absorption, transmission through walls, and the Inverse Square Law) at the same rate that it is being put into the room by the sound source (e.g. a speaker). The overall sound energy in the room reaches a ‘break-even point’ where <i>energy in = energy out</i>, resulting in a consistent SPL. This point in time is known as the <i>onset of steady state energy</i>, and is the beginning of reverberant behaviour in the room. If the sound source suddenly stops we’ll clearly hear the characteristic sound of <i>reverberation</i> as hundreds, perhaps thousands, of individual reflections are absorbed out of the room – fading to silence one after another.</p>
<p>For any given room, the level of the reverberation and how long it takes to dissipate after the sound source stops (aka the <i>reverberation time</i> or <i>Rt</i><i><sub>60</sub></i>) is determined by the amount of absorption in the room. This includes the absorptive contribution of tuned absorbers that are designed to control resonances, the absorptive contribution of broadband absorption and diffusion that has been placed to control early reflections, the absorption of soft furnishings, and also the absorptive contribution (if significant) of people within the room. In most cases, additional absorption and/or diffusion will be placed around the room to achieve the desired <i>reverberation curve</i> (a graph of reverberation time versus frequency). Getting the correct reverberation curve for the room’s intended purpose is just part of the physics of sound, and that is why we have acousticians – but let’s get back to microphones…</p>
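<p>The classic first estimate of reverberation time is Sabine’s formula, Rt<sub>60</sub> ≈ 0.161 × V ÷ A, where V is the room volume in cubic metres and A is the total absorption (each surface’s area multiplied by its absorption coefficient). A minimal sketch with hypothetical room dimensions and coefficients:</p>

```python
# Sketch: Sabine's estimate of reverberation time, RT60 = 0.161 * V / A.
# The room dimensions and absorption coefficients below are hypothetical.

def rt60_sabine(volume_m3, surface_areas, coefficients):
    """Total absorption A = sum of (surface area x absorption coefficient)."""
    absorption = sum(s * c for s, c in zip(surface_areas, coefficients))
    return 0.161 * volume_m3 / absorption

# 6m x 5m x 3m room: floor, ceiling, and four walls
areas = [30, 30, 18, 18, 15, 15]            # m^2
coeffs = [0.3, 0.6, 0.1, 0.1, 0.1, 0.1]     # average absorption coefficients

print(f"estimated RT60: {rt60_sabine(90, areas, coeffs):.2f} s")
```

<p>In practice each coefficient varies with frequency, which is exactly why the reverberation curve is plotted against frequency rather than quoted as a single number.</p>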

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-3362" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-3362 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Reflections are just part of the physics of sound, and that is why we have acousticians…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2797" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2797 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4><b>R is for Revision</b></h4>
<p>So the Three Rs of Acoustics are <i>resonance</i>, <i>reflections</i> and <i>reverberation</i>. What do they have to do with Distance Factor?</p>
<p>We know that reverberation is used as a reference to define the Distance Factor, and we know that the reverberation’s SPL won’t change no matter where the microphone is placed within the room – assuming it is a true diffuse field.</p>
<p>We also know that, for any given location of the sound source within the room, different microphone locations will result in different balances of the direct sound, the resonances and the reflections<i> relative to the level of the reverberation</i>. As a matter of interest, it also works the other way: for any given placement of the microphone within the room, different locations of the sound source will result in different balances of the direct sound, the resonances and the reflections <i>relative to the level of reverberation</i> captured by the microphone. In other words, moving the microphone and/or the sound source will affect the balance of direct sound, resonance and reflections (relative to the reverberation) captured by the microphone.</p>
<p>This same acoustic behaviour has major ramifications in control room design because the placement of the monitor speakers and the placement of the listening position both have an impact on the monitored sound heard by the engineer. Getting those monitor and listening positions right is just part of the physics of sound, and that is why we have acousticians – but let’s get back to microphones…</p>
<h4><b>MICS ARE ACOUSTIC SUMMING MIXERS</b></h4>
<p>The illustration below shows an omnidirectional polar response placed at a distance <i>d</i> from the sound source. We can think of this microphone as a two-channel acoustic summing mixer with one input for the direct sound and one input for the reverberation.</p>
<p>The ‘fader’ for adjusting the level of the direct sound is the <em>distance</em> between the sound source and the microphone: changing the microphone’s distance from the sound source changes the level of the direct sound it captures without affecting the level of the reverberation.</p>
<p>The ‘fader’ for adjusting the level of the reverberation is the <em>Distance Factor</em>: for any given distance, changing the microphone’s Distance Factor will change the level of the reverberation it captures without affecting the level of the direct sound.</p>
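<p>Treating the microphone as a two-channel summing mixer, the two ‘faders’ can be sketched as follows (assuming ideal polar responses and a true diffuse field):</p>

```python
import math

def direct_level_db(distance_m, ref_distance_m=1.0):
    """Direct sound follows the Inverse Square Law: -6dB per doubling
    of distance, relative to the level at the reference distance."""
    return -20 * math.log10(distance_m / ref_distance_m)

def reverb_level_db(distance_factor):
    """Reverberation captured relative to an omni (Distance Factor 1.0):
    a higher Distance Factor captures less of the diffuse field."""
    return -20 * math.log10(distance_factor)

# 'Fader 1': halving the miking distance raises the direct sound ~6dB
print(f"halving the distance: {direct_level_db(0.5):+.1f} dB direct sound")
# 'Fader 2': swapping omni (DF 1.0) for hypercardioid (DF 2.0) at the
# same distance drops the captured reverberation ~6dB
print(f"omni to hypercardioid: {reverb_level_db(2.0):+.1f} dB reverberation")
```

<p>The two ‘faders’ are independent: distance moves only the direct sound, and Distance Factor moves only the reverberation.</p>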

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-4392" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-4392 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >Getting the correct reverberation curve for the room’s intended purpose is just part of the physics of sound, and that is why we have acousticians…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-7000" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-7000 
.icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-3197551777]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="13-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/13-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Sound engineering would be easy if we <i>only</i> had to work with the direct sound and the reverberation. Moving the microphone closer to the sound source would give us more direct sound relative to the reverberation, moving the microphone further from the sound source would give us less direct sound relative to the reverberation, and changing the microphone’s Distance Factor would allow us to increase or decrease the amount of reverberation at any given microphone location: a lower Distance Factor will capture more reverberation, and a higher Distance Factor will capture less reverberation. If we liked the direct sound of an instrument at a particular miking distance but wanted more or less reverberation, we could simply choose a polar response with a different Distance Factor.</p>
<p>The only practical situation that allows us to work primarily with the direct sound and the room’s reverberation is distant miking in a concert hall or similar venue. It’s an <i>acoustically large space</i>, big enough to ensure that a) all fundamental resonant frequencies are below the audible bandwidth, b) first order reflections from the walls and ceiling have to travel so far from the sound source to the microphone that they’re rarely a problem, and c) the first order reflections off the stage floor and into the room that some acoustic instruments <em>require</em> to reinforce their SPL in the room remain beneficial. In these idealised circumstances, changing the distance between the microphones and the sound source, and/or changing the microphones’ Distance Factors (by changing their polar responses), allows us to alter the balance of direct sound and reverberation. In the concert hall situation the Distance Factor in practice approaches the Distance Factor in theory – at least as far as we’ve defined it here.</p>
<p>Apart from concert halls and similar acoustically large spaces, we are often limited to working in <i>acoustically small spaces</i> where resonance and reflections can be problematic.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>To represent a frequency properly we must be able to fit at least half a wavelength of that frequency into the room. Therefore, for the purposes of this discussion, we will consider a room to be ‘acoustically small’ if any of its dimensions (i.e. length, width or height) is less than 8.6m and therefore cannot fit a half-wavelength of 20Hz – which is the lowest frequency of interest for most audio applications. A room may seem big to us as human beings, but if one or more of its dimensions is less than 8.6m it will be sonically claustrophobic to sound energy at 20Hz (remember, one cycle of 20Hz is 17.2m long in the air). In acoustically small spaces we can expect audible room resonances that will require tuned absorption to control them. We can also expect problems from first order reflections because they have travelled relatively short distances and will therefore have short arrival times accompanied by significant SPLs, meaning we need to listen carefully for comb filtering problems and a general ‘roomy’ or ‘boxy’ tonality. Broadband absorption and/or diffusion will be required to control the first order reflections, and more will then be added to bring the room’s reverberation curve to specification.</p>
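<p>The ‘acoustically small’ test described above can be sketched as a few lines of Python. This is a minimal illustration of the half-wavelength rule only; the function name and the 344m/s speed of sound (typical room-temperature air) are assumptions for the example, not part of the original text.</p>

```python
# Half-wavelength test for an 'acoustically small' room:
# a room is acoustically small if any dimension cannot fit
# half a wavelength of the lowest frequency of interest.
SPEED_OF_SOUND = 344.0  # m/s, approx. room-temperature air (assumed)

def is_acoustically_small(length_m, width_m, height_m, f_low=20.0):
    """True if any room dimension is shorter than a half-wavelength of f_low."""
    half_wavelength = (SPEED_OF_SOUND / f_low) / 2.0  # 8.6m at 20Hz
    return min(length_m, width_m, height_m) < half_wavelength

# A 10m x 7m x 3m room seems big to us, but its 3m height
# is sonically claustrophobic to 20Hz energy.
print(is_acoustically_small(10.0, 7.0, 3.0))  # True
```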
<p>Most multitrack studios that are designed for recording popular music use acoustically small rooms. The engineer deals with the above-mentioned problems by a) careful placement of instruments and microphones within the room to minimise the effects of resonant modes, b) close-miking techniques with directional microphones to minimise the capture and significance of off-axis sounds, and c) the use of portable gobos (used as isolators, absorbers, diffusors and/or deflectors) to prevent first order reflections and spill from reaching the microphones. Similar close-miking and isolation techniques are used on stage when providing sound reinforcement. We’ll be looking at these studio and stage techniques in a forthcoming instalment of this series.</p>
<p>In most of the above cases (the concert hall and the multitrack recording studio) there has been acoustic treatment applied to control some or all of the Three Rs, which can make a significant improvement if done right and might even make many of the above-mentioned problems go away.</p>
<h4><strong>SUMMING IT ALL UP…</strong></h4>
<p>Let’s get back to a worst-case scenario where there has been no acoustic treatment. The illustration below is the same as the previous illustration, but the opaque blue background that represents the room’s reverberation has now been overlaid with the fundamental resonance of the room’s length, <i>fr</i><i><sub>1</sub></i>, in red, and the first order reflections from the sound source to the microphone in dark blue.</p>
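<p>For readers who want to put numbers on the resonances discussed here: the fundamental and harmonics of a single room dimension (the axial modes <i>fr</i><i><sub>1</sub></i>, <i>fr</i><i><sub>2</sub></i>, <i>fr</i><i><sub>3</sub></i>…) follow the standard relationship <i>fr</i><i><sub>n</sub></i> = <i>n</i> × <i>c</i> / 2<i>L</i>. The sketch below is illustrative only; the 6m room length and 344m/s speed of sound are assumed values, not figures from this article.</p>

```python
# Axial mode frequencies for one room dimension: fr_n = n * c / (2 * L).
# The fundamental fr_1 is the frequency whose half-wavelength
# exactly fits the dimension; harmonics are integer multiples.
SPEED_OF_SOUND = 344.0  # m/s (assumed)

def axial_modes(dimension_m, count=3):
    """Return the first `count` axial mode frequencies (Hz) for a dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m) for n in range(1, count + 1)]

# Hypothetical 6m room length: fundamental plus 2nd and 3rd harmonics.
print(axial_modes(6.0))  # roughly [28.7, 57.3, 86.0] Hz
```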

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-2345" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2345 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >... we will consider a room to be ‘acoustically small’ if any of its dimensions (i.e. 
length, width or height) is less than 8.6m…</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-2911" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-2911 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-1727871092]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="14-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/14-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>From this we can represent the microphone as a three-channel acoustic summing mixer with inputs for direct sound, early reflections, and reverberation, along with an equaliser on the output to represent the effects of resonances. Where we place the microphone, which Distance Factor we choose, which polar response and which angle we use <em>all</em> affect how those three inputs will be mixed and how much the resonances will affect the low frequency spectrum of the signal presented at the output of the microphone.</p>
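<p>The ‘three-channel acoustic summing mixer’ analogy above can be expressed as a toy calculation: the microphone’s output is a weighted sum of direct sound, early reflections and reverberation, with the room resonances acting like an EQ on the result. All gain values below are hypothetical placeholders chosen for illustration; in reality they are set by mic position, polar response, angle and Distance Factor.</p>

```python
# Toy model of the microphone as a three-input summing mixer with an
# EQ on its output. The gains and EQ are stand-ins for the effects of
# placement, polar response and room resonance (hypothetical values).
def mic_output(direct, early_reflections, reverb,
               g_direct=1.0, g_early=0.5, g_reverb=0.3,
               resonance_eq=lambda x: x):
    """Sum the three 'inputs', then apply the resonance 'EQ'."""
    mixed = g_direct * direct + g_early * early_reflections + g_reverb * reverb
    return resonance_eq(mixed)

# Moving the mic or changing polar response = changing the gains;
# moving within the room's modes = changing resonance_eq.
print(mic_output(1.0, 1.0, 1.0))  # 1.8
```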

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-178800896]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="991" height="337" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="15-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi.jpg 991w, https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi-800x272.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi-768x261.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/15-pichi-600x204.jpg 600w" sizes="(max-width: 991px) 100vw, 991px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Note that the input for the early reflections also includes ‘indirect sound’, which allows for other sounds occurring in the room that are arriving off-axis (other musical instruments, audience noise, etc.) and which we might want to minimise by choosing a polar response that puts a rejection null facing the unwanted sound source.</p>
<p>[If we wanted to get hyper-detailed with this ‘summing mixer’ analogy we could include separate inputs for each of the first order reflections and for each source of indirect sound/spill, allowing us to control their individual levels based on their angle of incidence to the microphone’s polar response. We don’t need to do that here, but we will be calculating these types of things in a forthcoming instalment of this series.]</p>
<p>It is important to remember that knowing a microphone’s Distance Factor does not <i>always</i> mean you can assume its polar response. The cardioid polar response and the bidirectional polar response both have the same Distance Factor of 1.7, but have very different rejection nulls and acceptance angles. As we saw earlier, if placed in the same position in the room, each will capture the same balance of direct sound and reverberation but a different balance of early reflections and other indirect sounds.</p>
<p>The illustration below adds a second mic – a hypercardioid – on the same axis as the omnidirectional but at twice the distance from the sound source, as shown in an earlier illustration. Because the hypercardioid has been placed in accordance with its Distance Factor of 2.0 (i.e. at twice the distance of the omnidirectional from the sound source), it will capture the same balance of direct sound and reverberation as the omnidirectional, but, as we can see in the illustration, there will be slightly different levels of resonance (represented as different shades of red within each polar response) and a significantly different set of early reflections with their own arrival times and SPLs.</p>
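<p>The placement rule used in that illustration can be written out directly: a microphone with Distance Factor <i>n</i> captures the same balance of direct sound and reverberation as an omnidirectional at 1/<i>n</i> of its distance. The sketch below uses only the Distance Factors quoted in this instalment (omni 1.0, cardioid and bidirectional 1.7, hypercardioid 2.0); the function name is an assumption for the example.</p>

```python
# Equivalent miking distances for the same direct/reverb balance,
# using the nominal Distance Factors quoted in this series.
DISTANCE_FACTORS = {
    "omnidirectional": 1.0,
    "cardioid": 1.7,
    "bidirectional": 1.7,
    "hypercardioid": 2.0,
}

def equivalent_distance(omni_distance_m, polar_response):
    """Distance at which `polar_response` captures the same direct/reverb
    balance as an omnidirectional placed at omni_distance_m."""
    return omni_distance_m * DISTANCE_FACTORS[polar_response]

# Omni at 0.5m -> hypercardioid at twice the distance, as illustrated.
print(equivalent_distance(0.5, "hypercardioid"))  # 1.0
```

<p>Note that, as the article points out for the cardioid and bidirectional responses, an equal Distance Factor guarantees the same direct/reverb balance but says nothing about which early reflections and indirect sounds are captured.</p>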

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-1187210152]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="16-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/16-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>The illustration below is the same as the previous illustration but with the 2<sup>nd</sup> harmonic of the resonance (<i>fr</i><i><sub>2</sub></i>) overlaid in place of the fundamental (<i>fr</i><i><sub>1</sub></i>). Both mics capture the same balances of direct sound, <i>fr</i><i><sub>1</sub></i>, reverberation and early reflections as seen in the previous illustration, but we can see that the levels of <i>fr</i><i><sub>2</sub></i> in both mics will be slightly higher than the levels of <i>fr</i><i><sub>1</sub></i> shown in the previous illustration, meaning notes at or close to the frequency of <i>fr</i><i><sub>2</sub></i> will be louder than notes at or close to the frequency of <i>fr</i><i><sub>1</sub></i> in the output signal from the microphone.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-62428032]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="17-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/17-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
<p>The illustration below is the same as the previous two illustrations but with the 3<sup>rd</sup> harmonic of the resonance (<i>fr</i><i><sub>3</sub></i>) overlaid in place of the fundamental (<i>fr</i><i><sub>1</sub></i>) and the 2<sup>nd</sup> harmonic (<i>fr</i><i><sub>2</sub></i>). Both mics capture the same balances of direct sound, <i>fr</i><i><sub>1</sub></i>, <i>fr</i><i><sub>2</sub></i>, reverberation and early reflections as they did in the previous illustrations, but we can see that the levels of <i>fr</i><i><sub>3</sub></i> in both mics will be significantly higher than the levels of <i>fr</i><i><sub>1</sub></i> and <i>fr</i><i><sub>2</sub></i> shown in the previous illustrations. The omnidirectional microphone is almost on top of a peak in <i>fr</i><i><sub>3</sub></i>, while the hypercardioid is also capturing a much stronger level of <i>fr</i><i><sub>3</sub></i> compared to the levels of <i>fr</i><i><sub>1</sub></i> and <i>fr</i><i><sub>2</sub></i> that it is capturing.</p>

		</div>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_center  wpb_animate_when_almost_visible wpb_fadeInLeft fadeInLeft">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2303419286]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="1009" height="596" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="18-pichi" loading="lazy" srcset="https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi.jpg 1009w, https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi-800x473.jpg 800w, https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi-768x454.jpg 768w, https://www.audiotechnology.com/wp-content/uploads/2023/05/18-pichi-600x354.jpg 600w" sizes="(max-width: 1009px) 100vw, 1009px" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<p>Moving either of the microphones in the above illustrations to different positions within the room will result in different balances of direct sound, resonances and reflections <i>relative to the level of the reverberation</i> – as will changing the polar response or even simply changing the direction the microphone is facing. The microphone is, essentially, a passive summing mixer with an EQ on the output, and we control the mix and the EQ by where we place the microphone in the room and which polar response and Distance Factor we choose.</p>
<h4><b>SUMMARY</b></h4>
<p>Distance Factor is a simple yet binding concept that brings together a significant amount of audio theory and practice in a way that few other audio concepts can. There’s not a lot to say <i>about</i> the Distance Factor, but there’s a lot to say <i>around</i> it.</p>
<p>The placement of the sound source within the room determines which resonances and reflections it creates, and where we place the microphone determines which resonances and reflections it captures. The polar response determines how much of the early reflections and indirect sound the microphone captures, and the Distance Factor determines how much reverberation it captures. Small changes in distance, angle, polar response and Distance Factor can make big differences to the captured sound.</p>
<p>The next time you’re making a slight change to a microphone’s distance or angle and some dumbass musicians cynically scoff “as if that’s going to make any difference”, remember those same dumbasses are paying you to do something they cannot do – capture an acceptable sound quickly and consistently based on an understanding of the complex relationships between the sound source, the polar response, the resonances, the reflections and the reverberation. That is why we are sound engineers – but let’s get back to microphones…</p>

		</div>
	</div>
</div></div></div><div class="wpb_animate_when_almost_visible wpb_fadeInRight fadeInRight wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1679444872148"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-open" ></i></div><div class="icon_description" id="Info-list-wrap-1121" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-1121 .icon_description_text'  data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div><h2 style="text-align: left;font-family:Playfair Display;font-weight:700;font-style:normal" class="vc_custom_heading" >There’s not a lot to say <em>about</em> the Distance Factor, but there’s a lot to say <em>around</em> it.</h2><div class="smile_icon_list_wrap ult_info_list_container ult-adjust-bottom-margin   vc_custom_1683167741851"><ul class="smile_icon_list left square with_bg"><li class="icon_list_item" style=" font-size:150px;"><div class="icon_list_icon" data-animation="" data-animation-delay="03" style="font-size:50px;border-width:1px;border-style:none;background:rgba(255,255,255,0.01);color:#0c0c0c;border-color:#333333;"><i class="icomoon-serif-quote-close" ></i></div><div class="icon_description" id="Info-list-wrap-1099" style="font-size:50px;"><div class="icon_description_text ult-responsive"  data-ultimate-target='#Info-list-wrap-1099 .icon_description_text'  
data-responsive-json-new='{"font-size":"desktop:13px;","line-height":"desktop:18px;"}'  style=""></div></div><div class="icon_list_connector"  style="border-right-width: 1px;border-right-style: dashed;border-color: #333333;"></div></li></ul></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div><div class="vc_separator wpb_content_element vc_separator_align_center vc_sep_width_100 vc_sep_pos_align_center vc_separator_no_text vc_sep_color_grey" ><span class="vc_sep_holder vc_sep_holder_l"><span class="vc_sep_line"></span></span><span class="vc_sep_holder vc_sep_holder_r"><span class="vc_sep_line"></span></span>
</div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-8"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_row wpb_row vc_inner vc_row-fluid vc_row-o-equal-height vc_row-o-content-middle vc_row-flex"><div class="wpb_column vc_column_container vc_col-sm-3"><div class="vc_column-inner vc_custom_1683167685311"><div class="wpb_wrapper"><div class="vc_icon_element vc_icon_element-outer vc_custom_1665096333010 wpb_animate_when_almost_visible wpb_slideInLeft slideInLeft vc_icon_element-align-right"><div class="vc_icon_element-inner vc_icon_element-color-custom vc_icon_element-size-lg vc_icon_element-style- vc_icon_element-background-color-grey" ><span class="vc_icon_element-icon far fa-hand-point-right" style="color:#ff4d21 !important"></span></div></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-9"><div class="vc_column-inner"><div class="wpb_wrapper"><h2 style="color: #44ddd8;text-align: left;font-family:Abril Fatface;font-weight:400;font-style:normal" class="vc_custom_heading wpb_animate_when_almost_visible wpb_bounceInRight bounceInRight" ><a href="https://www.audiotechnology.com/regulars/ribbon-microphones" target="_blank">Next instalment: Off-Axis Response (coming soon)</a></h2></div></div></div></div></div></div></div><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-12"><div class="vc_column-inner"><div class="wpb_wrapper"><div class="vc_empty_space"   style="height: 24px"><span class="vc_empty_space_inner"></span></div></div></div></div></div><div class="vc_row wpb_row vc_row-fluid vc_row-o-content-middle vc_row-flex"><div class="wpb_column 
vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div><div data-vc-full-width="true" data-vc-full-width-init="false" class="vc_row wpb_row vc_row-fluid vc_custom_1685321331265 vc_row-has-fill"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div  class="wpb_single_image wpb_content_element vc_align_left">
		
		<figure class="wpb_wrapper vc_figure">
			<a class="" data-lightbox="lightbox[rel-76268-2867724160]" href="https://www.audiotechnology.com/wp-content/uploads/2023/05/19_BG-pichi.jpg" target="_self" class="vc_single_image-wrapper   vc_box_border_grey"><img width="323" height="511" src="https://www.audiotechnology.com/wp-content/uploads/2023/05/19_BG-pichi.jpg" class="vc_single_image-img attachment-full" alt="" decoding="async" title="19_BG-pichi" loading="lazy" /></a>
		</figure>
	</div>
</div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper">
	<div class="wpb_text_column wpb_content_element " >
		<div class="wpb_wrapper">
			<h4><b>SCALING THE DISTANCE FACTOR</b></h4>
<p>All of the polar responses shown throughout this instalment have been reproduced to scale, where the 0° on-axis point represents the same level of 0dB. If placed in the same position, and on-axis to the same sound source, each will produce the same output level from the direct sound (assuming they all had the same Sensitivity and were given the same gain). The size differences in their visual representations therefore indicate how much indirect sound, reflections and reverberation each polar response captures compared to the level of direct sound.</p>
<p>The lobar/shotgun polar response <i>looks</i> considerably smaller than the omnidirectional polar response because it captures considerably less of the surrounding sound field, but if both mics had the same Sensitivity and were placed at the same distance from the sound source, and if both mics&#8217; preamps were set to the same gain, both mics would capture and output the same level of the direct sound.</p>
<p>When viewed to scale this way, we can see that as the polar responses become more directional they don’t really become <i>more</i> sensitive to on-axis sound; rather, they become <i>less</i> sensitive to off-axis sounds. And <i>that’s</i> what the Distance Factor tells us…</p>

		</div>
	</div>
</div></div></div></div><div class="vc_row-full-width vc_clearfix"></div><div class="vc_row wpb_row vc_row-fluid"><div class="wpb_column vc_column_container vc_col-sm-2"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-6"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div><div class="wpb_column vc_column_container vc_col-sm-4"><div class="vc_column-inner"><div class="wpb_wrapper"></div></div></div></div>
</section><p>The post <a rel="nofollow" href="https://www.audiotechnology.com/tutorials/microphones-polar-response-2">Microphones: Polar Response 2</a> appeared first on <a rel="nofollow" href="https://www.audiotechnology.com">AudioTechnology</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.audiotechnology.com/tutorials/microphones-polar-response-2/feed</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
