The End of the Loudness War
This article first appeared on ProRec in October of 2009 (archive.org)
It’s been almost seven years since Over the Limit was published, and very little has changed in the world of mastering.
Sadly, the Loudness Wars continue unabated. Last year, Metallica’s Death Magnetic drew worldwide ire for its “sounds like dogshit” mastering job – and worldwide laughs when fans started ripping the better-mastered version of the music off the Guitar Hero game. When the video game version sounds better than the actual CD, you know something has gone terribly wrong. That story should silence anyone who still claims that over-limited music is just a “matter of taste”. When it’s too distorted for a Metallica fan….
In this article I will tell you how the Loudness War will end, once and for all.
But first, some psychology.
The Pepsi Challenge
In Blink: The Power of Thinking Without Thinking, Malcolm Gladwell reveals an interesting story about the problem of fragile judgments.
We’re all familiar with the Pepsi Challenge: random consumers are asked to judge between two colas in a blind taste-test, and a majority choose Pepsi over Coke.
The Pepsi Challenge was started in the 1970s, and almost everyone in America has seen it. And, yet, decades later, Coke is still more popular than Pepsi.
Many would argue that this proves that clever marketing can convince consumers to choose an inferior product. But marketing studies prove that cola marketing does almost nothing to cause people to switch brands. It just causes people to consume more of their favorite cola. When Coke rolls out a new, effective advertisement, the effect isn’t to convert Pepsi drinkers to Coke. It is to sell more Coke to Coke fans.
It turns out that there is another, more important reason for Pepsi’s superiority in taste tests and inferiority in market share.
Pepsi is sweeter than Coke, and people will typically prefer the sweeter product when given a small taste. In the Pepsi Challenge, people don’t guzzle a 32-ounce Pepsi and then follow with a 32-ounce Coke. They get a small sip of each.
Gladwell then points out that this test is only meaningful if people typically only drink a sip of cola at a time. Of course, anyone who has been to a convenience store in the last thirty years knows that, in the US, the individual serving size for cola is a vat. It turns out that when you ask people to drink a large quantity of cola, a majority do not prefer the sweeter cola after all.
This should all be common sense for anyone upon reflection. Something extremely sweet, or salty, or spicy really zings your pleasure centers. This grabs your attention. But gorge yourself on something particularly sweet, or salty, or spicy, and you’ll feel overwhelmed by it relatively quickly.
Coca-Cola didn’t get that point. Their answer to the Pepsi Challenge was to introduce a product that would beat Pepsi at its own game – a product that has become synonymous with Getting It All Wrong: New Coke. New Coke beat original Coke **AND** Pepsi in double-blind sip tests. And yet, a product launched on the strength of double-blind taste-test science was a complete and utter failure.
Think about it. Isn’t it fascinating that something as “scientific” as a double-blind taste test could actually produce such a flawed outcome? And, yet, there it is. What often seems to be sound science is, in fact, misinformation that drives bad decisions.
Back to the War at Hand
What has this to do with the Loudness War? Everything, it turns out.
When a listener compares two mastering jobs, the first and most obvious difference between the two is loudness. It is a well documented fact that, below the threshold of pain, listeners will almost always prefer the louder version if volume is the only difference (the reasons have to do with the equal loudness contour and other psychological factors).
Engineers have known this for years. When an engineer thinks they’ve got the mix just right, and the artist or producer criticizes some aspect of the mix – say, the vocals aren’t loud enough – the engineer will pretend to twiddle a knob or two and bump up the master fader a couple of dB. “How’s that?” asks the mixer. The client nods. All better now. The mix is the same. The only thing that changed is that the volume was turned up a little.
Radio stations have also known about the importance of being loud for years. Of course, one reason radio stations compress and limit the music to make it louder is because a hotter signal overcomes background noise better. But the real reason is that when a listener is surfing channels, a loud station is more likely to get them to start listening.
But remember the Pepsi Challenge. It was a valid test, so long as the typical serving size was one sip. Likewise, overcompressed music is often preferable, as long as the listening duration is brief. When music is overlimited and lacks dynamics, it causes listener fatigue. The brain starts treating it like noise, and attempts to filter it. This is stressful on the brain, causes low levels of anxiety, and stimulates the listener to get away from the distressing sound.
So, in a blind test, comparing a few seconds of the same song, listeners will prefer the one that is overcompressed and “hot”. But that loudness will also cause the same listener to be more likely to turn the song down (or off), probably before the song is over.
Over the last couple of years I have involved myself much more closely in the process my clients undertake when selecting mastering engineers. Most attempt to be rigorous and scientific about their selection, and A/B the music to compare it.
The problem is, like the Pepsi Challenge, the natural A/B test – which seems like such a good way to compare two mixes – is a flawed test. Listeners pick a song, and listen to a few seconds of Master 1, then a few seconds of Master 2, then back to 1, then back to 2, and so forth. It might tell them which version “jumps out at them more” but isn’t likely to tell them which one a listener will still be listening to after an hour.
Solutions Proposed, Denied
Since 2002 many solutions have been proposed to deal with the Loudness War. They have almost all fallen on deaf ears.
Various groups like TurnMeUp! offer some sort of certification for mastering engineers and the material they produce. To qualify, mastering engineers must demonstrate that their product meets a minimum qualification for dynamic range. Certified engineers and products are entitled to carry a label certifying that they meet the group’s standards. Other groups promote educational seminars and advertising as a solution.
The Pleasurize Music Foundation has an aggressive and comprehensive campaign that includes a special dynamic range rating system, a freeware metering tool that can be used to analyze dynamic range and produce a certified numerical rating, and an array of educational and promotional material to try to change the music world. Pleasurize Music also hopes to influence the development of a Blu-Ray audio standard.
Although I disagree with many of their strategies and ideas, ProRec supports all of these organizations. We support any organization attempting to bring sanity back to the field of mastering.
But none of them are doing any good. Nor will they.
The problem is that they all focus on attempting to convince the listener that Louder is Not Better. I suggest that the listener is not to blame, and thus the problem cannot be solved by trying to convince the listener of anything. This is not a problem driven by the consumer. Consumers have clearly spoken: they don’t like the overcompressed music.
Besides, the only way for a consumer-based approach to work would be if different forms of mastering were provided – say, the TurnMeUp! version and the TurnMeDown! version – and consumers were allowed to choose. As long as only one version is offered, which is (and should be) the case for 99.9% of all recordings ever made, consumers don’t have a choice so there’s no point putting a label on the disc. It doesn’t differentiate the product.
Nor is this a problem driven by the artist. Or, rather, if the artist is to blame, then it isn’t a problem. After all, the product reflects the artist’s intentions. When Billy Corgan said of Zwan’s Mary Star of the Sea, “We set out to make the loudest fucking rock and roll album that was humanly possible,” then he is making an artistic choice, just like the choice to scream his lyrics instead of croon. It’s a totally valid choice and I support any artist who makes that choice, even if I find the music annoying and don’t want to listen to it, which sometimes happens.
So trying to fix this problem by aiming education, catchy slogans, and cool certification labels at artists and mix engineers is a complete waste of time. If the artist and mix engineer wanted it annoyingly loud, then such information, education, and certification completely misses the point. And if they didn’t want it annoyingly loud, well, chances are that they were cut out of the loop anyway.
No, the Loudness War is not driven at the top of the production pipeline (by the artist) or at the bottom of the pipeline (by the consumer). It’s being driven somewhere in the middle.
The Label is the Culprit. The Mastering Engineer, Too.
In Over the Limit I tried to walk a politically correct line, and lay the blame for the loudness war at the feet of the record labels. To be sure, they’ve earned most of the blame, if only because they are ultimately responsible for the product. But individual mastering engineers are also to blame.
Let’s explore further.
I experience the Loudness War on a regular basis, as finished mixes leave my studio and go into the hands of mastering engineers, sometimes coming back horribly disfigured. Over time I have discovered mastering engineers who I think “get it” and I steer record labels and clients their way, but sometimes other engineers get their hands on my mixes and try to win the Loudness War, and it makes me cringe – or cry – every time.
Note that nobody gave these engineers the direction to try to make the loudest fucking record ever made. They just took it upon themselves. We can’t blame the record label for that.
Strangely, it is often a very hit-or-miss venture. When my studio sent a decidedly 70’s-retro album for mastering at one of the world’s most distinguished mastering facilities, it came back wrecked beyond all belief. In case you think it’s a matter of my Louder-is-not-Better taste, let me point out that there were, in fact, technical problems with the mastering, including the fact that while the RMS level on the disc was around -8 dBFS, the peaks were limited at -2.3 dBFS. If you understand the preceding terms, then you know that the master was screwed up. After substantial back-and-forth with the engineer, and a couple of re-dos, it came back beautiful.
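For readers less familiar with those terms, here is a minimal, illustrative sketch of what peak and RMS levels measure and why those particular numbers flag a problem. (This uses a plain RMS over all samples; real meters use standardized windows and weighting, and the 12–20 dB range quoted in the comment is a rough rule of thumb, not a spec.)

```python
import math

def peak_dbfs(samples):
    """Peak level: the single loudest sample, relative to digital full scale."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS level: the average energy, a rough proxy for perceived loudness."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# Even a pure sine wave keeps about 3 dB between its RMS and its peaks.
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
print(round(peak_dbfs(sine) - rms_dbfs(sine), 1))  # 3.0

# The master described above: RMS around -8 dBFS, peaks limited at -2.3 dBFS,
# leaving a crest factor of only 5.7 dB. A dynamic mix typically keeps
# something on the order of 12-20 dB between its average level and its peaks.
print(round(-2.3 - (-8.0), 1))  # 5.7
```

When the gap between peak and RMS shrinks toward what a bare sine wave exhibits, the dynamics have been crushed out of the program material.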
What happened? Did the engineer suddenly grow new ears and better taste? Did he switch off the “Suck” button? Hard to say. Warning lights were flashing down at Quality Control, for sure.
But here’s the deal – the artist, the label, and the producer were ready to sign off on the original master! They couldn’t tell there was a problem. I could hear the problem within literally five seconds of listening. Hell, I could look at the VU meters and tell that there was no way I would accept this mastering job. Fortunately for all involved, I had inserted myself deeply into this project and called “veto” on the master before the label rushed it into production.
Such vetoes by the mix engineer are rare. When I work with individual artists, they’ll often ask my opinion on the masters, but then they’ll make the final call. When I work with a label, it has been my frequent experience that none of the artists, engineers, or producers are involved in the mastering process. The label typically has someone that they use, the project goes to that person, and then to manufacturing.
And forget about trying to target mastering engineers with “education”. Good mastering engineers have known about the problem for years and hate it, but are being driven to produce ever-louder CDs anyway. Bad mastering engineers are, well, bad, and don’t want to hear your so-called “education”.
A good mastering engineer groks the artistic intention of the source material and tailors the mastering to suit the material. For example, I’ve sent several projects to Dave McNair (now with Sterling Sound) with absolutely stellar results. One example is John Lefler’s masterpiece powerpop solo debut Better by Design, which was a labor of recording and mixing love, where Dave took what I considered to be a perfect recording and made it even more perfect. I point all this out because Dave McNair has also mastered some Seriously Loud Shit, when it was appropriate. In other words, he doesn’t view loudness – or dynamics – as ends in themselves, but rather understands that compression is one tool among many to make a recording “all that it can be.”
Now, it’s entirely possible that Dave’s work would disqualify him from being part of some of these LouderIsNotBetter cliques, because he does sometimes put out some Seriously Loud Shit. What an insult to Dave that he would be excluded from such a group. How could such an organization maintain credibility?
Most people have just given up on winning the Loudness War, maintaining that it’s just unwinnable – that it is an unavoidable consequence of a broken recording industry and a failed consumer market that prefers MP3s to CDs and DVD-As.
I have reached a far different conclusion.
The End of the Loudness War
I am prepared – not to continue to Fight the Good Fight against overcompressed mastering – but instead to lay down my arms and go home. The Loudness War is not only winnable, but it has already been made moot, although the fighting will continue for several years and it may be another decade before “Overcompression for Loudness Sake” stops altogether.
The Loudness War is going to end because it has been rendered technically obsolete. Not thanks to some propellerhead technology like Blu-Ray. No, the technology that is going to end the war has been around for years and is already widely accepted. The Loudness War is not going to end thanks to a major educational campaign. No education will be needed. Fancy logos and certifications might be helpful but they aren’t necessary, and the ones we have now aren’t the right ones.
No, if all we do is sit back and do absolutely nothing, this problem will go away within the next decade. Because that’s how long it’s going to take for the compact disc to become obsolete. Not to be replaced by some new DVD or Blu-Ray audio disc, either, but replaced by the very technology that has been making physical media obsolete for a decade: downloadable music.
In short, the MP3.
As far back as 1999 – two years before the iPod made the MP3 a truly viable alternative – I was laughing at the pro audio industry for trying to push the DVD-A. DVD-A was trying to solve a problem that didn’t exist – the “CDs don’t sound good enough” problem. The explosion of MP3 onto the music scene demonstrated that not only were CDs just fine for most listeners, but actually they were probably better than they needed to be, since the vast majority of consumers – myself included – either couldn’t tell the difference between a well-encoded MP3 and a CD or just didn’t care.
In 1999 most (if not all) pro audio evangelists were busy decrying the lousy sound of MP3s (which only meant that the encoding was lousy or the bitrate was too low) and were terribly excited about the possibility of 24/96 DVD audio, a format which flopped worse than the 8-track. Meanwhile, the technically inferior MP3 was gaining widespread acceptance and the technically superior CD was becoming increasingly compressed and distorted thanks to the Loudness War.
And now, in a fantastically ironic twist, the problem has become “CDs don’t sound good enough” (due to the Loudness War) and the solution is a technically inferior product, the lowly MP3. In fact, to add to the twist, if DVD-A had caught on, the Loudness Wars would continue unabated.
At this point, you’re thinking, “He’s smoking crack. How can this be?”
The answer is less about the format than the player. The ubiquitous iPod, its various MP3 playing cousins, and their associated software like iTunes are the true answer to the problem.
The Format is (Not) the Answer
The downloadable format is going to end the Loudness War, but not because it is a magic format. Instead, it’s the answer because it has fundamentally changed the way we listen to music.
CD jukeboxes have been around for two decades. But even the best and biggest CD jukeboxes pale in comparison to a nicely-stocked iPod. The typical CD jukebox plays 1000 songs. The typical iPod plays over 10,000. The typical CD jukebox shuffles songs with a 5-second gap. The typical iPod shuffles songs with no gap. CD jukeboxes hold entire albums. iPod listeners often collect singles – meaning, more music from different and diverse sources. So listeners hear lots of very diverse music back-to-back. And the typical iPod user may spend hours creating playlists – the 2000’s answer to the 1980’s mix tape – which just don’t happen with a CD jukebox.
In the 1980s, the responsible mix tape creator would watch the VU meters on the tape deck and level the volumes of songs as he recorded the tape. The iPod / iTunes listener cannot do that when creating a playlist. Fortunately, they don’t have to. Because in 2001 – the same year the iPod was introduced – David Robinson introduced Replay Gain, the first volume leveling algorithm for MP3s. And now most MP3 library software, including iTunes, will automatically perform volume leveling, analyzing the audio and writing a normalization value into the MP3’s metadata – a technique that does not affect the fidelity of the audio itself. And now, most playback devices are designed to use that metadata and adjust playback volume accordingly.
What Replay Gain / volume leveling technology has done is render the Loudness War completely obsolete: when mastering engineers crank up the compressors and limiters, the audio doesn’t get louder at all! It just loses dynamics.
So, interestingly, the Loudness War ended in 2001, the year that Replay Gain and the iPod were introduced. These two technologies – massive MP3 jukeboxes combined with volume-leveling software – render the Loudness War null and void.
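The mechanism is easy to demonstrate in a few lines. The sketch below is a deliberately crude stand-in for Replay Gain – it uses plain RMS and a made-up -14 dBFS reference, where the real algorithm uses a perceptually weighted loudness measure calibrated to 89 dB SPL – but it shows the essential point: limiting a track raises its measured loudness, so the leveler simply turns it down harder, and after leveling the “hot” master is no louder at all.

```python
import math

TARGET_DBFS = -14.0  # hypothetical reference level; real Replay Gain uses a
                     # perceptual loudness measure calibrated to 89 dB SPL

def rms_dbfs(samples):
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def replay_gain_db(samples):
    """Gain the player should apply so the track plays at the reference level."""
    return TARGET_DBFS - rms_dbfs(samples)

def limit(samples, ceiling):
    """Crude brickwall limiter: boost hard into a clip to 'win' loudness."""
    return [max(-ceiling, min(ceiling, s * 4)) for s in samples]

dynamic = [0.25 * math.sin(2 * math.pi * i / 100) for i in range(1000)]
loud = limit(dynamic, 0.95)

print(rms_dbfs(dynamic), rms_dbfs(loud))              # the limited version measures far hotter...
print(replay_gain_db(dynamic), replay_gain_db(loud))  # ...so the player turns it down far more
```

Apply each track’s gain before playback and both land at exactly the reference loudness; the only remaining difference is that the limited version has lost its dynamics.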
Of course, practically all music going to mastering engineers is still being mastered for CD. And as long as the target media is CD, then the Loudness Wars will rage. But CDs are doomed. MP3s (and other downloadable formats) have clearly demonstrated that they are here to stay, blending a combination of low price, instant availability, utter portability, and very good (if imperfect) sound quality. The future will see bitrates increase and encoding improve – including the lossless formats now showing up – which is a good thing for music. But music as an individually-packaged, physical widget is a dinosaur. And the death of that dinosaur means the end of the Loudness War. For good.
At some point, hopefully in the near future, mastering engineers will stop considering the 16-bit, 44.1 kHz CD the “target” and will start mastering for downloadable formats. I propose that the mastering process of the future will include some sort of plug-in or device that mimics the sort of volume analysis performed by Replay Gain and adjusts the playback volume for the mastering engineer in real time. In other words, the mastering engineer sets the “master volume” on his console and the playback volume stays at that volume regardless of what the engineer does in the digital or analog signal path – even if it means cranking up the gain on his outboard LA-2A all the way to 11.
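To make the idea concrete – and this is a toy illustration of the monitoring concept, not a design for such a device – the monitor path would track the loudness of whatever comes off the processing chain and ride its own gain in the opposite direction. The class name, the -14 dBFS target, and the block-based windowing below are all invented for the sketch:

```python
import math

class LevelCompensatedMonitor:
    """Toy monitor-path gain rider: holds perceived playback level constant
    no matter how much gain or limiting the engineer adds upstream."""

    def __init__(self, target_dbfs=-14.0, window=4096):
        self.target = target_dbfs  # reference monitoring loudness (invented)
        self.window = window       # analysis window, in samples
        self.buf = []

    def process(self, block):
        # Keep a running analysis window over the most recent samples
        self.buf = (self.buf + list(block))[-self.window:]
        rms = math.sqrt(sum(s * s for s in self.buf) / len(self.buf))
        rms_db = 20 * math.log10(max(rms, 1e-9))
        # Compensating gain: whatever it takes to land on the target level
        gain = 10 ** ((self.target - rms_db) / 20)
        return [s * gain for s in block]

# Two very different input levels come out of the monitor at the same level,
# so "louder" processing buys the engineer nothing at the speakers.
hot = [0.5] * 4096
quiet = [0.1] * 4096
print(round(LevelCompensatedMonitor().process(hot)[0], 3))    # 0.2
print(round(LevelCompensatedMonitor().process(quiet)[0], 3))  # 0.2
```

A real implementation would need a perceptual loudness measure and sensible attack/release behavior, but even this toy makes the point: under compensated monitoring, a limiter can only change the sound, never the level.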
Mastering for a Volume-Normalized World
It’s beyond the scope of this article to propose a design for such a device. I’m sure someone out there is making it already. The key isn’t the magic device. The key is mastering for a volume-normalized world. When you, as a mastering engineer, realize what that limiter is going to do to your track when the playback software analyzes it and turns it down 11 dB, it will change the way you master. Forever.
But first we’re going to have to get rid of all those pesky CD players in car dashes, home-entertainment systems, and – most importantly – in the offices of record industry mooks.
It’s going to take time. First of all, users need to become convinced that volume normalizing is a good thing, and that isn’t going to happen until volume normalizing works better. Replay Gain is slow. It doesn’t always work correctly. Some volume levelers actually rewrite the MP3 file at a lower volume (instead of just tweaking the metadata) resulting in poorer fidelity. As a result some people don’t trust their software and won’t use this feature.
The thing is, when it works, it really makes your MP3 collection a lot more listenable. When you can switch from Bob Dylan to Crystal Method without having to touch your volume knob, you become a believer instantly. When it works, it works well, and it adds a lot of value to your collection. The bigger (and more varied) your collection, the more valuable this technology is. From personal experience, I have tens of thousands of songs on my computer, and I use Media Monkey (which I highly recommend) to manage my library. My entire collection has been volume normalized. I couldn’t live without it.
I was an MP3 early adopter. It was a miracle for me. I had a sprawling CD collection, and I’m bad at keeping up with physical widgets. But I am a whiz with a computer, and in 1999 I had five computers, so one week it just made sense for me to rip every CD I own onto what was at the time the largest hard disk I could buy. After a week-long CD-ripping frenzy I had my entire collection available at my fingertips, categorized, and easily transported. I have handled – and purchased – very few CDs since then. Instead I buy online through iTunes or (preferably) Amazon. What CDs I do buy come home, get ripped, and go on the shelf. Folks, I’m an audiophile. I love MP3.
Since I was an early adopter, I forget that most people still live in a CD+MP3 netherworld. I’ve had my car wired for my iPod since 2000. Although many people today have iPod-capable car radios, most don’t. Most people use their MP3 players as portable listening devices – like a Walkman with a really long tape – instead of megajukeboxes to contain all their music. But that is all rapidly changing and the endgame – the complete obsolescence of the CD – is inevitable and in sight.
And over time, everyone will level the volume of their MP3 downloads, because it works and makes the jukebox concept work better. And when making a record louder doesn’t actually make it louder, but just more compressed, people will master differently. Eventually, mastering engineers will start mastering for MP3 instead of (not “in addition to”) mastering for CD.
Perhaps a standard will emerge, and downloaded music will come with the volume normalization level already set in the metadata. Perhaps mastering engineers will be asked to set that metadata, much as they’re asked to write CD-Text today, because it will just make sense to produce final work output that’s ready for download.
When Will It End?
How long will this take?
I predict another decade.
Consider the transition from vinyl to CD as a case in point. The CD became commercially available in the early 1980s. But for about a decade, mastering practices remained virtually unchanged. You would “master” to tape, and then “transfer” to digital. It wasn’t until the mid-1990s that mastering practices that exploited the digital domain (specifically, heavy application of brickwall limiting) became prevalent.
I would venture to say that the transition from CD to MP3 will be somewhat more gradual than the transition from vinyl to CD was. For the average consumer, CD represented an obvious and substantial improvement over vinyl: smaller, better sound quality, more durable, more resistant to damage. Vinyl would last for years, but a CD would last a lifetime.
By comparison, MP3s are somewhat more of a mixed, incremental improvement over CD. Sound quality is diminished. Portability is greatly improved. Costs are lower, but still high (especially considering there is no manufacturing, warehousing, or shipping involved). MP3s are more durable than a CD in one aspect (they are “virtual” and can be copied or backed up) but also more fragile (drop your iPod in the lake without a backup, and you’ve lost everything). Whereas playing a CD was as simple as playing a record or a tape, MP3s are still complicated for non-technical users and require a computer and at least some computer skill. And a lot of people still like the idea of a physical product and prefer to buy their favorite music on disc.
Retailers had already bailed on vinyl by the mid 1990s – ten years after the CD had become widely available. MP3 as a music format has been available since the end of the 1990s, and ten years later, the CD – while clearly waning – is still quite viable. So I think it’s safe to conclude that while the CD is ultimately doomed as a mass-market music medium, the transition from CD to downloadable media will be a slower one than the transition from vinyl to digital disc.
So what does this mean for mastering?
Back in the 1980s, mastering changed from being a process of getting the signal onto vinyl and became a process of getting a signal onto CD. The difference in the target format drove a change in mastering practices which, over the course of a couple of decades, became the Loudness War we face today. It took about seven years for the new format to substantially replace the old and another seven or so years before the industry changed to fully exploit the new format’s capabilities and weaknesses, leading to the Loudness War.
Now a new target format – downloadable media combined with jukebox players – has gained acceptance. It hasn’t eliminated the CD format, but a tipping point has been reached, and the CD’s days are numbered. Mastering engineers are already marketing their services towards the new media. And over the coming decade, mastering engineers will adopt new practices and technologies to exploit the capabilities of this format.
And the Loudness War will end forever.
The question then will simply be – do we need to buy yet another copy of the White Album?