Mastering in the present

It probably goes without saying how special the science of mastering seems to the uninitiated. In this article I can't even begin to explain how complex, and yet how simple, the process sometimes actually is. The reason for this article is that I have recently had the opportunity to exchange ideas with many "experts" who have spent the last decades in a cocoon: they have acquired extraordinary knowledge of mastering, but have stopped following modern approaches and have been left behind. What follows contains both highly technical and historical material, so it is meant for open-minded readers; even if they don't agree with me on everything, I hope they won't dismiss everything I write here.

If you look at the history of mastering, what was considered the mastering process in the 1950s is almost irrelevant in today's world. The main problem back then was how to transfer the signal to vinyl in a way that still sounded acceptable. Perhaps the most important turning point came when the drive for a purely technical transfer started to pull creative decisions into the audio material itself. EQ, compression, Mid-Side processing, Dolby: all became important tools in the mastering engineer's craft. With the advent of digital technology, the mastering process changed, expanded and transformed again, just as it had after those early days.

And this change is still happening today. Unfortunately, there are those who, lagging behind the times, look askance at today's mastering solutions and cannot accept the changes each new era brings. Obviously some of the criticisms are valid, but even those criticisms have a flip side. Let me give you an example.

Since the mid-nineties, we have heard a drastic increase in loudness in most music. The older teaching was that the material should be kept dynamic, and that squeezing extreme loudness out of it is pointless because it destroys the dynamics. At some point, however, a few keen mastering engineers discovered that if you increase the loudness, the track overpowers everything else on the radio, in the car, in the club, and the listener perceives it as the authoritative one. At first engineers handled this delicately, but after a while, because everyone wanted to be louder, the industry was caught in a vicious circle. Loudness maximisation reached a limit beyond which only extra tricks could achieve good results.

Purely digital mastering technology was not yet ready for this problem: digital software was simply incapable of increasing loudness without audibly destroying the music, so at that time expensive analogue compressors clearly outperformed it. Perhaps the first software to make a real breakthrough in digital loudness maximisation was the Waves L2 maximizer, but even that was no match for the really expensive compressors and limiters.

You can look at this from two angles. Why couldn't things stay the same? With all this loudness madness, we have only added to the problems. The other view is that it set off an unstoppable process of progress: the premise of progress is that there must be a problem which, once solved, takes us to the next stage. That problem was to maximise loudness in a digital environment competitively with analogue systems. Perhaps the reason many people are squeamish about digital solutions is that software is significantly cheaper than expensive dedicated hardware. In light of this, people who did not yet fully understand the technical processes began to experiment with software.
To this day you can hear over-maximised music that was ruined in the name of loudness. Let's not be hypocritical: those who master actively have almost certainly fallen into this trap themselves at some point.
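To make the maximisation problem concrete, here is a deliberately naive sketch in plain Python. The function names and numbers are my own illustration, not the algorithm of the L2 or of any analogue unit: just gain followed by a hard ceiling. The hard clipping at the end is exactly the kind of audible destruction the early digital tools suffered from; real limiters replace it with lookahead and smooth gain reduction.

```python
import math

def db_to_linear(db):
    # Convert decibels to a linear amplitude factor (20 dB = 10x amplitude).
    return 10 ** (db / 20)

def maximize(samples, gain_db, ceiling=1.0):
    """Naive loudness maximizer: apply gain, then hard-clip at the ceiling.

    Hard clipping like this distorts audibly; real limiters use lookahead
    and smoothed gain reduction instead of flattening peaks outright.
    """
    g = db_to_linear(gain_db)
    return [max(-ceiling, min(ceiling, s * g)) for s in samples]

# A quiet sine wave pushed 12 dB hotter: peaks that would exceed the
# ceiling are simply flattened against it.
quiet = [0.3 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(100)]
loud = maximize(quiet, 12.0)
print(max(abs(s) for s in loud))  # peaks are pinned at the 1.0 ceiling
```

The point of the toy is the trade-off itself: the louder you push the gain, the more of the waveform ends up flattened against the ceiling, which is the distortion everyone was fighting.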

In the case of pop music, mastering engineers have constantly had to balance between increasing loudness and keeping the sound sufficiently dynamic. You might say that this is a bad thing, and I have to agree, but there are two sides to every coin. If this phenomenon did not exist, software designers would not set out to find more convincing solutions. If we have bicycles, why not motorbikes? And this brings us to the world of iZotope Ozone.

The fifth version of iZotope Ozone brought yet another change, one that has determined the direction of the road for many years since. More than anything, it was software that showed the future even then, though no one knew at the time that this would become the direction. Ozone's IRC III technology was perhaps one of the best solutions of its day for maximising loudness without compromising the dynamic range so much that it could be clearly heard.

The loudness war also brought a host of other innovations, each with its advantages and disadvantages. Staying with Ozone, splitting loudness enhancement into multiband sections is another such technology. By increasing the sense of spaciousness we increase the sense of loudness, but applied without thinking it creates phase problems. I don't consider this the software's fault: it is the user who does something wrong, and the user who has to decide how far they can go. A conventional expert would say this is bullshit and completely pointless. Yes, that was true in the 1980s and even in the 2000s, but today that argument no longer holds water. Whether we like it or not, mastering processes and solutions now change according to what users want, not what a mastering professional would want in an ideal situation. Of course, one could distance oneself from this process and reject it, if all clients consistently and expertly mixed their music, but in my experience that is very rare.
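The multiband idea can be sketched in a few lines of plain Python. This is a hypothetical two-band toy of my own, not Ozone's crossover design: the signal is split at a crossover, each band gets its own gain, and the bands are summed back. It also shows where the phase caveat comes from.

```python
def one_pole_lowpass(samples, alpha):
    # One-pole smoother: smaller alpha keeps only the lower frequencies.
    out, state = [], 0.0
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

def two_band_process(samples, alpha, low_gain, high_gain):
    """Split the signal at a crossover, gain each band separately, sum back.

    The high band is derived as (input - low band), so with unity gains the
    split reconstructs the input exactly.  As soon as the band gains differ,
    the filter's phase shift around the crossover is no longer cancelled --
    the phase problem mentioned above.
    """
    low = one_pole_lowpass(samples, alpha)
    high = [s - l for s, l in zip(samples, low)]
    return [low_gain * l + high_gain * h for l, h in zip(low, high)]

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
print(two_band_process(signal, 0.3, 1.0, 1.0))  # unity gains: input comes back
```

Real multiband processors use far better crossovers (Linkwitz-Riley pairs, or linear-phase FIR splits), but the structure, and the reason careless band gains smear the crossover region, is the same.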

The current trend is to use any available technique to master material, as long as it benefits the mix. The classical view, however, is that mastering only adds a tiny flavour to the final material. I don't want to argue with anyone about which is better, as I personally believe both approaches are good, given the circumstances.

In a way I understand the complainers, since a two-hundred-dollar piece of software is far more affordable than a ten-thousand-dollar Fairchild compressor. That's why anyone and everyone can use it, which means a lot of misguided material reaches the market, and the exclusivity of the profession is beginning to be lost. But the software gives you just as much opportunity to get it right as to get it wrong. That is not the fault of the developers.

As it stands, digital mastering has not yet reached the full capabilities of analogue hardware. If we look at compression alone, it has reached the quality of analogue technology, but it cannot yet faithfully reproduce the flavour an analogue device can give to sound. There are attempts, see UAD, but even these are not perfect. Some digital developments may look like parochial blindness, but they are by no means without purpose. I need only think of Ozone's Tonal Balance feature, or the Auto EQ for approximating linearity in my own software. To a trained professional these two functions may seem superfluous; to an untrained one they are a great help. The truth is that even a trained professional can benefit in some cases: if, for example, you can hear that the EQ is not making the material worse, but you need to approximate the sound of a reference track or a reference standard, why not use it to speed up your work? So it's not really a question of what to use, but of when and how to use it.

Recently it has become fashionable to use linear-phase EQ. For those working with traditional methods it is redundant, and analogue equipment is incapable of it anyway. Those in the digital world, meanwhile, mistakenly think it can solve all phase problems. In reality, I can only repeat: it can be both advantageous and disadvantageous.
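To make the linear-phase trade-off concrete, here is a minimal sketch in plain Python (the three-tap kernel is an arbitrary illustrative example, not any product's filter). A symmetric FIR kernel delays every frequency by the same amount, which is exactly what "linear phase" means, and that fixed delay, along with pre-ringing, is the cost side of the bargain.

```python
def fir_filter(samples, kernel):
    """Direct-form FIR convolution, y[i] = sum_k kernel[k] * x[i - k]."""
    n = len(kernel)
    padded = [0.0] * (n - 1) + list(samples)
    return [sum(kernel[k] * padded[i + n - 1 - k] for k in range(n))
            for i in range(len(samples))]

# A symmetric kernel is what makes an FIR EQ "linear phase": every frequency
# is delayed by the same (len(kernel) - 1) / 2 samples, so the waveform
# shape is preserved -- at the cost of that fixed latency.
kernel = [0.25, 0.5, 0.25]          # symmetric -> linear phase
impulse = [1.0] + [0.0] * 9
response = fir_filter(impulse, kernel)
print(response.index(max(response)))  # -> peak delayed by (3 - 1) / 2 = 1 sample
```

A mastering-grade linear-phase EQ uses thousands of taps, so the same symmetry translates into tens of milliseconds of latency and audible pre-ringing on sharp transients; a minimum-phase EQ avoids both, but shifts each frequency by a different amount. Neither is universally right, which is the point of the paragraph above.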

Digital technology is currently on the road to automation. There are already tools that produce a master in only a few clicks. Before anyone stones me: obviously humans cannot yet be left out of the process, but look at Ozone's analyser functions, or the automated service from LANDR. They are not perfect, but they work, and they will only get better. Should I cry about that? I don't think so.

You can see that the working style of those in the mastering field is constantly changing, if you follow the call of the modern age. In the digital field it is clearly the market that decides whether a piece of software deserves to exist, so there is a constant flow of software offering something new or a different approach. I see nothing wrong with experimentation, and that includes solutions that were not in a mastering professional's toolbox before, such as transient enhancement. Whoever makes something genuinely useful gets the chance to develop it further; whoever makes something replaceable will go in a different direction.

As for my own software, I have received a lot of criticism asking why I make it, and why I present it as something that can do everything when it cannot. The question itself is wrong: I don't claim you can do everything with it. I am saying that in a given situation this or that software may be more suitable than another. Besides, although Hungary used to have music software companies, they have all died out, leaving only individual developers who do it out of passion. That is why I develop, and also to learn, and perhaps in a lucky moment I'll come up with something different. If it doesn't work out, that's fine too.

Others complain that the software's features are far removed from traditional mastering features. Normally a mastering EQ only needs a few minimal moves, so why provide 20 dB of range there? I have two answers. One is that I allow the user to use the software not only for mastering but also for mixing, if they want. The other explanation is much more complex than the first.

In a normal case, only minimal changes really are needed during mastering. Unfortunately, the Hungarian experience shows something quite different. The Hungarian scene is typically split into two layers. The most outstanding composers work with the most outstanding mixing specialists, and mastering their material is a walk in the park, or at worst a moderate hike. A mastering professional used to that kind of mixing can declare with full authority that a mix which already needs trickery is unfit for mastering. The Hungarian reality, however, is this: a band painstakingly saves its money to get into a studio it can afford. In the lucky case the mix will be usable; in the unlucky case there is no money left to remix it in another studio, and the mastering specialist has to attempt the impossible.

I think many people know very well how difficult the situation of Hungarian musicians is. Even if a poor bass player dreams of a five-string Warwick, he may play a $200 four-string for years because there is no money, not for a proper mix and not for a proper instrument. I am not going to tell a band like that, 'too bad, you should have gone to a better studio'. When I raise an EQ band by 8 dB or so, my main thought will not be that this is bad practice. If trickery makes the material better than it originally was, then trickery is warranted; whatever improves the overall listenability of the song should be applied. Of course, there are situations where nothing really helps. But in a specific situation, digital technology can be a huge help, if you know what to use and when, and understand the process with its negative and positive consequences.