Making tracks louder

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

18 Nov 2015

selig wrote:If you could put a SECOND meter at the input or even in the insert, you would still see a signal there even with the channel fader all the way down or the channel muted (because the fader and mute are on the OUTPUT of the channel).
The idea of two faders – one showing signal in, the other showing signal out – would seem to make perfect sense. Why doesn't Reason feature this? Is it purely because the result would look too cluttered? And is there a rack extension that shows signal in + signal out simultaneously?

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

18 Nov 2015

Thousand Ways wrote:
selig wrote:If you could put a SECOND meter at the input or even in the insert, you would still see a signal there even with the channel fader all the way down or the channel muted (because the fader and mute are on the OUTPUT of the channel).
The idea of two faders – one showing signal in, the other showing signal out – would seem to make perfect sense. Why doesn't Reason feature this? Is it purely because the result would look too cluttered? And is there a rack extension that shows signal in + signal out simultaneously?
You mean two METERS, right? There already is an input gain knob and an output gain slider - both are technically faders (the term "fader" is often assumed to mean only a linear slider), since a fader is simply something used to fade the audio level. But both also have "gain" (another term often simplified to mean "level").

Shameless self promotion mode: ON!
This is one huge reason I created Selig Gain, and specifically its peak hold meter. You don't need one on every channel; you only need it to check the level coming into a channel. I prefer to set levels from the source where possible. For instruments this would be done at the master volume on the last device (often a line mixer) in a Combinator. For audio I set it when I record, or use clip gain on tracks coming from beyond my control.

Note: many of the FSB patches (especially the newer ones which I was a part of creating) use a standard of peaking at around -12 dBFS when played as intended: bass and lead patches played monophonically, pads etc played with 4-6 note chords.

Also, when recording audio tracks the meter in the sequencer shows green LEDs up to -12 dBFS, and yellow above that. The manual even suggests you keep your peaks around -12 dBFS, which is partly why I also suggest that (but I was doing this before Reason added audio FWIW).

All this to say that if you have addressed your levels at the source, there is little need for an input meter or any gain adjustment at that point. It is only when adding compression/EQ etc. that you may inadvertently increase the audio level and need to adjust accordingly to keep the output of those processes at your reference level. This also makes A/B comparisons (you ARE doing A/B comparisons, right?) much easier to perform, and keeps you from simply preferring the "louder" version (louder always sounds better).

In short, there are many reasons to adopt a reference level for your audio signals, including things I've not yet mentioned, such as microphone preamps and A/D converters operating closer to their nominal levels (sweet spots) when you're not trying to keep them as hot as possible!

Hope this is all making sense, and I also hope others with similar experience will chime in here and share their views and experiences, even if they are counter to mine. I'm not claiming to be the ultimate authority here, just sharing what I've learned, the insights I've gained from working with the Props, and some common knowledge and best practices as I see them. As always, there are other ways of working that may produce similar results!
:)
Selig Audio, LLC

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

19 Nov 2015

JerrelTheKing wrote:@selig I've loaded in reference tracks with some commercial releases from my genre hip hop
A little exercise for you: load some pure 808 sounds and compare to what you hear in Drop It Like It's Hot.

The sound you are after will determine what you need to do.

It helps if you've developed an ear for dynamics and perceiving levels/types of distortion (and also which frequencies are distorting). It's a bit like learning to detect the differences between synth sounds.

Distortion plays a massive role and can be introduced in a variety of ways, from compression and limiting to tube amplifiers and dedicated distortion effects.

Also, you want to think about what it is that's louder. Is it the percussive sounds? Is it the bass? Or all of the above? If the latter, you will hear ducking. So analyse what is going on specifically.

raymondh
Posts: 1777
Joined: 15 Jan 2015

19 Nov 2015

gak wrote:Now, onto "loudness" :mrgreen:

Here's a fact that people get way too technical about, and that is often misunderstood: the END result is more important than the "mix" result.

For example, if you are "mixing" to get the loudest possible track, you're doing it wrong. In the end, you should mix with a mindset of equal value and then apply "loudness" afterwards - Ozone, for example.

I'm not the leading expert, but I have enough knowledge to understand that getting the "mix" right and then applying "loudness" after the fact is by FAR the most desirable approach. It's what the "experts" do, and far be it from me to argue with them.

F... all with getting it loud to start. Do as the manual says: get the mix awesome without worrying about loudness, and then apply that later.
+1

If you normalise your final track, it will only normalise based on the peaks. If most of your track is balanced but, say, one of the FX is too loud or a few sounds playing at the same time sum to a high level, the rest of your track will never get the opportunity to be loud unless you clip (or compress/limit) the transient peaks. Sometimes it's hard to find out what the problem is; what I find useful is to export stems to audio and then look at them individually in Audacity or some other visualisation of the whole track. Very often that shows up a track that needs to be 'tamed', and if it doesn't, that might suggest multiple peaks are hitting at the same time.
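As a rough, made-up illustration of that peak-normalisation point (a Python/NumPy sketch, not anything Reason- or Audacity-specific): one stray transient near full scale caps how far the rest of the track can come up.

    import numpy as np

    # Body of a "track" sitting around -18 dBFS, plus one rogue FX hit at -1 dBFS.
    mix = np.full(48000, 10 ** (-18 / 20))
    mix[24000] = 10 ** (-1 / 20)

    gain = 1.0 / np.max(np.abs(mix))          # peak normalisation: loudest sample -> 0 dBFS
    normalised = mix * gain

    print(20 * np.log10(gain))                            # ~1 dB: all the gain that single peak allows
    print(20 * np.log10(np.median(np.abs(normalised))))   # the body only rises to about -17 dBFS
    # Tame that one transient first, and the whole track can come up much further.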

The other point to note is the perception of loudness. I am working on a track at the moment that I can't make loud without clipping (I haven't gotten to the mixing surgery on it yet), but when I turn it up to sound as loud as another track might be, it hurts my ears. So there's no question that, while it isn't a loud track, the sound energy/pressure is there. But that sound energy/pressure is not contributing to the listening experience in a good way, so I need to find out what to cut in order to let everything else lift. It might be frequencies, it might be the level of certain tracks, etc.

Then, as gak says, once I have it all balanced out in the mix, and only then, the mastering polish can begin.

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

19 Nov 2015

raymondh wrote:
gak wrote:Now, onto "loudness" :mrgreen:

Here's a fact that people get way too technical about, and that is often misunderstood: the END result is more important than the "mix" result.

For example, if you are "mixing" to get the loudest possible track, you're doing it wrong. In the end, you should mix with a mindset of equal value and then apply "loudness" afterwards - Ozone, for example.

I'm not the leading expert, but I have enough knowledge to understand that getting the "mix" right and then applying "loudness" after the fact is by FAR the most desirable approach. It's what the "experts" do, and far be it from me to argue with them.

F... all with getting it loud to start. Do as the manual says: get the mix awesome without worrying about loudness, and then apply that later.
+1

If you normalise your final track, it will only normalise based on the peaks. If most of your track is balanced but, say, one of the FX is too loud or a few sounds playing at the same time sum to a high level, the rest of your track will never get the opportunity to be loud unless you clip (or compress/limit) the transient peaks. Sometimes it's hard to find out what the problem is; what I find useful is to export stems to audio and then look at them individually in Audacity or some other visualisation of the whole track. Very often that shows up a track that needs to be 'tamed', and if it doesn't, that might suggest multiple peaks are hitting at the same time.

The other point to note is the perception of loudness. I am working on a track at the moment that I can't make loud without clipping (I haven't gotten to the mixing surgery on it yet), but when I turn it up to sound as loud as another track might be, it hurts my ears. So there's no question that, while it isn't a loud track, the sound energy/pressure is there. But that sound energy/pressure is not contributing to the listening experience in a good way, so I need to find out what to cut in order to let everything else lift. It might be frequencies, it might be the level of certain tracks, etc.

Then, as gak says, once I have it all balanced out in the mix, and only then, the mastering polish can begin.
If that works for you, then fine I guess. It's never worked for me - it always sounds crushed that way. By mixing with the end result in mind you get better results, IMO. Most great mixes need little mastering to make them "loud"; if yours needs a lot, you're mixing wrong. Mastering can make it "louder", but only if it's a good mix to begin with.

I've always believed that if mastering changes your mix THAT much, then you're mixing wrong.
:)
Selig Audio, LLC

raymondh
Posts: 1777
Joined: 15 Jan 2015

20 Nov 2015

selig wrote:
raymondh wrote:
gak wrote:Now, onto "loudness" :mrgreen:

Here's a fact that people get way too technical about, and that is often misunderstood: the END result is more important than the "mix" result.

For example, if you are "mixing" to get the loudest possible track, you're doing it wrong. In the end, you should mix with a mindset of equal value and then apply "loudness" afterwards - Ozone, for example.

I'm not the leading expert, but I have enough knowledge to understand that getting the "mix" right and then applying "loudness" after the fact is by FAR the most desirable approach. It's what the "experts" do, and far be it from me to argue with them.

F... all with getting it loud to start. Do as the manual says: get the mix awesome without worrying about loudness, and then apply that later.
+1

If you normalise your final track, it will only normalise based on the peaks. If most of your track is balanced but, say, one of the FX is too loud or a few sounds playing at the same time sum to a high level, the rest of your track will never get the opportunity to be loud unless you clip (or compress/limit) the transient peaks. Sometimes it's hard to find out what the problem is; what I find useful is to export stems to audio and then look at them individually in Audacity or some other visualisation of the whole track. Very often that shows up a track that needs to be 'tamed', and if it doesn't, that might suggest multiple peaks are hitting at the same time.

The other point to note is the perception of loudness. I am working on a track at the moment that I can't make loud without clipping (I haven't gotten to the mixing surgery on it yet), but when I turn it up to sound as loud as another track might be, it hurts my ears. So there's no question that, while it isn't a loud track, the sound energy/pressure is there. But that sound energy/pressure is not contributing to the listening experience in a good way, so I need to find out what to cut in order to let everything else lift. It might be frequencies, it might be the level of certain tracks, etc.

Then, as gak says, once I have it all balanced out in the mix, and only then, the mastering polish can begin.
If that works for you, then fine I guess. It's never worked for me - it always sounds crushed that way. By mixing with the end result in mind you get better results, IMO. Most great mixes need little mastering to make them "loud"; if yours needs a lot, you're mixing wrong. Mastering can make it "louder", but only if it's a good mix to begin with.

I've always believed that if mastering changes your mix THAT much, then you're mixing wrong.
:)
That's what I was trying to say actually - get the mix right.
Exporting the stems was just to diagnose the problem so you could go back into the mix and fix it at source.

Having said all that, I think the only reason I do that is because I *still* struggle with how to use metering effectively. You've put heaps of great information online in these threads, and you've made tools like Gain (which I also find excellent for level automation while keeping the track faders free), but I still find it a bit confusing how to use the meters effectively to make mixing decisions.

To your point on squashing: how to use a compressor effectively is still an enigma to me. I know how they work in theory, but if you were to ask me what compression or limiting I should use to tame a snare drum that sounds good but is screwing up the mix (and if you turn down the level, it disappears in the mix), I wouldn't have a clue.

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

20 Nov 2015

raymondh wrote:
selig wrote:
raymondh wrote:
gak wrote:Now, onto "loudness" :mrgreen:

Here's a fact that people get way too technical about, and that is often misunderstood: the END result is more important than the "mix" result.

For example, if you are "mixing" to get the loudest possible track, you're doing it wrong. In the end, you should mix with a mindset of equal value and then apply "loudness" afterwards - Ozone, for example.

I'm not the leading expert, but I have enough knowledge to understand that getting the "mix" right and then applying "loudness" after the fact is by FAR the most desirable approach. It's what the "experts" do, and far be it from me to argue with them.

F... all with getting it loud to start. Do as the manual says: get the mix awesome without worrying about loudness, and then apply that later.
+1

If you normalise your final track, it will only normalise based on the peaks. If most of your track is balanced but, say, one of the FX is too loud or a few sounds playing at the same time sum to a high level, the rest of your track will never get the opportunity to be loud unless you clip (or compress/limit) the transient peaks. Sometimes it's hard to find out what the problem is; what I find useful is to export stems to audio and then look at them individually in Audacity or some other visualisation of the whole track. Very often that shows up a track that needs to be 'tamed', and if it doesn't, that might suggest multiple peaks are hitting at the same time.

The other point to note is the perception of loudness. I am working on a track at the moment that I can't make loud without clipping (I haven't gotten to the mixing surgery on it yet), but when I turn it up to sound as loud as another track might be, it hurts my ears. So there's no question that, while it isn't a loud track, the sound energy/pressure is there. But that sound energy/pressure is not contributing to the listening experience in a good way, so I need to find out what to cut in order to let everything else lift. It might be frequencies, it might be the level of certain tracks, etc.

Then, as gak says, once I have it all balanced out in the mix, and only then, the mastering polish can begin.
If that works for you, then fine I guess. It's never worked for me - it always sounds crushed that way. By mixing with the end result in mind you get better results, IMO. Most great mixes need little mastering to make them "loud"; if yours needs a lot, you're mixing wrong. Mastering can make it "louder", but only if it's a good mix to begin with.

I've always believed that if mastering changes your mix THAT much, then you're mixing wrong.
:)
That's what I was trying to say actually - get the mix right.
Exporting the stems was just to diagnose the problem so you could go back into the mix and fix it at source.

Having said all that, I think the only reason I do that is because I *still* struggle with how to use metering effectively. You've put heaps of great information online in these threads, and you've made tools like Gain (which I also find excellent for level automation while keeping the track faders free), but I still find it a bit confusing how to use the meters effectively to make mixing decisions.

To your point on squashing: how to use a compressor effectively is still an enigma to me. I know how they work in theory, but if you were to ask me what compression or limiting I should use to tame a snare drum that sounds good but is screwing up the mix (and if you turn down the level, it disappears in the mix), I wouldn't have a clue.
First of all, I don't use meters to make mixing decisions (or maybe I've misunderstood you here). I use meters to make level decisions, then mix from there - but again, maybe I'm talking about something different?

Compression can accomplish different things. It can tame a sound or it can make it wild. It can reduce dynamic range, or it can increase it. It can clean up a sound, or make it dirty as hell. The variables are set by first selecting the specific device (each compressor can sound quite different), and second by choosing the settings. Compression could be its own thread - maybe a good idea?
:)
Selig Audio, LLC

raymondh
Posts: 1777
Joined: 15 Jan 2015

21 Nov 2015

selig wrote: Compression can accomplish different things. It can tame a sound or it can make it wild. It can reduce dynamic range, or it can increase it. It can clean up a sound, or make it dirty as hell. The variables are set by first selecting the specific device (each compressor can sound quite different), and second by choosing the settings. Compression could be its own thread - maybe a good idea?
:)
Thanks Giles. I hadn't appreciated that! Maybe I don't know as much as I thought about compression!

Great idea for a thread on compression - I'd value that. I'll start one and look forward to you weighing in!

cheers, Raymond

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

05 Mar 2017

Hello all
I know that this is an old thread now, but I wanted to ask what you think of this Propellerhead tutorial video –
https://www.youtube.com/watch?v=T-TpPLzRpsU
– in relation to the things discussed in the thread. The video was posted online in April 2016, and you might well have already seen it. But in relation to clipping and loudness, it talks about bit depth, which we hadn't really discussed in the thread. The video claims that many of the meter readings don't matter at all – he keeps repeating the rather unhelpful phrase "Don't sweat it" in relation to meters. Throughout this and other Propellerhead tutorial videos, the meters are seen pushing into the red almost constantly. This seems to go against what Selig and others have said in this thread. Any thoughts? Is the information in the video simply wrong?

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

06 Mar 2017

To clarify above a bit, I mean that the video seems to imply that the whole idea of leaving headroom is irrelevant.

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

06 Mar 2017

Thousand Ways wrote:To clarify above a bit, I mean that the video seems to imply that the whole idea of leaving headroom is irrelevant.
What part are you referring to? Maybe I can clarify (I was consulted by Ryan on this particular video, so I'm roughly familiar with it).
Selig Audio, LLC

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

06 Mar 2017

Well, at 1:02 we see the main mixer peaking above 0 dB.

The section from 3:24 to 4:01 then talks about bit depth, and I'm not sure whether this invalidates some of what was discussed earlier in this thread. If Reason has this "64 bits" capability that he talks about (around 4:11), then why aim to create any headroom? The video then goes on to say "there is no too loud inside Reason's mixer or Reason's rack". Baffled. Then he says that the meters on individual channels of the mixer "aren't even concerned with representing a clipping point". If this is the case, then isn't Propellerhead's use of red LEDs for the upper parts of the meters just misleading?

lowpryo
Posts: 452
Joined: 22 Jan 2015

07 Mar 2017

Thousand Ways wrote:Well, at 1:02 we see the main mixer peaking above 0 dB.

The section from 3:24 to 4:01 then talks about bit depth, and I'm not sure whether this invalidates some of what was discussed earlier in this thread. If Reason has this "64 bits" capability that he talks about (around 4:11), then why aim to create any headroom? The video then goes on to say "there is no too loud inside Reason's mixer or Reason's rack". Baffled. Then he says that the meters on individual channels of the mixer "aren't even concerned with representing a clipping point". If this is the case, then isn't Propellerhead's use of red LEDs for the upper parts of the meters just misleading?
The key word is "inside" - there's no "too loud" inside Reason. He's talking about how your signals will not be damaged if they go above 0 dBFS at any point in the signal chain, so you don't have to be afraid while you're working inside the software.

However, once your track leaves Reason (by exporting or playing it back), it will definitely clip at 0 dB. That's why you want to make sure the very last signal that leaves Reason doesn't clip, and why it's good practice to just stay away from that point entirely.
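A tiny sketch of that distinction (generic floating-point versus fixed-point behaviour, not Reason's internals): levels over 0 dBFS survive inside a float signal path and can simply be turned down later, but they get flattened the moment the audio is rendered to a fixed-point format.

    import numpy as np

    x = np.array([0.5, 1.4, -1.8], dtype=np.float32)    # 1.0 == 0 dBFS; two samples are "over"

    print(x * 0.5)                                       # [ 0.25  0.7  -0.9 ] -> turning it down recovers everything

    pcm16 = (np.clip(x, -1.0, 1.0) * 32767).astype(np.int16)
    print(pcm16)                                         # [ 16383  32767 -32767 ] -> overs are hard-clipped on export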

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

07 Mar 2017

Thousand Ways wrote:Well, at 1:02 we see the main mixer peaking above 0 dB.

The section from 3:24 to 4:01 then talks about bit depth, and I'm not sure whether this invalidates some of what was discussed earlier in this thread. If Reason has this "64 bits" capability that he talks about (around 4:11), then why aim to create any headroom? The video then goes on to say "there is no too loud inside Reason's mixer or Reason's rack". Baffled. Then he says that the meters on individual channels of the mixer "aren't even concerned with representing a clipping point". If this is the case, then isn't Propellerhead's use of red LEDs for the upper parts of the meters just misleading?
Yes, the use of red LEDs is extremely misleading IMO. I feel I've ranted on this and other related issues far too much already…

I believe his point was that you don't clip in Reason. You only clip when leaving Reason, if your levels are above 0 dBFS.

BUT, there are still many reasons to keep levels lower, mostly to do with faster workflow and repeatability rather than strictly for sonic reasons.


Sent from my iPad using Tapatalk
Selig Audio, LLC

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Mar 2017

Sorry to pile on with yet another question for the always-accommodating selig, but: I've noticed you bring up the issue of nominal levels in nonlinear devices a few times, which does seem to be a bit in tension with the idea of level management not being so much "for sonic reasons". So my question is, realistically speaking, with the built-in Reason devices as well as the major Rack Extensions, would there ever really be much clearly audible difference in results due to hitting a typical compressor or distortion unit at -26 dBFS or +15 dBFS or whatever? I realize it's not advisable for a whole host of reasons, including ensuring that presets will translate perfectly and so on. I'm just wondering how much difference it might really make, if one were to be totally careless and allow signals 10 or 20 dB above or below nominal to run around the rack willy-nilly. And, for that matter, how does one even determine what a given device's nominal level is? Are they all uniformly -12 dBFS, without exception, including third-party REs? If not, is there a way to experimentally determine a device's nominal level? Sorry about the endless litany of questions here, but this seems important and I can't find where it's been fully addressed before.

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

09 Mar 2017

househoppin09 wrote:Sorry to pile on with yet another question for the always-accommodating selig, but: I've noticed you bring up the issue of nominal levels in nonlinear devices a few times, which does seem to be a bit in tension with the idea of level management not being so much "for sonic reasons". So my question is, realistically speaking, with the built-in Reason devices as well as the major Rack Extensions, would there ever really be much clearly audible difference in results due to hitting a typical compressor or distortion unit at -26 dBFS or +15 dBFS or whatever? I realize it's not advisable for a whole host of reasons, including ensuring that presets will translate perfectly and so on. I'm just wondering how much difference it might really make, if one were to be totally careless and allow signals 10 or 20 dB above or below nominal to run around the rack willy-nilly. And, for that matter, how does one even determine what a given device's nominal level is? Are they all uniformly -12 dBFS, without exception, including third-party REs? If not, is there a way to experimentally determine a device's nominal level? Sorry about the endless litany of questions here, but this seems important and I can't find where it's been fully addressed before.
It's not at all about any sonic issues/differences; it's about getting the desired amount of compression/saturation/etc.

Compressors make the concept of nominal level easy to explain. Let's say you want to compress a kick fairly heavily. Using the MClass compressor as an example (you can try this on your system to see what I mean), let's go a little extreme and say that the kick's peak level is hitting -40 dBFS. Looking at the Threshold, we see that the lowest setting is -36 dB, which means there is no Threshold setting that will give you any compression at all - and that's with the highest ratio and fastest attack. The MClass has an Input Gain control, so we can crank that up, giving us about 12 dB of gain. This brings the kick up to -28 dBFS, and now we have compression.

Now let's say we want a slower attack and a lower ratio, so I set them to their defaults, and I'm now seeing only the first LED light up for gain reduction (about 1-2 dB of GR). At this point, if we want MORE compression, we're going to have to increase the level of the signal coming into the compressor! This is because, for the chosen settings, our input level is below the nominal level this compressor expects to see in order to give us the desired results. In this case we would need to increase the signal level coming into the compressor by at least the amount of gain reduction we want to see (I often adjust compression by ear and by the amount of gain reduction I'm looking for). But since our ratio is not Inf:1 (max) and our attack is not the fastest, we will need even MORE level than the amount of gain reduction - in our example, increasing the input by 10 dB only gives us around 4-6 dB of gain reduction total.
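To put rough numbers on that, here is a simplified static-gain sketch in Python (not the MClass's actual detector, which also depends on attack and release), using the figures from this example:

    def gain_reduction_db(peak_dbfs, threshold_db, ratio):
        """Steady-state gain reduction; a real compressor applies less than this
        on short transients because of its attack time."""
        overshoot = peak_dbfs - threshold_db
        if overshoot <= 0:
            return 0.0                                   # below threshold: no compression possible
        return overshoot * (1.0 - 1.0 / ratio)

    print(gain_reduction_db(-40.0, -36.0, ratio=100.0))  # 0.0  -> the -40 dBFS kick can't be compressed at all
    print(gain_reduction_db(-28.0, -36.0, ratio=100.0))  # ~7.9 -> +12 dB of input gain brings it into range
    print(gain_reduction_db(-28.0, -36.0, ratio=2.0))    # 4.0  -> a gentler ratio needs even more input level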

This may still not be enough gain reduction, so the input may need to be increased even further. In this case, it would have been quicker to have the kick coming into the compressor at a hotter level to begin with, which would save you time and allow you to quickly dial up the sound you want and then get on to other issues. IMO, the more time you spend fixing issues during the creative process, the more chance (at least with me) that you get side-tracked. At the very least, you'll be spending your time on things that COULD have been avoided instead of using it to make your mix as great as possible.

Now let's look at the other angle: levels that are too hot. To hit that same 4-6 dB of gain reduction with a level of +30 dBFS coming into the compressor, you will need to lower the Input Gain all the way and raise the Threshold all the way up - not to mention the fact that your signal will now clip the outputs if you don't lower it by 30 dB or more at some later point! In this case, the input is so hot that you can hardly avoid getting too much gain reduction, especially with a faster attack or higher ratio.

So in one case, if the level is too low you can't get enough gain reduction, and in the other if it's too high you can't help but have too much gain reduction. In BOTH cases you end up doing more work "fixing" things so you can get the desired results than if you had started with a signal level within the working range (nominal level) of the device in question.

With audio levels, you have multiple signals coming into the mixer which are added together, and the sum must not exceed a certain level (0 dBFS, the clipping point), so the individual signals must be kept low enough to prevent this. You also have a mix level which must not clip but can obviously be "hotter" than any one individual level. This means that on an individual channel, your level will naturally be lower than on the mix bus. And this suggests that when using a device such as a compressor on an individual channel, it will expect to see a lower level than when using that same device on an entire mix. Devices must therefore be able to address both situations and still allow you to achieve the desired results. IMO, channels should peak around -12 dBFS for a typical project (projects with fewer channels, such as a guitar/vocal, can be hotter; projects with many channels may need an even lower level), and the mix should peak around -3 dB. Each device must be able to work well within these ranges, and even beyond them to accommodate any special cases.

So while there is no sonic reason to keep levels around this range, IMO there IS an argument to be made for adopting a common reference level for all audio signals coming into the mixer. Doing so not only assures you're working in the nominal range of most devices, it also leaves headroom for mixing etc. The primary advantage is workflow, not sonic integrity - you can spend less time worrying about clipping and adjusting levels to hit compressors/distortion devices etc., and more time "mixing".

It's also important that any processing you add does not adversely affect these levels - or what's the point? If you add level when you EQ etc., you need to compensate so as to keep the same level. This greatly helps when doing A/B comparisons, so that you are SURE the effect you're adding is actually improving the sound and not just making it louder!
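One simple way to keep those comparisons honest (a sketch that assumes the dry and processed audio are available as NumPy arrays) is to scale the processed version back to the dry version's RMS before A/B'ing:

    import numpy as np

    def match_level(dry, wet):
        """Scale 'wet' so its RMS equals the dry signal's RMS, for a fair A/B."""
        rms = lambda sig: np.sqrt(np.mean(np.asarray(sig, dtype=float) ** 2))
        return wet * (rms(dry) / rms(wet))

    dry = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
    wet = dry * 1.8                                 # stand-in for an EQ move that also added ~5 dB of level
    matched = match_level(dry, wet)
    print(20 * np.log10(np.max(np.abs(wet))))       # ~+5.1 dB "louder" - easy to mistake for "better"
    print(20 * np.log10(np.max(np.abs(matched))))   # ~0 dB - back at the dry level, so only the tone differs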
:)
Selig Audio, LLC

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

09 Mar 2017

lowpryo, Selig, househoppin09, many thanks as ever for the useful responses.

househoppin09
Posts: 536
Joined: 03 Aug 2016

09 Mar 2017

Thanks again, selig, for that excellent walkthrough of the issue! What you say makes perfect sense of course, and I think I was overcomplicating the issue of "nominal level" and reading too much into it. From reading various previous comments, I had somehow gotten the idea that each nonlinear device had a single, specific "nominal level", and that the further an input signal might deviate from that level, the more problematically "altered" the device's behavior might be in some way that wasn't entirely clear to me. From what you say, it sounds like that was wrong--"nominal level" in this case is really more of a matter of keeping signals within the fairly broad range in which the settings on the device, which by definition can never be level-agnostic for a nonlinear device, will "make sense" and be applicable to the level of signal that device is seeing. In other words, as long as the compressor/distortion unit/etc. is giving the desired results, that in and of itself proves that you're as close to nominal level as you need to be in that particular case. Is that more or less correct? If so, I now understand why it's not so much a sonic performance issue, but simply a "getting the device to actually do what you want it to do" issue--which is what you're saying, right? :)

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

09 Mar 2017

househoppin09 wrote:Thanks again, selig, for that excellent walkthrough of the issue! What you say makes perfect sense of course, and I think I was overcomplicating the issue of "nominal level" and reading too much into it. From reading various previous comments, I had somehow gotten the idea that each nonlinear device had a single, specific "nominal level", and that the further an input signal might deviate from that level, the more problematically "altered" the device's behavior might be in some way that wasn't entirely clear to me. From what you say, it sounds like that was wrong--"nominal level" in this case is really more of a matter of keeping signals within the fairly broad range in which the settings on the device, which by definition can never be level-agnostic for a nonlinear device, will "make sense" and be applicable to the level of signal that device is seeing. In other words, as long as the compressor/distortion unit/etc. is giving the desired results, that in and of itself proves that you're as close to nominal level as you need to be in that particular case. Is that more or less correct? If so, I now understand why it's not so much a sonic performance issue, but simply a "getting the device to actually do what you want it to do" issue--which is what you're saying, right? :)
Yes, you got it!


Sent from my iPad using Tapatalk
Selig Audio, LLC

MitchClark89
Posts: 110
Joined: 15 Jul 2016

19 Mar 2017

Hi everyone! Just bringing this very good thread back to life to ask whether there is any point/benefit in attaching a gain meter (in this case, Selig Gain) to every channel of a Kong that has its own audio track, i.e. a gain for the kick track, a gain for the snare, a gain for the hi-hats, and so on? I have been doing this, but it does take a lot of work to get all the samples peaking at -12 dB before going into the bus with compressor and EQ.

thanks!

MC

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

04 Aug 2023

I hope I won't annoy people by adding a query to this old thread, but here goes:

I'm working in Reason 12 on a track that has 30 channels. Each channel is individually gain staged to -12dB using Peak mode on Reason's Big Meter, and with its individual volume set after that. My understanding had been that, when the channels are playing together, the result would be a signal that would peak substantially higher than -12dB for much of the time. Instead, what I've found is that the track still – with all channels unmuted – consistently peaks at around -12dB, with the very occasional leap to around 0dB when there's a loud hit. Admittedly many of the channels sound only in certain parts of the track, and some channels are just percussion sounds and so on, but I'm still struggling to understand how the whole thing remains at only -12dB for much of the time when it's played back. (I should mention that this is with the Master Section bypassed, and with the Master Compressor switched off.)

Is the scenario I've described here problematic, ie. is it okay for a track to be peaking at only around -12dB for much of the time before enabling the Master Compressor and mastering combinator in the Master Section, or should the track be a lot louder than this before it reaches those devices?

Any thoughts appreciated.

selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

10 Aug 2023

Thousand Ways wrote:
04 Aug 2023
I hope I won't annoy people by adding a query to this old thread, but here goes:

I'm working in Reason 12 on a track that has 30 channels. Each channel is individually gain staged to -12dB using Peak mode on Reason's Big Meter, and with its individual volume set after that. My understanding had been that, when the channels are playing together, the result would be a signal that would peak substantially higher than -12dB for much of the time. Instead, what I've found is that the track still – with all channels unmuted – consistently peaks at around -12dB, with the very occasional leap to around 0dB when there's a loud hit. Admittedly many of the channels sound only in certain parts of the track, and some channels are just percussion sounds and so on, but I'm still struggling to understand how the whole thing remains at only -12dB for much of the time when it's played back. (I should mention that this is with the Master Section bypassed, and with the Master Compressor switched off.)

Is the scenario I've described here problematic, ie. is it okay for a track to be peaking at only around -12dB for much of the time before enabling the Master Compressor and mastering combinator in the Master Section, or should the track be a lot louder than this before it reaches those devices?

Any thoughts appreciated.
First thought - if the levels jump to 0 dB even once, I either lower the overall level or try to address the track causing the peak.

And if it jumps to just under 0 dBFS but doesn't clip, you're damn near as perfect a mix level as possible, although I shoot for highest peaks around 3 dBFS give or take a dB because every playback isn't exactly the same (reverbs, chorus, etc can be totally random).

One reason a bunch of tracks don't add much level is that they don't actually play at the exact same time, whether by a few milliseconds or because some parts play downbeats while others play upbeats, etc. Another reason is that all faders won't sit at unity (0 dB) for a typical mix; a few key elements (kick/bass/vocal for sure) will stay there, but the rest of the mix will likely be balanced downwards from there.

Finally, even with the exact same signal you only get a MAXIMUM 6 dB increase for each doubling of tracks. That's +6 dB going from 1 track to 2, or from 32 to 64 tracks in a mix! So the more tracks you add the less each individual track affects the total. BUT, tracks are NEVER exactly the same, and what DOES happen is that the tracks in a typical mix average out to adding around 3 dB for each doubling of tracks - and that's assuming all tracks at full level, all tracks playing all the time, and all hitting on the same beat precisely (this is where the pan law of -3 dB comes from, btw). Then add in the number of faders being lowered or even muted, and the number of channels that actually play at the same time, and you lower that number even further.
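A quick numeric check of those two figures (a Python/NumPy sketch with a sine and noise standing in for real tracks - nothing Reason-specific, and the levels are only there for the arithmetic):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 48000 * 10

    peak_db = lambda x: 20 * np.log10(np.max(np.abs(x)))
    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    tone = 0.25 * np.sin(2 * np.pi * 440 * np.arange(n) / 48000)   # one "track" peaking at -12 dBFS
    print(peak_db(tone), peak_db(tone + tone))     # -12.0, -6.0 -> identical signals: +6 dB per doubling

    noises = [rng.standard_normal(n) for _ in range(8)]
    print(rms_db(noises[0]), rms_db(noises[0] + noises[1]), rms_db(sum(noises)))
    # ~0, ~+3, ~+9 dB -> uncorrelated material averages about +3 dB per doubling (1 -> 2 -> 8 tracks)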

In fact, most everything you do in a typical mix will tend to decrease the average total peak level vs track count: lowering faders, muting tracks, tracks playing against each other, automation, even panning can all push the level down from the theoretical levels that "should" be happening when summing multiple channels of audio information. But the minute you start boosting a track above -12 with EQ or compression, all bets are off (and it makes A/B comparisons tricky as well). So try to keep those levels consistent up to the fader. :)
Selig Audio, LLC

Thousand Ways
Posts: 252
Joined: 18 Jun 2015

10 Aug 2023

Selig, many thanks for this.
selig wrote:
10 Aug 2023
First thought - if the levels jump to 0 dB even once, I either lower the overall level or try to address the track causing the peak.
With the last track I made, I did the latter: find what's causing the peak, see if those sounds can be reduced in volume or slightly compressed. Seemed to work, although I wasn't sure whether it was the "proper" method.
selig wrote:
10 Aug 2023
I shoot for highest peaks around 3 dBFS give or take a dB because every playback isn't exactly the same
Sorry, do you mean -3dBFS, or 3dBFS? Isn't the latter over the maximum peak level that you're aiming for?
selig wrote:
10 Aug 2023
So the more tracks you add the less each individual track affects the total.
I don't at all doubt what you're saying, but this is mathematically mindbending. I'm thinking of your analogy of pouring liquids into a cup, and that at some point the volume of those combined liquids adds up to something that spills over the edge. But then, as you say, each of those portions of liquid is present only some of the time. :lightbulb:
selig wrote:
10 Aug 2023
But the minute you start boosting a track above -12 with EQ or compression, all bets are off (and it makes A/B comparisons tricky as well). So try to keep those levels consistent up to the fader. :)
:thumbs_up:

tomusurp
Posts: 296
Joined: 30 Jan 2022
Location: USA

10 Aug 2023

Thousand Ways wrote:
04 Aug 2023
I hope I won't annoy people by adding a query to this old thread, but here goes:

I'm working in Reason 12 on a track that has 30 channels. Each channel is individually gain staged to -12dB using Peak mode on Reason's Big Meter, and with its individual volume set after that. My understanding had been that, when the channels are playing together, the result would be a signal that would peak substantially higher than -12dB for much of the time. Instead, what I've found is that the track still – with all channels unmuted – consistently peaks at around -12dB, with the very occasional leap to around 0dB when there's a loud hit. Admittedly many of the channels sound only in certain parts of the track, and some channels are just percussion sounds and so on, but I'm still struggling to understand how the whole thing remains at only -12dB for much of the time when it's played back. (I should mention that this is with the Master Section bypassed, and with the Master Compressor switched off.)

Is the scenario I've described here problematic, ie. is it okay for a track to be peaking at only around -12dB for much of the time before enabling the Master Compressor and mastering combinator in the Master Section, or should the track be a lot louder than this before it reaches those devices?

Any thoughts appreciated.
So you are first adjusting the gain knob on each track to -12 dB on the SSL mixer, correct? Question is: why? I make and master my own music, but I only use the gain knobs on certain tracks that actually need gain; otherwise we'll be using the volume faders anyway. Also, before I send the mix bus into my master bus, I personally only like the mix bus to peak at around -4 to -3 dB in peak mode. No more, no less, usually. This is strictly because I'm leaving some room for saturation, which is one of the main elements in loudness mastering.
"The hottest in the matrix"
My music:
https://www.youtube.com/@usurptom
https://www.usurptom.com



robussc
Posts: 493
Joined: 03 May 2022

10 Aug 2023

tomusurp wrote:
10 Aug 2023
So you are first adjusting the gain knob on each track to -12 dB on the SSL mixer, correct? Question is: why? I make and master my own music, but I only use the gain knobs on certain tracks that actually need gain; otherwise we'll be using the volume faders anyway. Also, before I send the mix bus into my master bus, I personally only like the mix bus to peak at around -4 to -3 dB in peak mode. No more, no less, usually. This is strictly because I'm leaving some room for saturation, which is one of the main elements in loudness mastering.
The general goal is to get all the tracks peaking at roughly the same level before any mixing begins. This can be done a number of ways, depending on what you're working with: adjusting the instrument volume, tweaking the input trim, or adjusting the clip level.

The main faders are there to balance the mix, not to set the base level. This also ensures that the signals hitting the insert effects, EQ, etc. are at their optimum levels.
Software: Reason 12 + Objekt, Vintage Vault 4, V-Collection 9 + Pigments, Vintage Verb + Supermassive
Hardware: M1 Mac mini + dual monitors, Launchkey 61, Scarlett 18i20, Rokit 6 monitors, AT4040 mic, DT-990 Pro phones
