Gain staging and Reason: do you do it?


Poll: Gain staging and Reason: do you do it?
Yes - I do gain staging in Reason: 60 votes (79%)
No - I do not gain stage in Reason: 16 votes (21%)
Total votes: 76
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

16 Sep 2018

S1GNL wrote:
16 Sep 2018
What I'm talking about is convenience...
I do use faders and listen to the change without looking at numbers. But at a later stage I just boost/cut 0.3-0.6 dB, because that's the smallest range which might make a difference. Besides that, I don't always trust my ears, so moving the fader might create an "illusion". So, I do this: I change the volume +/- 0.3 dB without listening to the mix. Then I play the loop (or the whole song). If it doesn't sit well enough, I stop the playback, adjust the fader and play the section again. That's what works best for me.
I understand, and agree that's a good way to work when you know the mix isn't finished but you're not 100% sure what needs to be done.

My point is more about the comment suggesting that the lower the fader goes, the harder it is to make small changes. While technically true, it's important to note that the issue you describe only happens at the very bottom of the fader, which means it's not something that needs to be addressed by gain staging unless you're starting with a signal peaking below -60 dBFS. That would be unlikely, if only because in most studios it would be difficult to even hear such a low signal at typical monitoring levels!

I made the post here because this issue keeps coming up again and again, where people will often suggest you need to keep the fader up near zero because when you lower it you lose resolution. What they don't say is that the fader will be almost at the bottom before this can possibly happen.

This is one of those "myths" I felt I should address in a place where many folks would see it.

I did not mean to appear to be picking on YOU, so apologies if my post came across that way!
Selig Audio, LLC

S1GNL
Posts: 83
Joined: 31 Jan 2018

17 Sep 2018

selig wrote:
16 Sep 2018
My point is more about the comment suggesting that the lower the fader goes, the harder it is to make small changes. […] This is one of those "myths" I felt I should address in a place where many folks would see it. I did not mean to appear to be picking on YOU, so apologies if my post came across that way!
Oh no, I didn't feel offended or anything at all! I think your contributions to this forum are always high quality. You should consider writing a book about working with Reason. Seriously.

Yeah, the myth of "warm/hot mixing" should be banned for good. There are some good reasons for "micro" gain staging between plugins, but that's really it. And now that Props have finally added the multi-fader adjustment feature, there really won't be any reason for low fader positions anymore.

jlgrimes
Posts: 661
Joined: 06 Jun 2017

17 Sep 2018

Not really, as I don't really mix in Reason. I usually export tracks into a different DAW, and within that DAW I generally do try to gain stage.

That said, I do try to avoid clipping, but I usually just go right for the master fader if I'm clipping. For creating tracks/composing, gain staging is the last thing I'm worried about.

jlgrimes
Posts: 661
Joined: 06 Jun 2017

17 Sep 2018

pjeudy wrote:
01 May 2017
QVprod wrote: I fail to see the benefit of turning the Control Room level down instead of simply turning down your actual interface,
The benefit is that you can use the control room knob without having to reach for the interface. It saves you reaching a few meters, or however close/far your interface is. I used it like that and loved it.

Yea I remember the controversy with these videos on the old Propellerheads forum :o :shock:
Yeah the Ctrl Room out always threw my head for a loop.

I always thought its main use was to have a dedicated mix out that the Control Room typically monitors, so you can make custom rough mixes (using the send knobs) for performers who record with headphones and need a customized "performance" mix, different from what the engineer wants to hear. Some performers might want certain sounds muted, or certain sounds louder than they should be in the mix, so they can get a better vibe. The Ctrl Room out helps in this process, as you can have a separate mix from the headphone mix.


I think Reason allows you to temporarily toggle between the various "headphone" mixes and the main mix, to make it a bit easier to create headphone mixes using your control room monitors, which I think is its main benefit.

It's a common feature on most analog consoles that hasn't been carried over to most DAWs, with the exception of Reason. That said, most DAWs have some way to achieve this, but it is usually a bit harder.



That said, I never use the Control Room outs, as most artists rarely give me weird requests, but I keep thinking about actually trying it to see how much better it is.

QVprod
Moderator
Posts: 3488
Joined: 15 Jan 2015

17 Sep 2018

Just to add on: it's not strictly necessary, but I find mixing to be a lot easier when I do. That goes for any DAW (I don't typically mix full songs in Reason). It's tedious, but worth it. I tend to gain stage as I create in Reason, since patches are often incredibly loud.

QVprod
Moderator
Posts: 3488
Joined: 15 Jan 2015

17 Sep 2018

jlgrimes wrote:
17 Sep 2018
pjeudy wrote:
01 May 2017

The benefit is that you can use the control room knob without having to reach for the interface. It saves you reaching a few meters, or however close/far your interface is. I used it like that and loved it.

Yea I remember the controversy with these videos on the old Propellerheads forum :o :shock:
Yeah the Ctrl Room out always threw my head for a loop.

I always thought its main use was to have a dedicated mix out that the Control Room typically monitors, so you can make custom rough mixes (using the send knobs) for performers who record with headphones and need a customized "performance" mix, different from what the engineer wants to hear.
That's how I feel about that knob, as my interface is usually pretty close in reach for me. I'd rather not use a mouse to control my listening volume, but I can understand for those who have their interface placed elsewhere with no hardware control near them.

selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

17 Sep 2018

Just to clarify the terms here: "gain staging" means setting each gain stage to its nominal level.
https://en.wikipedia.org/wiki/Gain_stage
The goal of gain staging is to prevent distortion or added noise, which, while an important concern in analog systems, is all but a non-issue in digital systems.

Analog Dynamic Range
https://en.wikipedia.org/wiki/Dynamic_range
In the analog domain, you have limited dynamic range and can therefore clip each gain stage in a console or recording chain if signals get too hot, or you can add noise if signals are too low and need to be amplified later in the chain. So you "gain stage" to prevent this, meaning you set levels at every gain stage to their nominal level, which is typically a fairly specific level.
https://en.wikipedia.org/wiki/Nominal_level
Analog systems restrict your available dynamic range to the point where levels really do matter a lot, for the above reasons.

Digital Dynamic Range
In the digital domain, unlike with analog gear, you have a theoretical 1500 dB of dynamic range (assuming 32-bit floating point audio, as in Reason). This basically means the nominal level covers such a wide range as to be virtually non-existent or irrelevant. It also means you can theoretically choose any nominal level you want - but there are some restrictions…
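For anyone who wants to sanity-check that "1500 dB" figure, it falls straight out of the limits of standard IEEE-754 single-precision floats. This is a quick back-of-the-envelope sketch in Python I'm adding for illustration, not anything specific to Reason's internals:

```python
import math

# Largest finite and smallest positive *normal* IEEE-754 single-precision values.
largest = (2 - 2**-23) * 2**127
smallest = 2**-126

# Ratio between them, expressed in dB.
print(f"{20 * math.log10(largest / smallest):.0f} dB")  # ~1529 dB, i.e. the "1500 dB" quoted above
```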

Headroom
There are still a few places in a digital audio system where levels DO matter. One is the main output, where digital signals are converted to analog, and where you CAN clip. Luckily, to "gain stage" the main output, all you need to do is keep levels below clipping, leaving some amount of headroom depending on what you're doing. If you're recording raw audio, you keep levels around 10-12 dB below clipping (which leaves room for unexpected bursts of energy from performers, and leaves headroom in your mix for summing multiple audio channels).
If you are mixing, you can leave anywhere from 1-6 dB of headroom (depending on who you ask!). When mastering, folks still leave anywhere from 0.1 dB to 1 dB (or more) of headroom, again depending on who you ask and on the medium you are mastering for.
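As a toy illustration of what "leaving headroom" looks like in numbers (my own sketch, not anything from the mixer itself): a signal peaking at a quarter of full scale sits roughly 12 dB below clipping, which is in the ballpark of the tracking margin mentioned above.

```python
import numpy as np

def headroom_db(samples: np.ndarray) -> float:
    """Distance in dB between the sample peak and 0 dBFS.
    Illustrative only - it ignores inter-sample peaks."""
    peak = np.max(np.abs(samples))
    return float("inf") if peak == 0 else -20 * np.log10(peak)

# A 440 Hz test tone peaking at 0.25 of full scale.
t = np.arange(44100) / 44100
tone = 0.25 * np.sin(2 * np.pi * 440 * t)
print(f"{headroom_db(tone):.1f} dB of headroom")  # ~12 dB
```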

Nonlinear Processing in Digital Audio
Another place where levels matter is any nonlinear processor, but even then you have a WIDE range of acceptable levels. For example, on the Master Compressor, if you had an extremely low signal that never peaked above -30 dBFS, you would not be able to get ANY compression (gain reduction) at any setting. Or, at the other extreme, if you had a level that stayed well above 0 dBFS, you couldn't avoid excessive compression at any setting. In both cases the levels themselves are perfectly "legal" for a floating point digital audio system, but they would fall outside the nominal level of that compressor, and you would find it difficult if not impossible to set the compressor to give a useful result.
Similarly, with the MClass Compressor, between the lowest threshold of -36 dB and the maximum input gain of 12 dB, a signal would have to never exceed -48 dBFS before you would be unable to get any compression.
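To put that last bit of arithmetic into code form, here's a bare-bones static gain computer for a generic hard-knee compressor (my own sketch, not the MClass's actual detector or curve, and the 4:1 ratio is arbitrary), just to show that a peak below -48 dBFS can never cross a -36 dB threshold even with the full +12 dB of input gain:

```python
def gain_reduction_db(peak_dbfs: float, threshold_db: float = -36.0,
                      input_gain_db: float = 12.0, ratio: float = 4.0) -> float:
    """Static gain reduction of a generic hard-knee compressor (toy model)."""
    level = peak_dbfs + input_gain_db       # level as seen by the detector
    over = max(0.0, level - threshold_db)   # dB above threshold
    return over - over / ratio              # dB of gain reduction applied

print(gain_reduction_db(-49.0))  # 0.0  -> never crosses the threshold, no compression possible
print(gain_reduction_db(-20.0))  # 21.0 -> 28 dB over threshold at 4:1
```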

In my experience, it's rare to have levels so low or so high as to exceed the nominal level range of any digital device. And if you are using a consistent peak reference level for all signals, you'll never even have to think about the nominal level of any device or gain stage.

And the same can be said about the concept of gain staging, since every gain stage in Reason has 1500 dB available dynamic range!

This is why I don't use the old analog term "gain staging", and instead speak about adopting a consistent peak reference level for all signals. Folks are probably tired of hearing me blab on about this subject, but it keeps coming up again and again and IMO is worth repeating for those who have not had a chance to hear it!
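For the curious, here's roughly what "a consistent peak reference level" can look like in code, done offline rather than with the channel's gain knob. The -12 dBFS figure is just an example number I picked for the sketch, not a recommendation from this post:

```python
import numpy as np

def trim_to_reference(samples: np.ndarray, reference_dbfs: float = -12.0) -> np.ndarray:
    """Scale a clip so its absolute sample peak lands on reference_dbfs.
    The -12 dBFS default is an arbitrary example; the point is that every
    source in the session gets trimmed to the SAME reference."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples
    return samples * (10 ** (reference_dbfs / 20) / peak)

clip = np.random.uniform(-0.9, 0.9, 44100)      # stand-in for a bounced track
leveled = trim_to_reference(clip)
print(20 * np.log10(np.max(np.abs(leveled))))   # -12.0 (or extremely close)
```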

OK, I'll shut up now, just needed to get a little tech talk out of my system I guess… ;)
Selig Audio, LLC

provinceofnowhere
Posts: 53
Joined: 11 Apr 2018

19 Sep 2018

EnochLight wrote:
15 Sep 2018
provinceofnowhere wrote:
15 Sep 2018
Yes, it was a revelation when I figured this out. But... why is the gain knob right at the top of the channel?? So annoying having to keep scrolling between the gain knob and the meter when gain staging.

(am I doing it wrong?)
Thread resurrection - cool! :)

I wouldn’t say you’re doing it wrong - the gain knob is where it’s at because that’s where it’s found in its hardware counterpart (the SSL mixer series that Reason’s main mixer is based on).

But as EdGrip pointed out over a year ago, gain staging may be unnecessary in Reason (though still a good practice for some):

cheers.

Interesting thread.

deeplink
Competition Winner
Posts: 1073
Joined: 08 Jul 2020
Location: Dubai / Cape Town

14 Jul 2020

Another thread resurrection... It's still unclear to me why there is a narrative that says, "Leave X dB of headroom when sending to a mastering engineer". Most say 6 dB.

Why is this? If my final mix doesn't go over 0 dB, and there is no digital distortion, why can't the mastering engineer just turn all the tracks down by 6 dB when mastering it?
Get more Combinators at the deeplink website

EdwardKiy
Posts: 760
Joined: 02 Oct 2019

14 Jul 2020

It's only a strict rule if you're recording from or through analog gear, where you are dependent on your hardware's ability as a DAC (or ADC in this case) and how it handles overdrive. If you're doing it all 100% digital, all you need to know is not to peak at 0. Peaking at 0 will result in irreversible (for the mix engineer) damage, which you may not necessarily hear in your current mix. But there's a BUT: if you're only not hitting 0 because you applied a limiter, to the engineer it's almost as good as if you had let it peak. Applying any kind of processing on top of a sound that's been cut by a limiter will likely produce artifacts.

Recap: peaking (getting audio clipped) and limiting are a "no-no", but other than that, the "-6 dB rule" has only historical value.
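If you want a quick way to catch that clipping "no-no" before sending a bounce off, a rough check is just to count samples sitting at full scale. This is my own crude sketch, not a substitute for true-peak metering, and it won't catch inter-sample overs:

```python
import numpy as np

def clipping_report(samples: np.ndarray, ceiling: float = 1.0) -> dict:
    """Count samples at/above full scale and report the sample peak in dBFS."""
    peak = float(np.max(np.abs(samples)))
    return {
        "clipped_samples": int(np.sum(np.abs(samples) >= ceiling)),
        "peak_dbfs": float(20 * np.log10(peak)) if peak > 0 else float("-inf"),
    }

# A sine driven past full scale and hard-clipped - exactly what not to deliver.
mix = np.clip(1.2 * np.sin(2 * np.pi * np.arange(4800) / 48), -1.0, 1.0)
print(clipping_report(mix))   # plenty of flat-topped samples -> don't send this to mastering
```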

selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 Jul 2020

deeplink wrote:
14 Jul 2020
Another thread resurrection... It's still unclear to me why there is a narrative that says, "Leave X dB of headroom when sending to a mastering engineer". Most say 6 dB.

Why is this? If my final mix doesn't go over 0 dB, and there is no digital distortion, why can't the mastering engineer just turn all the tracks down by 6 dB when mastering it?
Mastering engineers DO know this, I'm 100% certain.

My guess is that no matter what they said about "no clipping", they still received tracks with clipping ("but it only clips a little bit"!). As with any "specification", there was a time when there was no spec and things didn't go well, so the "rule" was added to the spec to prevent whatever issue was occurring. Every line in a contract comes from some case where someone found a way around the existing "rules", so they had to add another one to cover those cases. ;)

In the end, still guessing here, it took telling folks to leave 6 dB before no clipping was delivered. Having worked with audio folks at all levels for over 40 years, and been at the receiving end of many mix and compilation projects over those same years, I have learned that no matter how specific you are there is ALWAYS someone who doesn't follow the specifications. Your only option is to draw any "line" as far as it takes to avoid the issue in the future!
Selig Audio, LLC

avasopht
Competition Winner
Posts: 3931
Joined: 16 Jan 2015

14 Jul 2020

Well, ....
(Gearslutz) Nordenstam wrote: I believe the worst case scenario is a sequence of 1010101101010101 - notice the flip in the middle.

It can look like this: [waveform image not shown]

Depending on the steepness of the reconstruction filter, the intersample (IS) peak can exceed +10 dBFS.
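For the curious, that worst-case pattern is easy to reproduce numerically. Here's a rough sketch using SciPy's FFT resampler as a stand-in for a reconstruction filter; the exact overshoot depends on the filter and on how long the alternating run is, so treat the printed number as illustrative rather than a spec:

```python
import numpy as np
from scipy.signal import resample

# Worst-case pattern from the quote: full-scale alternating samples
# with a single polarity flip in the middle.
x = np.array([1, -1] * 64 + [1, 1] + [-1, 1] * 64, dtype=float)

# 16x oversampling approximates what a DAC's reconstruction filter does.
y = resample(x, len(x) * 16)

print(20 * np.log10(np.max(np.abs(x))))  # 0.0  -> every sample sits exactly at full scale
print(20 * np.log10(np.max(np.abs(y))))  # well above 0 dBFS once reconstructed
```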

kinkujin
Posts: 206
Joined: 01 Mar 2018

20 Jul 2020

Newbie question ...
One thing I don’t see in this discussion is what is the rule of thumb about the first stage in the process, the initial instrument gain setting. Where should that be? As hot as possible? Whatever sounds good?

aeox
Competition Winner
Posts: 3222
Joined: 23 Feb 2017
Location: Oregon

20 Jul 2020

kinkujin wrote:
20 Jul 2020
Whatever sounds good?
Yep that's the one!

PhillipOrdonez
Posts: 3732
Joined: 20 Oct 2017
Location: Norway

20 Jul 2020

kinkujin wrote:
20 Jul 2020
Newbie question ...
One thing I don’t see in this discussion is what is the rule of thumb about the first stage in the process, the initial instrument gain setting. Where should that be? As hot as possible? Whatever sounds good?
Whatever sounds good relative to the other elements, which should already be at levels that have been thought out. In my case, I set the kick at a desired level and everything else is set around it.

selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

20 Jul 2020

kinkujin wrote:
20 Jul 2020
Newbie question ...
One thing I don’t see in this discussion is what is the rule of thumb about the first stage in the process, the initial instrument gain setting. Where should that be? As hot as possible? Whatever sounds good?
OMG, go back and read my posts in this thread, where I actually eventually apologize about always going on about setting levels consistently from the start.
My approach is to start with the same level you end up with, from the instrument or audio input level to the channel fader. There is no reason to have levels go up or down at any "stage". There are many reasons to keep levels consistent from start to finish (repeating myself, I'm sure), including ensuring fair A/B comparisons when adding processing, no level jumps if you decide to delete/bypass a processor later in the production, and, for all non-linear processors (dynamics/saturation etc.), already knowing how to set them based on level because you already know the level coming into them!

Happy to speak on this subject any time, if that's not already painfully obvious… ;)
Selig Audio, LLC

kinkujin
Posts: 206
Joined: 01 Mar 2018

21 Jul 2020

selig wrote:
20 Jul 2020
OMG, go back and read my posts in this thread, where I actually eventually apologize about always going on about setting levels consistently from the start. My approach is to start with the same level you end up with, from the instrument or audio input level to the channel fader. […]
Guess I'd better read and reread more carefully. Thanks Selig!
