Some questions on gain staging

Have an urge to learn, or a calling to teach? Want to share some useful Youtube videos? Do it here!
Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

09 May 2018

Ok, I have been ignoring the finer nuances of this concept for too long now. I hope some of you can help shed light on a few things. It's not like my mixes are clipping left and right but I still need to understand some things.


- Where is it best to place a gain control plugin for every instrument mixer channel? The "to device" Insert FX slot on every mixer channel in the rack?

- Producers often talk about setting their static levels at -10dB, -12dB or -18dB. Sometimes I also see people referring to these levels as dBFS, but is there a functional difference between -12dB and -12dBFS when looking at the peak meter in a DAW?

- Is there a difference between adjusting instrument volume knobs and input gain knobs external to the instruments when setting the static level? If so, how do I know when it's better to use one volume knob over the other?

- Some sounds don't always have static volume levels when holding a key or playing with varying velocity levels. Things like sweeps, release time, velocities, etc. can easily alter the volume over the duration of a sequencer bar. From the perspective of gain staging, should the static volume level be an average of these slight differences in volume or be based approximately around the loudest peak?

- Filter frequency or resonance knobs are commonly automated for adding builds and fades, but this can also alter the volume level being measured. How should the static level for gain staging purposes tackle such variances?

- Should gain staging compensate for volume added or removed during dynamics processing? For example, compression may add some volume and equalization may cut some volume. Does that require subsequent readjustment of the channel's gain plugin to compensate for the increase or loss in volume? Or is that why we want to have some headroom above our default static level, meaning that any volume changes happening during dynamics processing shouldn't be compensated for?


I've watched some videos and read some articles on this concept, which are good for basic understanding, but I always end up having the questions listed above. Thanks in advance and please keep it beginner friendly. :D

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

09 May 2018

Nielsen wrote:Ok, I have been ignoring the finer nuances of this concept for too long now. I hope some of you can help shed light on a few things. It's not like my mixes are clipping left and right but I still need to understand some things.

- Where is it best to place a gain control plugin for every instrument mixer channel? The "to device" Insert FX slot on every mixer channel in the rack?
Depends on what you’re doing. If you start with the level you want, and compensate when adding/subtracting gain at any one stage, you won’t need to adjust gain at the insert. I suggest setting levels from the earliest point that makes sense.
Nielsen wrote:- Producers often talk about setting their static levels at -10dB, -12dB or -18dB. Sometimes I also see people referring to these levels as dBFS, but is there a functional difference between -12dB and -12dBFS when looking at the peak meter in a DAW?
Decibels are relative, or ratios.
“-12 dB” means only “12 dB lower”, without saying “lower than what?”. OTOH, “-12 dBFS” says EXACTLY what level: “12 dB below Full Scale (Clipping)”.
That is why “FS” is used, to avoid any confusion!

The peak meters read in “decibels below full scale”, so on the meter it’s assumed to reference Full Scale/clipping (0 dBFS).
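If it helps to see the arithmetic, here is a minimal Python sketch of what a peak meter is reporting. It assumes floating point samples normalized so that ±1.0 is full scale - that normalization is an assumption of the sketch, not a statement about any particular DAW.

import math

def dbfs(peak_amplitude):
    # decibels relative to full scale, assuming 1.0 (or -1.0) is the clipping point
    return 20.0 * math.log10(abs(peak_amplitude))

print(dbfs(1.0))    # 0.0 dBFS - right at clipping
print(dbfs(0.25))   # about -12.0 dBFS, i.e. 12 dB below full scale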
Nielsen wrote:- Is there a difference between adjusting instrument volume knobs and input gain knobs external to the instruments when setting the static level? If so, how do I know when it's better to use one volume knob over the other?
IMO it’s better to hit your target level from the start, if only for consistency and simplicity. The only time it would functionally matter is if you used any dynamics/saturation etc. between the instrument and the Mix Channel, in which case you would only need to use Input Gain if you failed to compensate (or were unable to do so) for any gain changes created when adding the FX.
Nielsen wrote:- Some sounds don't always have static volume levels when holding a key or playing with varying velocity levels. Things like sweeps, release time, velocities, etc. can easily alter the volume over the duration of a sequencer bar. From the perspective of gain staging, should the static volume level be an average of these slight differences in volume or based on the loudest peak?
One primary goal with setting levels to a consistent reference is to avoid clipping the mix. With that in mind, you want to know the HIGHEST peak your track hits, even if it only hits that level once in the entire song. This is because it is THAT peak that would be the first to clip the master output if your overall levels were too high.
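As a rough illustration of "find the single highest peak" (the sample data below is made up, and real meters work on the actual audio - this is only a sketch):

import numpy as np

track = np.random.uniform(-0.3, 0.3, size=48000)   # hypothetical stand-in for one track

highest_peak = np.max(np.abs(track))                # the one sample that would clip first
print(20 * np.log10(highest_peak), "dBFS")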
Nielsen wrote:- Filter frequency or resonance knobs are commonly automated for adding builds and fades, but this can also alter the volume level being measured. How should the static level for gain staging purposes tackle such variances?
See above - the highest peak is the point to measure.
Nielsen wrote:- Should gain staging compensate for volume added or removed during dynamics processing? For example, compression may add some volume and equalization may cut some volume. Does that require subsequent readjustment of the gain plugin to compensate for the increase or loss in volume? Or is that why we want to have some headroom above our default static level, meaning that any volume changes happening during dynamics processing shouldn't be compensated for?
YES YES YES. This can be very helpful and is the second major reason to adopt a peak reference level. For one, it makes it easy to A/B any added effect (you ARE comparing before and after when adding any processing, right?). If the level changes a lot when bypassing an effect, you don’t really know what it’s adding (or not adding), since we can be fooled into thinking even a simple level change sounds “better” (if it’s louder). Also, if you decide to bail on an added FX, you can delete it and your levels won’t change - make sense?
Nielsen wrote: I've watched some videos and read some articles on this concept, which are good for basic understanding, but I always end up having the questions listed above. Thanks in advance and please keep it beginner friendly. :D
There are others here that may clarify or correct what I’ve written above, so don’t hesitate to ask further questions if my answers don’t make sense to you!
There are many paths to the top of the mountain…



Sent from some crappy device using Tapatalk
Selig Audio, LLC

User avatar
Timmy Crowne
Competition Winner
Posts: 357
Joined: 06 Apr 2017
Location: California, United States

09 May 2018

Selig’s explanation is spot-on.

I would only like to add that it’s great you’re asking about this stuff and it’s a good reminder for me. Understanding gain-staging principles is probably one of the best ways to ensure consistency across projects. When we don’t pay attention to gain, it’s easy to lose perspective and get frustrated that our new track doesn’t sound as good as our last one! The music could be solid but if levels are all over the place, especially with dynamic devices, we might make simple mistakes that compromise the mix.

Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

10 May 2018

selig wrote:
09 May 2018
Depends on what you’re doing. If you start with the level you want, and compensate when adding/subtracting gain at any one stage, you won’t need to adjust gain at the insert. I suggest setting levels from the earliest point that makes sense.
How would I want to connect a Selig Gain plugin in my rack merely for volume monitoring purposes? So far I have used the Insert FX slot found on every mixer channel and occasionally used the volume fader on the gain plugin to set my static levels. I just want to be sure that I'm not occupying an otherwise useful spot in the signal chain.
selig wrote:
09 May 2018
Decibels are relative, or ratios.
“-12 dB” means only “12 dB lower”, without saying “lower than what?”. OTOH, “-12 dBFS” says EXACTLY what level: “12 dB below Full Scale (Clipping)”.
That is why “FS” is used, to avoid any confusion!

The peak meters read in “decibels below full scale”, so on the meter it’s assumed to reference Full Scale/clipping (0 dBFS).
Ok, thanks.
selig wrote:
09 May 2018
IMO it’s better to hit your target level from the start, if only for consistency and simplicity. The only time it would functionally matter is if you used any dynamics/saturation etc. between the instrument and the Mix Channel, in which case you would only need to use Input Gain if you failed to compensate (or were unable to do so) for any gain changes created when adding the FX.
So adjusting volume on the instrument or on the gain plugin makes no real difference unless there's some effect making it necessary to compensate with a gain plugin? However, setting the static volume on the instrument volume knob is preferable for consistency and simplicity. Please correct me in case I misunderstood.
selig wrote:
09 May 2018
One primary goal with setting levels to a consistent reference is to avoid clipping the mix. With that in mind, you want to know the HIGHEST peak your track hits, even if it only hits that level once in the entire song. This is because it is THAT peak that would be the first to clip the master output if your overall levels were too high.
selig wrote:
09 May 2018
See above - the highest peak is the point to measure.
Ok, got it.
selig wrote:
09 May 2018
YES YES YES. This can be very helpful and is the second major reason to adopt a peak reference level. For one, it makes it easy to A/B any added effect (you ARE comparing before and after when adding any processing, right?). If the level changes a lot when bypassing an effect, you don’t really know what it’s adding (or not adding), since we can be fooled into thinking even a simple level change sounds “better” (if it’s louder). Also, if you decide to bail on an added FX, you can delete it and your levels won’t change - make sense?
Good point, but how would levels not change when bypassing an effect?

My real reason for asking is simply that I want to make sure that I am not compromising my dynamics processing by readjusting volumes back to the static level after performing compression, equalization, etc.
Timmy Crowne wrote:
09 May 2018
I would only like to add that it’s great you’re asking about this stuff and it’s a good reminder for me. Understanding gain-staging principles is probably one of the best ways to ensure consistency across projects. When we don’t pay attention to gain, it’s easy to lose perspective and get frustrated that our new track doesn’t sound as good as our last one! The music could be solid but if levels are all over the place, especially with dynamic devices, we might make simple mistakes that compromise the mix.
No problem, and you more or less summarize the concerns leading me to ask these questions.

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

10 May 2018

Nielsen wrote:
10 May 2018
How would I want to connect a Selig Gain plugin in my rack merely for volume monitoring purposes? So far I have used the Insert FX slot found on every mixer channel and occasionally used the volume fader on the gain plugin to set my static levels. I just want to be sure that I'm not occupying an otherwise useful spot in the signal chain.
I use Selig Gain to measure peak output level of instruments, and also to compensate for any added/subtracted gain from devices with no level controls (like Saturation Knob, for example). If you don't otherwise add or subtract gain with any added device in your signal flow, you won't need to measure at any other point. I often drag a single Gain around to "spot-measure" at various places if I need to check levels.
Nielsen wrote:
10 May 2018
So adjusting volume on the instrument or on the gain plugin makes no real difference unless there's some effect making it necessary to compensate with a gain plugin? However, setting the static volume on the instrument volume knob is preferable for consistency and simplicity. Please correct me in case I misunderstood.
Not exactly. Example: if you patch a drum machine into a compressor, carefully set up that compressor, then change the output level of the drum machine, you will change the way the compressor is reacting to the drum machine, thus changing the amount of compression.
You will THEN need to go BACK to the compressor to adjust the input or threshold to correct for the level change. Same for any device that reacts differently to input levels, such as a gate/expander, saturation, or distortion device (or any device that includes this effect).
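To put a number on that, here is a toy static compression curve in Python - the threshold and ratio are invented for illustration and don't correspond to any specific device - showing how the same settings produce more gain reduction when the incoming level goes up:

def gain_reduction_db(input_peak_dbfs, threshold_dbfs=-20.0, ratio=4.0):
    # how many dB a peak above the threshold gets pulled down
    over = input_peak_dbfs - threshold_dbfs
    if over <= 0:
        return 0.0
    return over - over / ratio

print(gain_reduction_db(-12.0))   # 6.0 dB of gain reduction
print(gain_reduction_db(-6.0))    # 10.5 dB - same settings, noticeably more compression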

Small level changes may not affect the results, but IMO it's better to set levels at the source and then IF necessary to make later adjustments at the END of your signal path. Probably the best place other than the fader to make final adjustments is in the channel insert IF it is in the default (post EQ/Dynamics) position, and IF your gain device is the LAST in the chain. Otherwise, just use the fader since that will be "post everything" in the instrument signal path.

Sometimes you cannot avoid changing levels "upstream" from a compressor, such as adjusting a kick drum level that feeds a master drum bus with a compressor on it. In these cases you are armed with the knowledge that your changes MAY affect the total amount of compression, and to check the compressor settings to make sure they are still doing what you expect.

Nielsen wrote:
10 May 2018
Good point, but how would levels not change when bypassing an effect?

My real reason for asking is simply that I want to make sure that I am not compromising my dynamics processing by readjusting volumes back to the static level after performing compression, equalization, etc.
We are talking peak levels here - remember when we speak of measuring levels we MUST specify the type of metering being used. You cannot simply say "the snare was -12 dBFS" because we don't know if you mean peak, VU, RMS, PPM, or some other type of metering.

With regard to peak levels, there is ONE and only one value that represents the maximum peak achieved during playback for any point you measure. If you measure a peak level of -12 dBFS then make an adjustment that changes the level to -6 dBFS, you simply subtract 6 dB to return the peak level to -12 dBFS.
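The bookkeeping really is that simple - a two-line sketch (plain arithmetic, nothing DAW specific):

TARGET_DBFS = -12.0

def trim_needed(measured_peak_dbfs, target=TARGET_DBFS):
    # dB of cut (negative) or boost (positive) to bring a peak back to the reference
    return target - measured_peak_dbfs

print(trim_needed(-6.0))    # -6.0 -> pull the gain device down 6 dB
print(trim_needed(-14.5))   # 2.5 -> push it up 2.5 dB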

The reason I use peak levels in this case is because of the goal of keeping the final output from clipping. It's the peak levels that clip first (obvious, when you think about it), so it's the peak levels that matter when trying to keep track of your overall output/mix level. If having a discussion of perceived loudness (a different subject completely), we would use other types of metering and have different concerns.

I have found that if you keep all individual channels peaking around -12 dBFS, your mix will tend to peak around -6 to -3 dBFS (which is the recommended level according to many mastering engineers). When mixing, some faders will stay up at unity (0 dB) but more will come down, with the end result of a mix that won't clip or even come close in most cases. The advantage to this approach is I can focus more on mixing and not on chasing faders to prevent clipping the output. Other advantages include keeping all levels around the point expected by devices that are level dependent (non-linear with regards to level, such as dynamics/saturation/distortion), which makes setting these devices easier/quicker because they always see around the same level. Also, if you keep all peaks around -12 dBFS on all channels, you can quickly see whether any added FX is adding/subtracting gain or not!

The main goal is to adopt a system that allows you to work quickly and get expected results, thus minimizing unexpected detours and "mix-fixing" side trips.

In other words: "More mixing and less fixing".
:)
Selig Audio, LLC

User avatar
nooomy
Posts: 543
Joined: 16 Jan 2015

10 May 2018

Nielsen wrote:
09 May 2018
Ok, I have been ignoring the finer nuances of this concept for too long now. I hope some of you can help shed light on a few things. It's not like my mixes are clipping left and right but I still need to understand some things.


- Where is it best to place a gain control plugin for every instrument mixer channel? The "to device" Insert FX slot on every mixer channel in the rack?

- Producers often talk about setting their static levels at -10dB, -12dB or -18dB. Sometimes I also see people referring to these levels as dBFS, but is there a functional difference between -12dB and -12dBFS when looking at the peak meter in a DAW?

- Is there a difference between adjusting instrument volume knobs and input gain knobs external to the instruments when setting the static level? If so, how do I know when it's better to use one volume knob over the other?

- Some sounds don't always have static volume levels when holding a key or playing with varying velocity levels. Things like sweeps, release time, velocities, etc. can easily alter the volume over the duration of a sequencer bar. From the perspective of gain staging, should the static volume level be an average of these slight differences in volume or be based approximately around the loudest peak?

- Filter frequency or resonance knobs are commonly automated for adding builds and fades, but this can also alter the volume level being measured. How should the static level for gain staging purposes tackle such variances?

- Should gain staging compensate for volume added or removed during dynamics processing? For example, compression may add some volume and equalization may cut some volume. Does that require subsequent readjustment of the channel's gain plugin to compensate for the increase or loss in volume? Or is that why we want to have some headroom above our default static level, meaning that any volume changes happening during dynamics processing shouldn't be compensated for?


I've watched some videos and read some articles on this concept, which are good for basic understanding, but I always end up having the questions listed above. Thanks in advance and please keep it beginner friendly. :D
Gain staging is not relevant in the realm of digital music production; it is a technique used when you have analog gear. It is a myth that you need to gain stage in DAWs.

Just use your ears instead of your eyes. Experiment with your different gear; there is no right or wrong when it comes to music production. You can put the gain control plugin wherever you want or you can just skip it.

Music production is an art form and not a…
Nielsen wrote: Should gain staging compensate for volume added or removed during dynamics processing? For example, compression may add some volume and equalization may cut some volume. Does that require subsequent readjustment of the channel's gain plugin to compensate for the increase or loss in volume? Or is that why we want to have some headroom above our default static level, meaning that any volume changes happening during dynamics processing shouldn't be compensated for?
Experiment; there is no right or wrong. If you want to compensate for the volume changes, do it, but remember to try not to.
What sounds best? That is the most important thing.

General rules that are good:
Set the volume of the instrument to where it sounds best
Use as few compressors, EQs and other effects as possible; try instead to find a good sample or sound. It's better to spend 30 minutes finding the right sample/patch than spending 30 minutes trying to fix it with your compressor or EQ.


When I read your text I get the feeling that you are overthinking it. I would recommend not thinking about gain staging at all and just focusing on listening to the mix, and setting the volume of the different tracks to where they sound best.

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

10 May 2018

nooomy wrote:Gain staging is not relevant in the realm of digital music production; it is a technique used when you have analog gear. It is a myth that you need to gain stage in DAWs.

Just use your ears instead of your eyes. Experiment with your different gear; there is no right or wrong when it comes to music production. You can put the gain control plugin wherever you want or you can just skip it.
There is no right and wrong, but there are general rules! [emoji6]

I totally agree gain staging is a holdover from analog gear, and I typically make that point in posts like this (but forgot to do so this time). Not one post on “gain staging” in a DAW actually mentions any gain stage techniques, mainly because they are not necessary!

I prefer to use terms like “a consistent peak reference level” for all tracks instead of “gain staging” when speaking about working in all-digital systems. There is still a need to keep levels around the nominal levels expected by non-linear devices such as dynamics etc., so it’s still very relevant to be well aware of exactly what level your audio is at.

And as long as it’s possible to clip your outputs, that’s one more reason to be well aware of all levels at all times IMO.


Sent from some crappy device using Tapatalk
Selig Audio, LLC

Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

11 May 2018

selig wrote:
10 May 2018
I use Selig Gain to measure peak output level of instruments, and also to compensate for any added/subtracted gain from devices with no level controls (like Saturation Knob, for example). If you don't otherwise add or subtract gain with any added device in your signal flow, you won't need to measure at any other point. I often drag a single Gain around to "spot-measure" at various places if I need to check levels.
Dragging the plugin around is something I haven't thought about. Probably because I have developed a habit of using the gain plugin to set my static levels on individual channels. I'll try to set the initial volumes at instrument level going forward.
selig wrote:
10 May 2018
Not exactly. Example: if you patch a drum machine into a compressor, carefully set up that compressor, then change the output level of the drum machine, you will change the way the compressor is reacting to the drum machine, thus changing the amount of compression.
You will THEN need to go BACK to the compressor to adjust the input or threshold to correct for the level change. Same for any device that reacts differently to input levels, such as a gate/expander, saturation, or distortion device (or any device that includes this effect).

Small level changes may not affect the results, but IMO it's better to set levels at the source and then IF necessary to make later adjustments at the END of your signal path. Probably the best place other than the fader to make final adjustments is in the channel insert IF it is in the default (post EQ/Dynamics) position, and IF your gain device is the LAST in the chain. Otherwise, just use the fader since that will be "post everything" in the instrument signal path.

Sometimes you cannot avoid changing levels "upstream" from a compressor, such as adjusting a kick drum level that feeds a master drum bus with a compressor on it. In these cases you are armed with the knowledge that your changes MAY affect the total amount of compression, and to check the compressor settings to make sure they are still doing what you expect.
That makes sense. I should probably add that it's very rare that I compress and equalize for effect. For example, I use the channel compressors on the main mixer to balance unstable channels where the dynamic range doesn't benefit the mix. Same goes for equalization, meaning that I predominantly use the mixer's channel EQ to ensure more breathing space and less mud. I assume this doesn't change any of the advice you've given on setting initial volumes at the source and to always compensate for any volume changes post dynamics?
selig wrote:
10 May 2018
With regard to peak levels, there is ONE and only one value that represents the maximum peak achieved during playback for any point you measure. If you measure a peak level of -12 dBFS then make an adjustment that changes the level to -6 dBFS, you simply subtract 6 dB to return the peak level to -12 dBFS.
And it's in a situation like this where the gain plugin usually would come into play to compensate, right?
selig wrote:
10 May 2018
The reason I use peak levels in this case is because of the goal of keeping the final output from clipping. It's the peak levels that clip first (obvious, when you think about it), so it's the peak levels that matter when trying to keep track of your overall output/mix level. If having a discussion of perceived loudness (a different subject completely), we would use other types of metering and have different concerns.

I have found that if you keep all individual channels peaking around -12 dBFS, your mix will tend to peak around -6 to -3 dBFS (which is the recommended level according to many mastering engineers). When mixing, some faders will stay up at unity (0 dB) but more will come down, with the end result of a mix that won't clip or even come close in most cases. The advantage to this approach is I can focus more on mixing and not on chasing faders to prevent clipping the output. Other advantages include keeping all levels around the point expected by devices that are level dependent (non-linear with regards to level, such as dynamics/saturation/distortion), which makes setting these devices easier/quicker because they always see around the same level. Also, if you keep all peaks around -12 dBFS on all channels, you can quickly see whether any added FX is adding/subtracting gain or not!

The main goal is to adopt a system that allows you to work quickly and get expected results, thus minimizing unexpected detours and "mix-fixing" side trips.

In other words: "More mixing and less fixing".
:)
I have more or less noticed the same benefits when setting the static level at -12dBFS. Summary much appreciated.
nooomy wrote:
10 May 2018
Gain staging is not relevant in the realm of digital music production; it is a technique used when you have analog gear. It is a myth that you need to gain stage in DAWs.

Just use your ears instead of your eyes. Experiment with your different gear; there is no right or wrong when it comes to music production. You can put the gain control plugin wherever you want or you can just skip it.
It's not like I'm trying to tame the signal-to-noise ratio. I'm just trying to ensure that I'm approaching the static level technique in a way that makes sense, for the sake of consistency, as it really helps avoid situations where one production might not sound as good as the next one.
nooomy wrote:
10 May 2018
General rules that are good:
Set the volume of the instrument to where it sounds best
Use as few compressors, EQs and other effects as possible; try instead to find a good sample or sound. It's better to spend 30 minutes finding the right sample/patch than spending 30 minutes trying to fix it with your compressor or EQ.
Like I said above in this post, it's rare that I use compressors and equalizers for effect. All my questions here revolve around not compromising things as I scrutinize how every single channel sits in the mix. Thanks anyway.
nooomy wrote:
10 May 2018
When I read your text I get the feeling that you are overthinking it. I would recommend not thinking about gain staging at all and just focusing on listening to the mix, and setting the volume of the different tracks to where they sound best.
Maybe I am overthinking it but I really prefer to approach every project with a safety net. Gain staging or static level monitoring doesn't hurt one bit in the digital realm. In fact, it becomes easier to catch the mixing culprits before they arise and different productions will steer toward consistent outcomes. In my opinion there are few things more annoying in music production than realizing early mistakes at the end of the road. Setting static volume levels doesn't necessarily mean that ears aren't being used.

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

11 May 2018

My answer is "yes" to your two questions above.

Think of setting levels as you would have if recording to tape (analog or digital). Your "record" levels are your "static" levels, and should all fall within the confines of the recording medium.

But since our "medium" is now 32 bit floating point audio (speaking of software instruments, not external audio which is confined by your audio hardware specs), there is less need to conform to that standard. However, there is still great value in having an awareness and control over your audio levels IMO.
Selig Audio, LLC

EdGrip
Posts: 2343
Joined: 03 Jun 2016

11 May 2018

I see you, Nooomy, with your Afghan Girl eye. ;)


User avatar
dioxide
Posts: 1779
Joined: 15 Jul 2015

11 May 2018

This is a good topic. I've recently been experimenting with EQ boosts, as generally people advise cutting only. The genres I make were typically made by amateurs and semi-amateurs on consumer gear, so there isn't the good practice that better studio engineers have. I'm kind of surprised at how low my source sounds need to be in order to take advantage of both boosting and cutting. The Reason SSL mixer can boost the signal by huge amounts since the frequencies are sweepable. Maybe a more amateur, consumer-style mixer EQ is better suited to boosting, as typically only the mid frequency can be adjusted; there is a lot of scope to cause clipping with an EQ as versatile as the SSL.

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

11 May 2018

dioxide wrote:This is a good topic. I've recently been experimenting with EQ boosts, as generally people advise cutting only. The genres I make were typically made by amateurs and semi-amateurs on consumer gear, so there isn't the good practice that better studio engineers have. I'm kind of surprised at how low my source sounds need to be in order to take advantage of both boosting and cutting. The Reason SSL mixer can boost the signal by huge amounts since the frequencies are sweepable. Maybe a more amateur, consumer-style mixer EQ is better suited to boosting, as typically only the mid frequency can be adjusted; there is a lot of scope to cause clipping with an EQ as versatile as the SSL.
In years of sitting behind some great engineers, I’ve never seen one that cuts only. You boost when you need to boost, and cut when you need to cut.


Sent from some crappy device using Tapatalk
Selig Audio, LLC

Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

13 May 2018

selig wrote:
11 May 2018
My answer is "yes" to your two questions above.
The advice to set the static level on the instrument does, however, make me wonder why I frequently see DAW users reaching for the channel gain knobs found at the very top of the mixer console when setting static levels. What's the catch compared to the methods we've already discussed (gain plugin vs. instrument volume knobs for static levels)?
selig wrote:
11 May 2018
Probably the best place other than the fader to make final adjustments is in the channel insert IF it is in the default (post EQ/Dynamics) position, and IF your gain device is the LAST in the chain. Otherwise, just use the fader since that will be "post everything" in the instrument signal path.
I'd like to return to this point for a second. Are you saying it doesn't really matter whether post dynamics volume compensation is performed with a gain plugin connected to the channel's insert slot, or with the channel fader?

Which particular default position were you referring to and how do I ensure the gain plugin is last in the chain?

I just tried compensating for a 4 dB increase in the mid-high region of one channel. Upon subtracting the level back to -12dBFS, using the gain plugin auto-routed into the channel's to device insert FX slot, the balance fell apart. Probably no wonder since all frequency regions were affected by the compensation. Or am I totally missing something?
nooomy wrote:
10 May 2018
Set the volume of the instrument to where it sounds best
Probably a stupid question, but do instrument volume knobs alter the sound in any subtle way? Or was this simply referring to the dB level that sounds best irrespective of predefined static levels?

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 May 2018

Nielsen wrote:
selig wrote:
11 May 2018
My answer is "yes" to your two questions above.
The advice to set the static level on the instrument does, however, make me wonder why I frequently see DAW users reaching for the channel gain knobs found at the very top of the mixer console when setting static levels. What's the catch compared to the methods we've already discussed (gain plugin vs. instrument volume knobs for static levels)?
As a long time SSL user, the only time I used the Channel Input trim was in the rare case where it was quicker to just trim the gain instead of automating the gain up/down. It was a “cheat” in some ways, but did the job.

The only time I use the Input Gain in Reason is on a bus channel IF I feel the need to trim the entire bus down prior to actually mixing.

That’s not to say there’s anything wrong with using the Input Gain knob, it’s just that if you set your levels from the source there’s no need!
Nielsen wrote:
selig wrote:
11 May 2018
Probably the best place other than the fader to make final adjustments is in the channel insert IF it is in the default (post EQ/Dynamics) position, and IF your gain device is the LAST in the chain. Otherwise, just use the fader since that will be "post everything" in the instrument signal path.
I'd like to return to this point for a second. Are you saying it doesn't really matter whether post dynamics volume compensation is performed with a gain plugin connected to the channel's insert slot, or with the channel fader?
Technically it doesn’t matter, but I prefer to have everything where I want it before it hits the fader.

Now there’s no technical reason why you couldn’t have levels all over the place in a floating point system (at least to a degree), but I feel I can work so much quicker with consistent levels and not worry about clipping etc. It’s about making the job easier rather than any technical or sonic reason.
Nielsen wrote: Which particular default position were you referring to and how do I ensure the gain plugin is last in the chain?
The default routing, which has the insert last (after dynamics and EQ). You ensure the gain is the last device by putting it after everything else in the insert (not an issue if there’s nothing else in the insert), or to put it another way, it should have its output connected to the “From Devices” jacks in the insert section.
Nielsen wrote: I just tried compensating for a 4 dB increase in the mid-high region of one channel. Upon subtracting the level back to -12dBFS, using the gain plugin auto-routed into the channel's to device insert FX slot, the balance fell apart. Probably no wonder since all frequency regions were affected by the compensation. Or am I totally missing something?
Are you saying you measured the level after adding EQ and it was at -8dBFS, then lowered it back down to -12dBFS? Or that you added 4 dB of EQ (which is not going to add 4 dB overall gain in most cases)?

I would be surprised if adding 4 dB EQ didn’t change the balance in the first place - that much EQ can make a track that is lost in the mix come right out. So I’m not sure how you retained the mix balance when adding that much EQ, but lost the balances when compensating, unless the added EQ didn’t actually add 4 dB gain.
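For anyone who wants to see that on paper, here's a made-up two-sine example (the levels and frequencies are invented purely for illustration): boosting only the upper component by 4 dB moves the overall peak by roughly 1 dB, not 4.

import numpy as np

fs = 48000
t = np.arange(fs) / fs
low  = 0.20 * np.sin(2 * np.pi * 100 * t)     # energy below the boosted band
high = 0.05 * np.sin(2 * np.pi * 3000 * t)    # the "high mid" content being boosted

def peak_dbfs(x):
    return 20 * np.log10(np.max(np.abs(x)))

boost = 10 ** (4 / 20)                        # +4 dB applied only to the high component
print(peak_dbfs(low + high))                  # about -12 dBFS before
print(peak_dbfs(low + boost * high))          # about -11 dBFS after - roughly a 1 dB change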

In case I’m totally wrong, then the solution is to break the rules and just do what sounds right (unless it makes your mix clip!). Despite a slight tendency towards being OCD when mixing, I still sometimes just do whatever works.
Nielsen wrote: Probably a stupid question, but do instrument volume knobs alter the sound in any subtle way? Or was this simply referring to the dB level that sounds best irrespective of predefined static levels?
I can say no gain control affects the sound. But keep in mind that if you have any non-linear process (dynamics/saturation/etc) inserted after the gain control, you CAN affect the sound of THAT processor when adjusting levels “upstream”.


Sent from some crappy device using Tapatalk
Selig Audio, LLC

User avatar
motuscott
Posts: 3418
Joined: 16 Jan 2015
Location: Contest Weiner

14 May 2018

Selig is a ReasonTalk treasure.
Who’s using the royal plural now baby? 🧂

Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

14 May 2018

selig wrote:
13 May 2018
As a long time SSL user, the only time I used the Channel Input trim was in the rare case where it was quicker to just trim the gain instead of automating the gain up/down. It was a “cheat” in some ways, but did the job.

The only time I use the Input Gain in Reason is on a bus channel IF I feel the need to trim the entire bus down prior to actually mixing.

That’s not to say there’s anything wrong with using the Input Gain knob, it’s just that if you set your levels from the source there’s no need!
Ok, it basically comes down to using the method that benefits the preferred workflow.
selig wrote:
13 May 2018
Technically it doesn’t matter, but I prefer to have everything where I want it before it hits the fader.
Same here, but good to know nonetheless.
selig wrote:
13 May 2018
Now there’s no technical reason why you couldn’t have levels all over the place in a floating point system (at least to a degree), but I feel I can work so much quicker with consistent levels and not worry about clipping etc. It’s about making the job easier rather than any technical or sonic reason.
That's how my first productions came about, but outcomes varied too much from track to track. That's one of the main reasons I prefer to control levels at every stage, in addition to reducing the risk of clipping of course. Also, developing a second-nature approach to mixing that works should eventually help focus efforts on writing music while keeping technical concerns to a minimum.
selig wrote:
13 May 2018
The default routing, which has the insert last (after dynamics and EQ). You ensure the gain is the last device by putting it after everything else in the insert (not an issue if there’s nothing else in the insert), or to put it another way, it should have its output connected to the “From Devices” jacks in the insert section.
Gain plugin output to Insert FX "From Device" for last in chain, noted. I have already mentioned the "To Device" Insert FX slot a few times above, but I now see that it's used for the device positioned first in the Insert FX chain. Really helpful, much appreciated.
selig wrote:
13 May 2018
Are you saying you measured the level after adding EQ and it was at -8dBFS, then lowered it back down to -12dBFS? Or that you added 4 dB of EQ (which is not going to add 4 dB overall gain in most cases)?
I meant adding 4 dB in the high mid band of one channel. The gain plugin might then measure peaks around -10 dBFS instead of -12 dBFS. So upon subtracting the level back down to -12 dBFS post equalization (using the gain plugin last in the Insert FX chain for trim and measurement), I find that the processed sound can become somewhat subdued in the mix, especially during parts where the -12 dBFS peaks don't stack up as much as they do around the transients of rapid key changes. Perhaps I can control this better by automating the trim fader on Selig Gain?

Meanwhile, I have found that it helps to rebuild the mix by reintroducing each individual channel fader across the board, but maybe this step is to be expected after gain compensating for dynamics processing on a single channel?

The "high mid scenario" described here is only an example. I've experienced something similar after compensating for volume changes in other frequency bands too.

By the way I forgot to ask initially, should I also compensate for the volume changes I measure after high pass and low pass filtering? I'm rather convinced I probably should, but I just want to be sure that all corners are covered.
selig wrote:
13 May 2018
I would be surprised if adding 4 dB EQ didn’t change the balance in the first place - that much EQ can make a track that is lost in the mix come right out. So I’m not sure how you retained the mix balance when adding that much EQ, but lost the balances when compensating, unless the added EQ didn’t actually add 4 dB gain.
I tend to agree, but I find that some sounds will breathe more freely when boosting the high mid band by about 4 dB in that region. Sometimes less can do it.
selig wrote:
13 May 2018
In case I’m totally wrong, then the solution is to break the rules and just do what sounds right (unless it makes your mix clip!). Despite a slight tendency towards being OCD when mixing, I still sometimes just do whatever works.
Right.
selig wrote:
13 May 2018
I can say no gain control affects the sound. But keep in mind that if you have any non-linear process (dynamics/saturation/etc) inserted after the gain control, you CAN affect the sound of THAT processor when adjusting levels “upstream”.
Ok, thanks for clarifying.

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 May 2018

Nielsen wrote:
selig wrote:
13 May 2018
Are you saying you measured the level after adding EQ and it was at -8dBFS, then lowered it back down to -12dBFS? Or that you added 4 dB of EQ (which is not going to add 4 dB overall gain in most cases)?
I meant adding 4 dB in the high mid band of one channel. The gain plugin might then measure peaks around -10 dBFS instead of -12 dBFS. So upon subtracting the level back down to -12 dBFS post equalization (using the gain plugin last in the Insert FX chain for trim and measurement), I find that the processed sound can become somewhat subdued in the mix, especially during parts where the -12 dBFS peaks don't stack up as much as they do around the transients of rapid key changes. Perhaps I can control this better by automating the trim fader on Selig Gain?
Not trying to be a smart ass with this response, but when you make an adjustment that causes a track to not be loud enough in the mix, turn it up (with the fader, which is where you “balance” your mix IMO).

Sometimes if you make an EQ change that adds gain, we tend to feel it sounds better - because it’s louder. But once you return it to its original level you’re getting a more accurate idea of whether the EQ change is actually helping or just making the track louder.

I’m not necessarily suggesting this is what happened in your case (too many variables, plus I’m not hearing your mix so can’t directly comment). I’m only sharing what I’ve observed to give you more data points - if what worked for me doesn’t work for you, then keep looking for other solutions. There are MANY ways to achieve the same general result IMO.
Nielsen wrote: Meanwhile, I have found that it helps to rebuild the mix by reintroducing each individual channel fader across the board, but maybe this step is to be expected after gain compensating for dynamics processing on a single channel?

The "high mid scenario" described here is only an example. I've experienced something similar after compensating for volume changes in other frequency bands too.

By the way I forgot to ask initially, should I also compensate for the volume changes I measure after high pass and low pass filtering? I'm rather convinced I probably should, but I just want to be sure that all corners are covered.
In general I’d say yes. In my workflow, filtering comes well before EQ since it’s more “clean up” work which I tend to do earlier in the process. I consider clean up work to be more generic, more along the lines of things you would do no matter what other elements were present in the mix. EQ, in contrast, in my workflow is done to help parts fit better with each other, which is more contextual and cannot always be determined until all mix elements are prepped and sitting at good basic levels. In other words, I use faders to get the best possible mix BEFORE I resort to EQ (in most cases).
Nielsen wrote: I tend to agree, but I find that some sounds will breathe more freely when boosting the high mid band by about 4 dB in that region. Sometimes less can do it.
I wasn’t meaning to say there’s anything wrong with that amount of EQ - only suggesting 4 dB of EQ rarely means 4 dB of additional gain overall. Plus, I was commenting that adding any amount of a wide band can give a similar effect to increasing the overall level, that is to say it can affect the mix balances.

Along those lines, when I’m done balancing faders and still feel one track is “lost” in the mix, I often turn to EQ to find the one band that when boosted, has the effect of bringing that track forward (but won’t add as much gain as increasing the fader would add). Make sense?
[emoji3]


Sent from some crappy device using Tapatalk
Selig Audio, LLC

Nielsen
Posts: 100
Joined: 05 Nov 2017
Location: Denmark

14 May 2018

selig wrote:
14 May 2018
Not trying to be a smart ass with this response, but when you make an adjustment that causes a track to not be loud enough in the mix, turn it up (with the fader, which is where you “balance” your mix IMO)
Then why compensate with the gain plugin to begin with? Just to evaluate the volume change at the static level? Otherwise, aren't we basically adding a redundant step by doing the following?

- Let's say 4 dB are added in the high mid EQ band.
- Gain plugin now measures -10dBFS peaks instead of -12 dBFS (static level)
- Gain plugin trim is used to return peak level to -12 dBFS
- Processed sound now sits poorly in the mix due to lowering the entire frequency spectrum by 2 dB
- Turn up the channel volume fader to compensate again

Aren't steps three and five basically two unnecessary steps, since the result ends up being roughly the same as if both were skipped? Or am I way off here?
selig wrote:
14 May 2018
Sometimes if you make an EQ change that adds gain, we tend to feel it sounds better - because it’s louder. But once you return it to its original level you’re getting a more accurate idea of whether the EQ change is actually helping or just making the track louder.
True, but I fail to see the alternative when the gain change is isolated to a single frequency band. Overall volume and gain adjustments will change the level of the entire frequency spectrum, but boosting or cutting one frequency band can improve its presence within the mix.
selig wrote:
14 May 2018
In general I’d say yes. In my workflow, filtering comes well before EQ since it’s more “clean up” work which I tend to do earlier in the process. I consider clean up work to be more generic, more along the lines of things you would do no matter what other elements were present in the mix. EQ, in contrast, in my workflow is done to help parts fit better with each other, which is more contextual and cannot always be determined until all mix elements are prepped and sitting at good basic levels. In other words, I use faders to get the best possible mix BEFORE I resort to EQ (in most cases).
Agreed, except I usually don't touch faders all that much until I'm happy with my filtering, stereo image, compression, gates and EQ. I take this approach to preserve fader precision around unity gain until the very end. Perhaps this is why the gain compensation technique is causing me a bit of trouble with the balance post equalization?
selig wrote:
14 May 2018
I wasn’t meaning to say there’s anything wrong with that amount of EQ - only suggesting 4 dB of EQ rarely means 4 dB of additional gain overall. Plus, I was commenting that adding any amount of a wide band can give a similar effect to increasing the overall level, that is to say it can affect the mix balances.
Just to clear any doubt - I didn't intend to suggest that a 4 dB change in one frequency band equals a 4 dB change across the entire frequency spectrum. I always look to the master fader or gain plugin to confirm overall volume changes.
selig wrote:
14 May 2018
Along those lines, when I’m done balancing faders and still feel one track is “lost” in the mix, I often turn to EQ to find the one band that when boosted, has the effect of bringing that track forward (but won’t add as much gain as increasing the fader would add). Make sense?
It makes sense. Am I correct in understanding that you're almost using equalization as a last resort? I get the idea that I'm boosting or cutting frequencies where you typically would first balance with faders. I'm not suggesting one method is better than the other, but merely that I probably shouldn't compensate for the gain change post equalization unless I'm prepared to compensate once more with the channel fader.
Last edited by Nielsen on 14 May 2018, edited 3 times in total.

jimmyklane
Posts: 740
Joined: 16 Apr 2018

14 May 2018

selig wrote:
10 May 2018
nooomy wrote:Gain staging is not relevant in the realm of digital music production; it is a technique used when you have analog gear. It is a myth that you need to gain stage in DAWs.

Just use your ears instead of your eyes. Experiment with your different gear; there is no right or wrong when it comes to music production. You can put the gain control plugin wherever you want or you can just skip it.
There is no right and wrong, but there are general rules! [emoji6]

I totally agree gain staging is a holdover from analog gear, and I typically make that point in posts like this (but forgot to do so this time). Not one post on “gain staging” in a DAW actually mentions any gain stage techniques, mainly because they are not necessary!

I prefer to use terms like “a consistent peak reference level” for all tracks instead of “gain staging” when speaking about working in all-digital systems. There is still a need to keep levels around the nominal levels expected by non-linear devices such as dynamics etc., so it’s still very relevant to be well aware of exactly what level your audio is at.

And as long as it’s possible to clip your outputs, that’s one more reason to be well aware of all levels at all times IMO.


Sent from some crappy device using Tapatalk
There is still nothing wrong with setting the input gains on the “SSL” to drive the channel to a -12 peak point, you simply have to think in dBFS instead of dBu...+4dBu means nothing inside Reason, but if you’re going to come in and out of the program to hardware, then a proper reference level is actually vital. I’m set to +4dBu = -12 dBFS, and use the input gains to get each channel to a level so that when I use the Aux sends to hardware outputs the default -12 setting actually sends +4dBu to my reverbs and processors, etc.
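Under that calibration the conversion is just a fixed offset - a small sketch assuming the +4 dBu = -12 dBFS alignment described above (so 0 dBFS lands at +16 dBu; your interface may well be calibrated differently):

DBU_AT_FULL_SCALE = 16.0    # follows from +4 dBu = -12 dBFS

def dbu_to_dbfs(level_dbu):
    return level_dbu - DBU_AT_FULL_SCALE

def dbfs_to_dbu(level_dbfs):
    return level_dbfs + DBU_AT_FULL_SCALE

print(dbu_to_dbfs(4.0))     # -12.0 dBFS: the reference peak inside the box
print(dbfs_to_dbu(-12.0))   # +4.0 dBu: what a -12 dBFS send delivers to the outboard gear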
DAW: Reason 12

SAMPLERS: Akai MPC 2000, E-mu SP1200, E-Mu e5000Ultra, Ensoniq EPS 16+, Akai S950, Maschine

SYNTHS: Mostly classic Polysynths and more modern Monosynths. All are mostly food for my samplers!

www.soundcloud.com/jimmyklane

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 May 2018

Nielsen wrote:
selig wrote:
14 May 2018
Not trying to be a smart ass with this response, but when you make an adjustment that causes a track to not be loud enough in the mix, turn it up (with the fader, which is where you “balance” your mix IMO)
Then why compensate with the gain plugin to begin with? Just to evaluate the volume change at the static level? Otherwise, aren't we basically adding a redundant step by doing the following?

- Let's say 4 dB are added in the high mid EQ band.
- Gain plugin now measures -10dBFS peaks instead of -12 dBFS (static level)
- Gain plugin trim is used to return peak level to -12 dBFS
- Processed sound now sits poorly in the mix due to lowering the entire frequency spectrum by 2 dB
- Turn up the channel volume fader to compensate again

Aren't steps three and five basically two unnecessary steps, since the result ends up being roughly the same as if both were skipped? Or am I way off here?
You’re not way off, but it shows that adding EQ doesn’t always give an overall improvement once you remove the added gain. No problem adding 2 dB gain IMO, I don’t get THAT anal about levels especially towards the end of a mix, so I’m definitely not saying you need to compensate for EVERY single decibel!

BTW: You are not lowering the entire spectrum by 2 dB because you added 4 dB EQ which resulted in a 2 dB hotter level overall. What if you instead cut the lower frequencies by 4 dB (instead of boosting the upper frequencies by 4 dB), resulting in the same response curve in the end? You would likely need to boost the level to get back to where you were before, but the results would likely be the same. But maybe you would “feel” better about it because you were adding gain instead of subtracting it?

In the end, I remind myself it’s easy to be fooled by adding overall gain, whether by EQ or by compression or whatever. In your case, I’m guessing your track needed more level to sit correctly in the mix, and IMO you can add it with EQ or with a fader (again, not knowing the specific example you’re citing may be causing me to make erroneous conclusions, so take this with a grain of salt!).
Nielsen wrote:
selig wrote:
14 May 2018
Sometimes if you make an EQ change that adds gain, we tend to feel it sounds better - because it’s louder. But once you return it to its original level you’re getting a more accurate idea of whether the EQ change is actually helping or just making the track louder.
True, but I fail to see the alternative when the gain change is isolated to a single frequency band. Overall volume and gain adjustments will change the level of the entire frequency spectrum, but boosting or cutting one frequency band can improve its presence relative to the static level.
Boosting one band is simply changing the level of something less than the entire frequency spectrum. It’s still gain, just affecting fewer frequencies!

To be clear, I boost bands to improve clarity all the time because at a certain point in the mix process it’s more efficient to focus the energy at the most effective point. I’m sure what you’re doing is absolutely fine - only change your workflow if you’re not satisfied with results (not because someone such as myself does it differently). Many paths to the top of the mountain, yada yada, and Yoda too!
Nielsen wrote:
selig wrote:
14 May 2018
In general I’d say yes. In my workflow, filtering comes well before EQ since it’s more “clean up” work which I tend to do earlier in the process. I consider clean up work to be more generic, more along the lines of things you would do no matter what other elements were present in the mix. EQ, in contrast, in my workflow is done to help parts fit better with each other, which is more contextual and cannot always be determined until all mix elements are prepped and sitting at good basic levels. In other words, I use faders to get the best possible mix BEFORE I resort to EQ (in most cases).
Agreed, except I usually don't touch faders all that much until I'm happy with my filtering, stereo image, compression, gates and EQ. I take this approach to preserve fader precision around unity gain until the very end. Perhaps this is why the gain compensation technique is causing me a bit of trouble with the balance?
OK, think about this “fader precision” bit above. Exactly how much fader precision do you need? In other words, what is the smallest amount of gain adjustment you can hear or you typically use when mixing?

Because when I ask that question, most folks say somewhere between 0.1 dB (few can hear this small an adjustment) and 1.0 dB. Speaking for myself, I don’t often make finer adjustments than 0.5 dB or so, while on occasion I may feel I need 0.25 dB resolution.

Now look at the faders in Reason, and tell me how far down you can move them and STILL have the desired amount of “resolution”. Let’s say you need 0.25 dB maximum fader resolution, which is really quite “fine” by most standards. How far can you move the fader before you cannot achieve this level of resolution? How about 0.1 dB resolution?

Turns out you can get 0.1 dB resolution down to -29 dB on Reason’s faders. Doesn’t sound like much? It’s over 3/4 of the way to the bottom. In fact, you get 0.25 dB resolution all the way down to below -55 dB, which is where the fader is around 90% of the way to the bottom. To get 0.5 dB resolution (which is what I typically use) you can go all the way to -70 dB, where the fader almost touches the bottom and covers the “screw” in the panel graphic. Something to think about when you’re worried about not having enough fader “resolution”…
;)
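
For illustration, here is a minimal Python sketch of the same idea. The taper curve and step count are made up for the example (a generic audio taper, not Reason's actual fader law), but it shows how the dB change per fader step gets coarser toward the bottom of the travel:

```python
import numpy as np

# Hypothetical fader: 1000 discrete steps, and normalized position p maps to
# gain via dB = 40 * log10(p). This is NOT Reason's actual taper - it's only
# here to show that "resolution" (dB per step) varies with fader position.

STEPS = 1000

def fader_db(step):
    p = step / STEPS                      # normalized position, 0..1
    return 40 * np.log10(p)

for step in (1000, 500, 180, 100, 40, 20):
    here, below = fader_db(step), fader_db(step - 1)
    print(f"pos {step / STEPS:4.0%}  level {here:7.1f} dB  "
          f"resolution {here - below:.2f} dB per step")
```

With this made-up taper you get roughly 0.02 dB steps near the top and closer to 1 dB steps near the bottom; the exact figures for Reason's faders are the ones quoted above.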
Nielsen wrote:
selig wrote:
14 May 2018
I wasn’t meaning to say there’s anything wrong with that amount of EQ - only suggesting 4 dB of EQ rarely means 4 dB of additional gain overall. Plus, I was commenting that adding any amount of a wide band can give a similar effect to increasing the overall level, that is to say it can affect the mix balances.
Just to clear any doubt - I didn't intend to suggest that a 4 dB change in one frequency band equals a 4 dB change across the entire frequency spectrum. I always look to the master fader or gain plugin to confirm overall volume changes.
selig wrote:
14 May 2018
Along those lines, when I’m done balancing faders and still feel one track is “lost” in the mix, I often turn to EQ to find the one band that when boosted, has the effect of bringing that track forward (but won’t add as much gain as increasing the fader would add). Make sense?
It makes sense. Am I correct in understanding that you're almost using equalization as a last resort? I get the impression that I'm boosting or cutting frequencies at a point where you would typically balance with faders. I'm not suggesting one method is better than the other, merely that I probably shouldn't compensate for the gain change post-equalization unless I'm prepared to reset the balance with the channel faders.
I’m not using EQ as a last resort, I’m just exhausting every other possibility before using it! Then I use it freely and boost or cut as much as needed without worrying if it “looks” like it may be too much. In this way I can be more confident the EQ I’m using is the best solution to whatever problem I’m hearing.



Sent from some crappy device using Tapatalk
Selig Audio, LLC

jimmyklane
Posts: 740
Joined: 16 Apr 2018

14 May 2018

True, but I fail to see the alternative when the gain change is isolated to a single frequency band. Overall volume and gain adjustments will change the level of the entire frequency spectrum, but boosting or cutting one frequency band can improve its presence relative to the static level.


THIS is where EQ ***before*** compression really comes in handy, because you can EQ quite heavily and then even out the volume spikes while still retaining the “flavor” that you needed to get by EQing in the first place. Sometimes sending that track to another bus and EQing again after the compressor can work really well.

EQing in parallel also works very well in situations where you want a track to have “air” but not get harsh....parallel channel, HP filter a bit and boost 16k by 6dB and mix it in under the main track.
DAW: Reason 12

SAMPLERS: Akai MPC 2000, E-mu SP1200, E-Mu e5000Ultra, Ensoniq EPS 16+, Akai S950, Maschine

SYNTHS: Mostly classic Polysynths and more modern Monosynths. All are mostly food for my samplers!

www.soundcloud.com/jimmyklane

jimmyklane
Posts: 740
Joined: 16 Apr 2018

14 May 2018

And Selig, I agree with you about EQ... In a song with 48 tracks, 6 might have EQ, but 30 will have HP/LP engaged.

I like to get my phase smearing in the analog domain whenever possible.
DAW: Reason 12

SAMPLERS: Akai MPC 2000, E-mu SP1200, E-Mu e5000Ultra, Ensoniq EPS 16+, Akai S950, Maschine

SYNTHS: Mostly classic Polysynths and more modern Monosynths. All are mostly food for my samplers!

www.soundcloud.com/jimmyklane

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 May 2018

jimmyklane wrote:
14 May 2018
selig wrote:
10 May 2018


There is no right and wrong, but there are general rules! [emoji6]

I totally agree gain staging is a holdover from analog gear, and I typically make that point in posts like this (but forgot to do so this time). Not one post on “gain staging” in a DAW actually mentions any gain stage techniques, mainly because they are not necessary!

I prefer to use terms like “a consistent peak reference level” for all tracks instead of “gain staging” when speaking about working in all-digital systems. There is still a need to keep levels around the nominal levels expected by non-linear devices such as dynamics etc., so it’s still very relevant to be well aware of exactly what level your audio is at.

And as long as it’s possible to clip your outputs, that’s one more reason to be well aware of all levels at all times IMO.


Sent from some crappy device using Tapatalk
There is still nothing wrong with setting the input gains on the “SSL” to drive the channel to a -12 peak point; you simply have to think in dBFS instead of dBu... +4 dBu means nothing inside Reason, but if you’re going to come in and out of the program to hardware, then a proper reference level is actually vital. I’m set to +4 dBu = -12 dBFS, and use the input gains to get each channel to a level so that when I use the Aux sends to hardware outputs, the default -12 setting actually sends +4 dBu to my reverbs and processors, etc.
Of course not, but my point is that if you practice consistent levels from the source, there's no need to adjust at the input because the level will already be exactly where you want it.
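
Getting the level where you want it at the source is just a peak measurement plus a trim. Here's a minimal sketch of that calculation - the -12 dBFS target is simply the figure discussed in this thread, and "audio" stands in for whatever float samples your track contains:

```python
import numpy as np

# Sketch: how much trim (in dB) brings a clip's peak to a chosen reference?
TARGET_PEAK_DBFS = -12.0   # the peak reference level discussed in this thread

def trim_to_reference(audio, target_dbfs=TARGET_PEAK_DBFS):
    peak = np.max(np.abs(audio))       # sample peak, relative to full scale = 1.0
    peak_dbfs = 20 * np.log10(peak)
    return target_dbfs - peak_dbfs     # dB of gain to apply at the source

# Example: a 440 Hz test tone with a 0.5 peak (about -6 dBFS)
audio = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
print(trim_to_reference(audio))        # ~ -6 dB of trim brings it to -12 dBFS
```

Apply that trim at the instrument or gain device and, as noted above, there's nothing left to adjust at the mixer input.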

Not sure where dBu comes from in this context, as we're talking peak levels below 0 dBFS in this thread, right?

Just curious - in your situation, wouldn't it make more sense to keep all levels at the desired point, rather than only concern yourself with levels once they enter the SSL mixer? Meaning, if all levels are kept at the same/desired point, you won't have to make adjustments in the mixer?
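
For anyone following the dBu tangent: the bookkeeping is just an offset once you know your converter calibration. A quick sketch, using the +4 dBu = -12 dBFS alignment mentioned above (your own interface may be calibrated differently, so treat the constants as an example):

```python
# dBu <-> dBFS conversion, given a known converter calibration point.
DBU_AT_REFERENCE = 4.0      # analog level (dBu) that lines up with...
DBFS_AT_REFERENCE = -12.0   # ...this digital level (dBFS), per the post above

def dbfs_to_dbu(dbfs):
    """Analog level leaving the interface for a given digital level."""
    return dbfs - DBFS_AT_REFERENCE + DBU_AT_REFERENCE

def dbu_to_dbfs(dbu):
    """Digital level produced by a given analog level at the converters."""
    return dbu - DBU_AT_REFERENCE + DBFS_AT_REFERENCE

print(dbfs_to_dbu(-12.0))   # -> 4.0  (a -12 dBFS send leaves at +4 dBu)
print(dbfs_to_dbu(0.0))     # -> 16.0 (digital full scale hits +16 dBu here)
```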
Selig Audio, LLC

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 May 2018

jimmyklane wrote:
14 May 2018
True, but I fail to see the alternative when the gain change is isolated to a single frequency band. Overall volume and gain adjustments will change the level of the entire frequency spectrum, but boosting or cutting one frequency band can improve its presence relative to the static level.


THIS is where EQ ***before*** compression really comes in handy, because you can EQ quite heavily and then even out the volume spikes while still retaining the “flavor” that you needed to get by EQing in the first place. Sometimes sending that track to another bus and EQing again after the compressor can work really well.

EQing in parallel also works very well in situations where you want a track to have “air” but not get harsh....parallel channel, HP filter a bit and boost 16k by 6dB and mix it in under the main track.
EQ into compression is fine, but note that it likely will change the amount of compression, which may or may not be desirable. Also, not all compressor settings will "even out the volume spikes" - this statement assumes a fast attack and high ratio, more akin to limiting than compression, correct?

How is parallel EQ different from just using less EQ in the first place? Yes, multiple bands of parallel EQ (internally) will interact differently, but this is not the same thing at all as putting a serial EQ on a parallel channel…

Also, boosting at 16 kHz can have extremely different results depending on the type of EQ used. Some EQs will hardly do anything set to 16 kHz, while others may affect the spectrum many octaves below 16 kHz. This comes up again and again when folks set two EQs to the same frequency and notice they don't sound the same, and assume it's because of some different "magic" used by one or the other. In most cases it's because the resultant curves look TOTALLY different when set to the same parameters.
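
To see how differently "the same" 16 kHz boost can behave, here's a small sketch using the common RBJ cookbook peaking filter - just one possible EQ design, so it won't match any particular Reason device, but it shows how the bandwidth setting changes what happens octaves below 16 kHz:

```python
import numpy as np
from scipy.signal import freqz

FS = 48000.0  # sample rate

def peaking_biquad(f0, gain_db, q):
    """RBJ 'Audio EQ Cookbook' peaking filter coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / FS
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# Two EQs, both "set" to +6 dB at 16 kHz, but with different bandwidths (Q)
for q in (0.4, 2.0):
    b, a = peaking_biquad(16000, 6.0, q)
    for f in (2000, 4000, 8000, 16000):
        w, h = freqz(b, a, worN=[2 * np.pi * f / FS])
        print(f"Q = {q}   {f:>5} Hz: {20 * np.log10(abs(h[0])):+5.2f} dB")
```

The wide setting is still boosting noticeably an octave or two below 16 kHz, while the narrow one barely moves there - same "16 kHz, +6 dB" label, very different curve.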
Selig Audio, LLC

User avatar
selig
RE Developer
Posts: 11681
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 May 2018

Parallel EQ (SSL EQ) comparison:

+ 6 dB @ 16 kHz on a parallel channel exactly equals +3.5 dB @ 16 kHz on a single channel.

So if you want "less EQ", why not just use less EQ?

NOTE: when using parallel "anything" you're adding 6 dB if both channels are equal, so you need to compensate accordingly so as not to be fooled by the additional gain.

OK, so now what if we lower the parallel channel by, say, 6 dB - what is that going to give us? Turns out it's the same as setting the single channel EQ to a boost of around +2 dB (SSL won't give me exact enough values to match it precisely, but it's awful close!).

In other words, I'm not seeing any case where an EQ on a parallel channel does anything other than give you less boost/cut while adding more overall gain. Now if you're ALSO adding compression, saturation, distortion or similar, you'll get results you can't necessarily get otherwise, but it's down to the non-linear processing rather than the EQ (and the results may be very similar to what you'd get using a single channel in many cases, depending on settings of course).
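
For anyone who wants to check the arithmetic, here's a quick back-of-envelope version in Python. It assumes the dry and EQ'd copies sum perfectly in phase and at equal gain (a real EQ adds some phase shift, so a plugin won't match these figures exactly):

```python
import math

def db_to_lin(db):
    return 10 ** (db / 20)

def lin_to_db(lin):
    return 20 * math.log10(lin)

dry = 1.0                      # unprocessed channel at unity gain
wet_boosted = db_to_lin(6.0)   # parallel copy at 16 kHz (+6 dB boost)
wet_flat = 1.0                 # parallel copy away from 16 kHz (no boost)

at_16k    = lin_to_db(dry + wet_boosted)   # ~ +9.5 dB
elsewhere = lin_to_db(dry + wet_flat)      # ~ +6.0 dB (the extra overall gain)
print(at_16k - elsewhere)                  # ~ +3.5 dB net boost at 16 kHz

# Same sum with the parallel channel pulled down by 6 dB:
trim = db_to_lin(-6.0)
at_16k    = lin_to_db(dry + trim * wet_boosted)
elsewhere = lin_to_db(dry + trim * wet_flat)
print(at_16k - elsewhere)                  # ~ +2.5 dB net in this idealized model
                                           # (close to the ~2 dB measured above)
```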
Selig Audio, LLC
