Do you work with high dynamics?

Have an urge to learn, or a calling to teach? Want to share some useful Youtube videos? Do it here!
RobC
Posts: 1833
Joined: 10 Mar 2018

07 Sep 2022

So far, from the looks of it, I was STILL affected by the loudness war, even though I already avoid needless dynamics processing just to get "louder".

See, I mistakenly set my system volume levels quite low to begin with, since mastered songs - and even samples - are crazy loud. Same with Windows system sounds.

As a result, I barely turned my volume up when designing sounds, leaving me with a limited dynamic range to begin with. I was left wondering why a regular ADSR volume envelope sounds so weak; or, if I set a punchy envelope and start pushing the volume on a synth to compensate, why I hit the reds; and so on.

Well, now I have the answer: I feared turning the volume up. (Although it depends on how noisy and distorted your system gets.)

I also feared that a highly dynamic sound would lose its character when compressed later on. Well, it does - unless we instead shape the sound with saturation, for example, turning the lost/clipped dynamics into harmonics.
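
That trade of clipped dynamics for harmonics can be sketched with a tanh soft clipper - a minimal illustration, not any particular device's curve. Symmetric waveshaping tames the peaks and deposits the lost level into odd harmonics:

```python
import numpy as np

fs = 1000                                  # sample rate (Hz)
f0 = 10                                    # fundamental lands exactly on FFT bin 10
t = np.arange(fs) / fs                     # one second = 10 full cycles
x = 2.0 * np.sin(2 * np.pi * f0 * t)       # "too hot" sine, peaks at 2.0

y = np.tanh(x)                             # soft clip: peaks tamed to tanh(2) ~ 0.96

X, Y = np.fft.rfft(x), np.fft.rfft(y)
fund = np.abs(Y[f0])                       # fundamental (10 Hz) after clipping
third = np.abs(Y[3 * f0])                  # 3rd harmonic (30 Hz) after clipping

print(f"peak before: {np.max(np.abs(x)):.2f}, after: {np.max(np.abs(y)):.2f}")
print(f"3rd harmonic vs fundamental: {third / fund:.3f}")
```

The clean sine has essentially nothing at 30 Hz; the clipped one does - the "lost" dynamics reappear as tone.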

Anyway, a sound can be shaped much better at a higher dynamic range / higher amplified volume. It makes sound design so much easier.

I wonder if anyone else has fallen into this trap of being afraid of super loud samples, and thus working at a minimal dynamic range to begin with?

If you do work at highly amplified system volumes*, how do you keep yourself safe from a sudden loud sample, song, or even system sound accidentally blasting your ears?

*By that I mean, for example: once my DAC and headphone amp get here, both will be at 100% volume, noise- and distortion-free, which will easily cover the 116 dB dynamic range; meanwhile, in Reason, if we load up a synth, its volume has to be way down at first - but we have a ton of headroom to play with.

User avatar
mcatalao
Competition Winner
Posts: 1826
Joined: 17 Jan 2015

08 Sep 2022

If samples are too hot, then that's a problem with the sample, not with the way you're working, imho.
If I recall correctly, when I did some sound design I was asked to keep peak levels at -6 dBFS (and usually I mix at even lower peaks). From there, dynamics depend a lot on the ADSR of the different sound components.

As for Windows samples, I'd say they are already mastered, but imho the only sound you need your computer to make is from your DAW and other music and video apps. Kill your Windows sounds, and just take care of levels in the different applications. Be aware, though, that when you compare your masters to other apps you should put everything at unity: pull Reason's master fader to 0, put every other app at maximum volume, and pull your audio card's level down.

I prefer to mix at low levels and use the audio card's volume knob. It's there for a reason, even though we're working at full scale in the digital domain.

Cheers,
MC

User avatar
Loque
Moderator
Posts: 11175
Joined: 28 Dec 2015

08 Sep 2022

I work with around -10 to -12 dB of headroom. My speakers are set to full in the OS and quite loud at the speaker output (sound card). And finally, I have all other sound sources off - that means, for example, no system sounds; they are just freaking annoying.

My patches are mostly all around -10 to -12 dB too, so I can integrate them quite easily.

But I certainly don't fully understand all the relationships in the final mix. As an example, a few days ago I added an HPF to cut out the low-end sh!t and make room in the final mix, but I ended up with a significantly louder result. The sound wasn't audibly louder, but the peaks were much higher. I suspect the phase shift created peaks somewhere I didn't hear, but even with other phase models it was way too loud (measurable but not audible).

And all that stuff about summing, especially low frequencies filling up space, is still over my head... I wish I were smarter, so I could have a fuller, louder, cleaner mix :-)

Sometimes I forget to focus on music, producing, and arranging because of all that mixing sh!t... Maybe I should just add a limiter at the end and blow in everything I want :-D
Reason12, Win10

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

08 Sep 2022

When I did patches for 6.5 and for Friction, we were asked to keep peaks around -12 dBFS, which is the ‘Reason Standard’ as far as I can tell. Another example: the audio meter when recording audio tracks shows green LEDs below -12 dBFS and yellow above. Yet another: VU Offset defaults to -12 dBFS.

Turns out if all channels have a peak around -12 dBFS, your mix levels are going to be safely below 0 dBFS without further adjustments. I’ve long adopted this way of working and am known to go on and on about it if given the chance… ;)
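
The arithmetic behind that safety margin, for anyone who wants to sanity-check it: summing n fully correlated channels raises the peak by 20·log10(n) dB, so even the worst case of four identical channels peaking at -12 dBFS only just reaches full scale (+0.04 dB). A tiny numpy check, with illustrative numbers only:

```python
import numpy as np

def dbfs(peak):
    """Convert a peak amplitude (1.0 = full scale) to dBFS."""
    return 20 * np.log10(peak)

channel_peak = 10 ** (-12 / 20)            # -12 dBFS ~= 0.251 of full scale
for n in (1, 2, 3, 4):
    worst_case = n * channel_peak          # identical, fully in-phase channels
    print(f"{n} coherent channels: {dbfs(worst_case):+5.1f} dBFS")
```

Real channels are far less correlated than this worst case, so actual mixes land well below it.
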
Selig Audio, LLC

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

08 Sep 2022

Loque wrote:
08 Sep 2022
…But I certainly don't fully understand all the relationships in the final mix. As an example, a few days ago I added an HPF to cut out the low-end sh!t and make room in the final mix, but I ended up with a significantly louder result. The sound wasn't audibly louder, but the peaks were much higher. I suspect the phase shift created peaks somewhere I didn't hear, but even with other phase models it was way too loud (measurable but not audible)…
https://www.soundonsound.com/sound-advi ... eem-louder
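
A quick way to reproduce the effect from that article: shift the phase of one harmonic - which is what a filter's phase response does around its corner - and the peak level changes even though the magnitude spectrum, and hence the tone, stays identical. A small numpy sketch with an illustrative two-partial signal:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 200_000, endpoint=False)

# Two partials with identical amplitudes; only the 2nd harmonic's phase differs.
x1 = np.sin(t) + 0.5 * np.sin(2 * t)              # original
x2 = np.sin(t) + 0.5 * np.sin(2 * t + np.pi / 2)  # 90-degree phase rotation

peak1, peak2 = np.max(np.abs(x1)), np.max(np.abs(x2))
rms1, rms2 = np.sqrt(np.mean(x1 ** 2)), np.sqrt(np.mean(x2 ** 2))

print(f"peaks: {peak1:.3f} vs {peak2:.3f}")   # ~1.299 vs 1.500
print(f"RMS:   {rms1:.3f} vs {rms2:.3f}")     # identical
```

Same energy, same spectrum, about 1.25 dB more peak - "measurable but not audible".
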
Selig Audio, LLC

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

08 Sep 2022

Rob C:
In some ways you can bring out the character of a sound with compression, such as revealing low-level detail!

But I digress… there are two ways to increase the dynamic range of your audio system. One is indeed to turn up the volume. Simple as it is, this is also a ‘cure’ for mixes that keep getting louder because you keep turning everything up.

The other way is to work on the other end of the problem: the noise floor of the listening room. There is only so much “turn it up” you can do before ear fatigue sets in, but by listening in a space with a low noise floor you can increase the dynamic range significantly. In fact, it is only in the quieter rooms that engineers say the difference between 16- and 24-bit files can potentially be heard (and even then it can be subtle). Most folks don’t have such a quiet listening environment, so it’s not essential as a delivery format.

Real-world example: in my home studio I can measure around 30 dBA at the quietest. When the A/C and computer fan kick in, the ambient noise goes up to 40 dBA or higher. These measurements are taken with iPhone apps, which can vary by a few dB, so use these numbers for comparison purposes only. C-weighted readings come out higher because low frequencies are taken into account.
My studio is 1/4 mile from the closest road in a rural area, but one of the roads is a scenic road, so we get motorcycle “gangs” (mostly guys my age!) that can fill the entire valley with quite a bit of sound! So I normally don’t need a ton of isolation, but I did install spray foam insulation in the walls/ceilings, so the windows/doors/skylights are certainly the weak link!
Selig Audio, LLC

User avatar
Loque
Moderator
Posts: 11175
Joined: 28 Dec 2015

08 Sep 2022

selig wrote:
08 Sep 2022
Loque wrote:
08 Sep 2022
…But I certainly don't fully understand all the relationships in the final mix. As an example, a few days ago I added an HPF to cut out the low-end sh!t and make room in the final mix, but I ended up with a significantly louder result. The sound wasn't audibly louder, but the peaks were much higher. I suspect the phase shift created peaks somewhere I didn't hear, but even with other phase models it was way too loud (measurable but not audible)…
https://www.soundonsound.com/sound-advi ... eem-louder
Hmm... thanks for the link. The explanation is what I speculated. But I still have no idea what the best thing to do here is... Leave the low end? Reduce the peak? Compress? Limit? Ignore it? Try to find the phase-response point and lower the EQ there? ...?
Reason12, Win10

PhillipOrdonez
Posts: 3732
Joined: 20 Oct 2017
Location: Norway
Contact:

08 Sep 2022

You can try using a low shelf instead of an HPF. Or leave the filter in but soft-clip after it instead. Limiting/compressing/distorting is another option... I'm sure Selig can come up with even more; these are the ones I would try first.

User avatar
Loque
Moderator
Posts: 11175
Joined: 28 Dec 2015

08 Sep 2022

PhillipOrdonez wrote:
08 Sep 2022
... I'm sure Selig can come up with even more options....
I hope :-D

I was quite shocked that the result was louder even though I didn't hear or notice any big changes. It also made me think about using the underlying "problem" in reverse, to lower the gain without any audible changes... :? ;)
Reason12, Win10

PhillipOrdonez
Posts: 3732
Joined: 20 Oct 2017
Location: Norway
Contact:

08 Sep 2022

I think if filter designers could fix that issue without any audible changes, they would have already implemented it in every EQ and filter available, Loque 😅

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

08 Sep 2022

Loque wrote:
08 Sep 2022
PhillipOrdonez wrote:
08 Sep 2022
... I'm sure Selig can come up with even more options....
I hope :-D

I was quite shocked that the result was louder even though I didn't hear or notice any big changes. It also made me think about using the underlying "problem" in reverse, to lower the gain without any audible changes... :? ;)
Besides Phillip's excellent suggestions, try inverting the polarity on the channel with the HP filter. You can also use an all-pass filter to address the phase shift caused by the filter; Sweeper can be used for this, BTW.
You can also try reducing the Q if possible, or using a less steep filter - 6 dB/octave (1-pole) filters are VERY gentle with regard to phase and likely won’t add any level change.
And finally, you can just leave the channel ‘as is’ and enjoy a lower peak level!
Of those, I would personally start with a gentle low shelf, then a 6 dB/oct filter. If I were using my own ColoringEQ, I would also try filter slopes below 6 dB/oct, which have even less phase shift but won’t reduce the low end as much as the other solutions, and the low shelf with the lowest Q (which is a 6 dB/oct shelf).
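
To see why the all-pass approach works: a first-order all-pass passes every frequency at unity gain while rotating phase, so it can re-shape a filter's phase response without touching the tone. A quick scipy check of that unity-gain property - a generic first-order section, not necessarily how Sweeper implements it:

```python
import numpy as np
from scipy.signal import freqz

# First-order all-pass: H(z) = (a + z^-1) / (1 + a * z^-1)
a = 0.5
num = [a, 1.0]
den = [1.0, a]
w, h = freqz(num, den, worN=1024)

print("gain is unity everywhere:", np.allclose(np.abs(h), 1.0))
print("phase shift near Nyquist (rad):", float(np.unwrap(np.angle(h))[-1]))
```

The magnitude never moves; only the phase does - which is exactly the knob you need to counteract what the HPF did.
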
Selig Audio, LLC

RobC
Posts: 1833
Joined: 10 Mar 2018

09 Sep 2022

mcatalao wrote:
08 Sep 2022
If samples are too hot, then that's a problem with the sample, not with the way you're working, imho.
If I recall correctly, when I did some sound design I was asked to keep peak levels at -6 dBFS (and usually I mix at even lower peaks). From there, dynamics depend a lot on the ADSR of the different sound components.

As for Windows samples, I'd say they are already mastered, but imho the only sound you need your computer to make is from your DAW and other music and video apps. Kill your Windows sounds, and just take care of levels in the different applications. Be aware, though, that when you compare your masters to other apps you should put everything at unity: pull Reason's master fader to 0, put every other app at maximum volume, and pull your audio card's level down.

I prefer to mix at low levels and use the audio card's volume knob. It's there for a reason, even though we're working at full scale in the digital domain.

Cheers,
MC
It's rather the over-compressed samples that I have a problem with - even though I normally prefer to just create my own, too.
Yeah, I think I'll silence Windows. And maybe I should find a good app that pulls the volume of loud music back, because it's really annoying that I hardly listen to music anymore because of what the loudness war has done.
Good thing you mentioned ADSR like that, since I've wondered whether using different envelopes for different frequency bands might be more flexible.

It could be that my wording wasn't the best: what I meant by high dynamics was that a mix, when normalized, would actually sound pretty quiet compared to modern masters.
Loque wrote:
08 Sep 2022
I work with around -10 to -12 dB of headroom. My speakers are set to full in the OS and quite loud at the speaker output (sound card). And finally, I have all other sound sources off - that means, for example, no system sounds; they are just freaking annoying.

My patches are mostly all around -10 to -12 dB too, so I can integrate them quite easily.

But I certainly don't fully understand all the relationships in the final mix. As an example, a few days ago I added an HPF to cut out the low-end sh!t and make room in the final mix, but I ended up with a significantly louder result. The sound wasn't audibly louder, but the peaks were much higher. I suspect the phase shift created peaks somewhere I didn't hear, but even with other phase models it was way too loud (measurable but not audible).

And all that stuff about summing, especially low frequencies filling up space, is still over my head... I wish I were smarter, so I could have a fuller, louder, cleaner mix :-)

Sometimes I forget to focus on music, producing, and arranging because of all that mixing sh!t... Maybe I should just add a limiter at the end and blow in everything I want :-D
Yeah, it seems I set my system up slightly wrong in the past, then. Oh well, at least this will open up some extra possibilities.

I've seen Selig's link - it's quite surprising to see how EQing works 'under the hood', and how frequencies can apparently cancel each other out. I thought it would just be the filters' ringing.

I saw a video about loudness on the Audio Science Review YouTube channel. It's quite a wake-up call: since we hear bass frequencies really poorly, that's the obvious reason we mix them so much louder than other frequencies. When it comes to our ears' sensitivity, the difference is really huge.
If the mix is well balanced, it will sound louder. In dBFS terms, getting louder comes at the cost of reduced dynamic range. I still think the best way is to make the mix and master sound as good as possible and not worry much about digital loudness.

Err, yeah, I haven't made a single song in Reason 10, for example x D - but that's because the engineering part, plugins, and sound design got me much more interested. (I had to skip 11, but now I'm on 12 - time to make some music sometime.)
Squashing a mix to death reveals cool dynamic changes. I want to experiment with that, then later copy the cool-sounding ones and apply them to my mix with fader automation or similar.
selig wrote:
08 Sep 2022
When I did patches for 6.5 and for Friction, we were asked to keep peaks around -12 dBFS, which is the ‘Reason Standard’ as far as I can tell. Another example: the audio meter when recording audio tracks shows green LEDs below -12 dBFS and yellow above. Yet another: VU Offset defaults to -12 dBFS.

Turns out if all channels have a peak around -12 dBFS, your mix levels are going to be safely below 0 dBFS without further adjustments. I’ve long adopted this way of working and am known to go on and on about it if given the chance… ;)
I kind of understand that, but I don't really get why the volumes of patches and samples aren't leveled to a reference sound, so that they are more or less equally loud.

That makes sense, since it would take four perfectly identical digital sound waves to reach 0 dB - which is not likely to happen in a mix, so it's a safe limit.
selig wrote:
08 Sep 2022
Rob C:
In some ways you can bring out the character of a sound with compression, such as revealing low-level detail!

But I digress… there are two ways to increase the dynamic range of your audio system. One is indeed to turn up the volume. Simple as it is, this is also a ‘cure’ for mixes that keep getting louder because you keep turning everything up.

The other way is to work on the other end of the problem: the noise floor of the listening room. There is only so much “turn it up” you can do before ear fatigue sets in, but by listening in a space with a low noise floor you can increase the dynamic range significantly. In fact, it is only in the quieter rooms that engineers say the difference between 16- and 24-bit files can potentially be heard (and even then it can be subtle). Most folks don’t have such a quiet listening environment, so it’s not essential as a delivery format.

Real-world example: in my home studio I can measure around 30 dBA at the quietest. When the A/C and computer fan kick in, the ambient noise goes up to 40 dBA or higher. These measurements are taken with iPhone apps, which can vary by a few dB, so use these numbers for comparison purposes only. C-weighted readings come out higher because low frequencies are taken into account.
My studio is 1/4 mile from the closest road in a rural area, but one of the roads is a scenic road, so we get motorcycle “gangs” (mostly guys my age!) that can fill the entire valley with quite a bit of sound! So I normally don’t need a ton of isolation, but I did install spray foam insulation in the walls/ceilings, so the windows/doors/skylights are certainly the weak link!
True, but it's not necessarily possible to bring out much if the given sound is too clean - think of a kick synthesized from a sine wave, a few envelopes, and nothing else. And the other thing is: do we really need compressors if a volume ADSR envelope basically acts like (A) a downward compressor/limiter, (D) an upward expander, (S) an upward compressor, and (R) a downward expander/gate?

No such thing as digressing in my case. x D

Don't forget that I'm mostly using IEMs now. Things are super isolated with them, at least - although, I know, blood-circulation noises are still there, at least with silicone tips. I'm going to give Comply Foam a shot soon.

My aim is to maximize dynamic range (during work, at least), so I'm considering both ends.

I'm in a similar place to the one you describe, so not only is the area mostly very quiet, but so are the rooms - though I haven't measured them. And since you may remember my issues with where to record vocals : ) they aren't the most ideal for speakers either - a good thing I prefer professional IEMs anyway. However, if speakers unexpectedly get super loud, we can cover our ears; with headphones, we can pull them off; with professional IEMs, err, that takes some practice, and it can take a bit too long to get them out either way. So I definitely need to figure out a "panic" solution.

Yikes! I'm kind of afraid of using spray foam. The last time my brother bought some, I read that at least the fresh stuff is carcinogenic if you don't use proper protective equipment ~ which I didn't have, hence I skipped using it.
For me, liquid silicone around the edges of the window glass and foam sealing strips in the openings (I'm horrible at explaining these things) made a huge difference. Although modern windows already have these.

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

09 Sep 2022

"I kind of understand that, but I don't really get why the volume of patches and samples aren't leveled to a reference sound, so that they are more or less equally loud."

Not sure what sound that could even be - remember, you're talking about everything from leads to basses, atmospheric to realistic; sounds that may play a single note or a huge chord, that may be played way up high or way down low, that can be very dynamic or very static. There's no one way to predict all the factors involved in what the user will end up hearing.

But think of it from a headroom perspective. You need to leave headroom not only for polyphony but also for the other elements in the mix. So take a patch that is just below clipping and lower it by a certain amount, say 12 dB. That's exactly the Factory Sound Bank recipe: add 12 dB of headroom to every patch. For basses/leads you check it one note at a time; for poly patches you play 4-5 note chords to set the level, and don't sweat it if it jumps around a few dB.
Mind you, not all previous FSB patches were done that way, but the ones I did for 6.5 and Friction were. And many VSTs actually clip when you play a chord (or even a single note in some cases), which just means you MUST lower their level before you can add them to a mix.
I'm also convinced that for some folks, this overall lower level is the proverbial "Reason Sound", since "louder sounds better"…
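
That leveling recipe - find the patch's worst-case peak, then pull everything down to leave ~12 dB of headroom - can be sketched in a few lines. This uses a toy buffer and a hypothetical helper name, not Reason's actual tooling:

```python
import numpy as np

def leave_headroom(audio, headroom_db=12.0):
    """Peak-normalize, then back off by headroom_db (hypothetical helper)."""
    peak = np.max(np.abs(audio))
    target = 10 ** (-headroom_db / 20)     # -12 dB -> ~0.251 of full scale
    return audio * (target / peak)

# A patch auditioned "just below clipping" (toy buffer):
loud_patch = 0.98 * np.sin(np.linspace(0, 40 * np.pi, 4000))
leveled = leave_headroom(loud_patch)
print(f"new peak: {20 * np.log10(np.max(np.abs(leveled))):.1f} dBFS")  # -12.0
```
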
Selig Audio, LLC

RobC
Posts: 1833
Joined: 10 Mar 2018

10 Sep 2022

selig wrote:
09 Sep 2022
"I kind of understand that, but I don't really get why the volume of patches and samples aren't leveled to a reference sound, so that they are more or less equally loud."

Not sure what sound that could even be - remember, you're talking about sounds from leads to basses, atmospheric to realistic, sounds that may play a single note or a huge chord, sounds that may be played way up high or way down low, sounds that can be very dynamic to very static. Not 'one way' to predict all the factors involved in what the user will end up hearing.

But think of it from a head room perspective. You need to leave headroom not only for polyphony but also for elements in the mix. So take a patch that is just below clipping and lower it by a certain amount, say 12 dB. That's exactly the Factory Sound Bank parameters, add 12 dB headroom to every patch. For bass/leads you check it one note at a time, for poly patches you play 4-5 note chords to set the level and don't sweat it if it jumps around a few dB.
Mind you, not all previous FSB patches were done that way, but the ones I did for 6.5 and Friction were done that way. And many VSTs actually clip when you play a chord (or even a single note in some cases), which just means you MUST lower their level before you can add them to a mix.
I'm also convinced that for some folks, this overall lower level is the proverbial "Reason Sound", since "louder sounds better"…
Great - I accidentally refreshed halfway through, and my reply is gone.

I'll just add the essential part without my rambling: maybe some RMS or LUFS limiter would be useful that temporarily evens out the loudness of samples and patches while browsing/listening, so that a loud, distorted sound peaking at -12 dBFS doesn't blow out our eardrums.
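
Such a browse-time leveler might look roughly like this: measure the preview's RMS and only ever trim downwards toward a reference level. Everything here is hypothetical - no such feature exists in Reason, and the names are made up:

```python
import numpy as np

def preview_gain_db(sample, target_rms_db=-20.0):
    """Attenuation (dB, always <= 0) for auditioning a sample safely."""
    rms_db = 20 * np.log10(np.sqrt(np.mean(sample ** 2)))
    return min(0.0, target_rms_db - rms_db)   # only ever turn DOWN

rng = np.random.default_rng(0)
quiet = 0.05 * rng.standard_normal(48_000)    # well-behaved sample
hot = 0.7 * rng.standard_normal(48_000)       # over-compressed screamer

print(f"quiet sample trim: {preview_gain_db(quiet):+.1f} dB")   # left alone
print(f"hot sample trim:   {preview_gain_db(hot):+.1f} dB")     # pulled way down
```

Because the gain is capped at 0 dB, already-quiet material is never boosted into clipping; only hot previews get tamed.
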

Also, wow, do VSTs integrate with Reason that poorly? I'd have thought they utilize 64-bit...

I always thought the "Reason Sound" x D was rather just the familiar samples, instruments, patches, etc. I could recognize when somebody made music with Reason like that.

Similarly, there's a Fruity Loops sound: mostly the resonant "DJ" filters, squashed drums, and static-sounding synthesizers. Too artificial.
They're curious, but rather bad when overused and exaggerated.

User avatar
selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

10 Sep 2022

RobC wrote:
10 Sep 2022

Great - I accidentally refreshed halfway through, and my reply is gone.

I'll just add the essential part without my rambling: maybe some RMS or LUFS limiter would be useful that temporarily evens out the loudness of samples and patches while browsing/listening, so that a loud, distorted sound peaking at -12 dBFS doesn't blow out our eardrums.

Also, wow, do VSTs integrate with Reason that poorly? I'd have thought they utilize 64-bit...

I always thought the "Reason Sound" x D was rather just the familiar samples, instruments, patches, etc. I could recognize when somebody made music with Reason like that.

Similarly, there's a Fruity Loops sound: mostly the resonant "DJ" filters, squashed drums, and static-sounding synthesizers. Too artificial.
They're curious, but rather bad when overused and exaggerated.
As for level control in the browser, just imagine how the level would change once you load the patch, and you get the idea why not to do it.
As for VST integration, I only mentioned the patch levels of some patches, nothing about integration or bit depth - maybe a misunderstanding between us somewhere?
As for the Reason sound: several comments have been made over the years about the same sample sounding so much weaker in Reason, only for someone to point out NNXT’s 9 dB of added headroom that’s not present in other samplers. So when the levels were adjusted, ‘magically’ Reason sounded ‘better’. It is REALLY important to remember that “louder sounds better”, and not everyone realizes how readily we hear a volume difference as a ‘quality’ difference.
Every so often I suggest a ‘trick’ one can play on other engineers, where you have two identical mix files but one of them is lowered by 1 dB. Folks will often ascribe all sorts of qualitative differences to the louder file, such as a wider sound stage, more body, greater presence, etc. - everything BUT a level difference (some folks may need 2 dB to ‘hear’ the difference).
I’ve only done this once, because I don’t like ‘tricking’ folks, but it worked (we were in a very expensive studio too, so it was easier to ‘hear’ the differences!).
Even as one progresses, it is amazing how many times I put on a compressor or EQ and think it has really improved the sound, only to level-match it with the bypassed version and realize that no, it’s not any better at all (just slightly different).
THAT is why you keep levels consistent, do constant A/B checks, and stay aware of all the ways one can be ‘fooled’ with audio! ;)
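
The level-matched A/B check can even be automated: before judging a processed version against the bypassed one, scale it so both have the same RMS, which removes "louder sounds better" from the comparison. A minimal numpy sketch with a hypothetical helper name:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def level_match(processed, reference):
    """Scale processed audio to the reference RMS before A/B-ing (hypothetical)."""
    return processed * (rms(reference) / rms(processed))

dry = np.sin(np.linspace(0, 100, 10_000))
wet = 1.4 * dry + 0.02                     # "improved" version: mostly just louder
matched = level_match(wet, dry)

print(f"wet vs dry RMS: {rms(wet):.3f} vs {rms(dry):.3f}")
print(f"matched RMS:    {rms(matched):.3f}")
```

Once the loudness advantage is gone, whatever difference remains is the actual effect of the processing.
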
Selig Audio, LLC

RobC
Posts: 1833
Joined: 10 Mar 2018

12 Sep 2022

selig wrote:
10 Sep 2022
As for level control in the browser, just imagine how the level would change once you load the patch, and you get the idea why not to do it.
As for VST integration, I only mentioned the patch levels of some patches, nothing about integration or bit depth - maybe a misunderstanding between us somewhere?
As for the Reason sound: several comments have been made over the years about the same sample sounding so much weaker in Reason, only for someone to point out NNXT’s 9 dB of added headroom that’s not present in other samplers. So when the levels were adjusted, ‘magically’ Reason sounded ‘better’. It is REALLY important to remember that “louder sounds better”, and not everyone realizes how readily we hear a volume difference as a ‘quality’ difference.
Every so often I suggest a ‘trick’ one can play on other engineers, where you have two identical mix files but one of them is lowered by 1 dB. Folks will often ascribe all sorts of qualitative differences to the louder file, such as a wider sound stage, more body, greater presence, etc. - everything BUT a level difference (some folks may need 2 dB to ‘hear’ the difference).
I’ve only done this once, because I don’t like ‘tricking’ folks, but it worked (we were in a very expensive studio too, so it was easier to ‘hear’ the differences!).
Even as one progresses, it is amazing how many times I put on a compressor or EQ and think it has really improved the sound, only to level-match it with the bypassed version and realize that no, it’s not any better at all (just slightly different).
THAT is why you keep levels consistent, do constant A/B checks, and stay aware of all the ways one can be ‘fooled’ with audio! ;)
The same leveling could optionally be applied to the loaded patch/sample itself. I think RS could figure something out.

I misunderstood at first what you meant about VSTs, but it's clear now.

It depends... if we listen to a pumping dance song and turn the volume down, the pumping effect appears stronger and almost sounds gated. But I get what you mean, and it explains why they say FL has a master limiter on by default. Making things louder might be a good marketing strategy, but it starts bad trends, such as the loudness war.

Lol, that trick - or rather the reactions - sounds familiar. Although it's more when people (mostly audiophiles) compare IEMs, headphones, and speakers, and then claim to hear all kinds of things.

The compressor effect is a thing. Maybe some gentle automatic gain compensation (while setting things up) could be useful. Sometimes I feel I should push harder and just do parallel processing.

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

12 Sep 2022

You should definitely have (at least) two audio interfaces (AKA "sound cards").
  • a "good" one for music-making and the like
  • another one for system beeps and such
And of course you should have an operating system good enough to let you control where system beeps and such go.

I can't imagine working without such a setup. Or rather I can imagine being pissed off all the time. :puf_smile:

RobC
Posts: 1833
Joined: 10 Mar 2018

13 Sep 2022

integerpoet wrote:
12 Sep 2022
You should definitely have (at least) two audio interfaces (AKA "sound cards").
  • a "good" one for music-making and the like
  • another one for system beeps and such
And of course you should have an operating system good enough to let you control where system beeps and such go.

I can't imagine working without such a setup. Or rather I can imagine being pissed off all the time. :puf_smile:
Luckily, ASIO4ALL takes full control of the sound, so Windows system sounds normally won't be played.
I might be better off with a Mac someday, when Reason can run natively on the M series.

Agreed, I should route the system sounds to a small speaker, maybe from the motherboard's integrated sound card.

Yonatan
Posts: 1556
Joined: 18 Jan 2015

13 Sep 2022

Just wanna say I love these kinds of threads. 👍

RobC
Posts: 1833
Joined: 10 Mar 2018

13 Sep 2022

Yonatan wrote:
13 Sep 2022
Just wanna say I love these kinds of threads. 👍
Glad to hear!

I'm always in search of new possibilities.

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

13 Sep 2022

selig wrote:
10 Sep 2022
Even as one progresses it is amazing how many times I put on a compressor or EQ and think it’s really improved the sound, only to then level match it with the bypassed version and realize no, it’s not any better at all (just slightly different).
I still turn a knob on the wrong channel, often one with no signal at that moment, and think "ah, that's bet… wait a minute!"

Even the most assiduous level-matching is powerless against that degree of stupidity. :puf_smile:

RobC
Posts: 1833
Joined: 10 Mar 2018

14 Sep 2022

integerpoet wrote:
13 Sep 2022
selig wrote:
10 Sep 2022
Even as one progresses it is amazing how many times I put on a compressor or EQ and think it’s really improved the sound, only to then level match it with the bypassed version and realize no, it’s not any better at all (just slightly different).
I still turn a knob on the wrong channel, often one with no signal at that moment, and think "ah, that's bet… wait a minute!"

Even the most assiduous level-matching is powerless against that degree of stupidity. :puf_smile:
If I feel like I need to change something, such as a mix channel level, I restart from zero, or turn it up and then dial back to the perfect spot. Sometimes I double- or even triple-check; if I land on the exact same value, then it's perfect. (Yes, perfection is subjective : P But you know what I mean by perfect, or accurate, or sweet spot, etc.)

I do this to minimize the risk of my ears adapting to the sound - to quickly "reset" my ears, so to speak.
