Loudness Wars - Logic vs Reason

Have an urge to learn, or a calling to teach? Want to share some useful Youtube videos? Do it here!
avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

25 Sep 2023

Oh, and maybe you can push the maximizer (I'm assuming you're using the MClass Maximizer) a little more. If not then ignore this.

The Reason maximizer has two key stages: a limiter (with lookahead), then soft clipping. You can apply a boost between the two stages. The attack speed of the limiter also changes how responsive it is to peaks.

User avatar
selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

25 Sep 2023

avasopht wrote:
24 Sep 2023
The condescending tone (with the video) and persistent nitpicking are pretty disappointing, Giles.

I appreciate that we all go through our own personal and private struggles and that we don't know what another person might be going through, but this is all so small, ...
Sincere apologies, I honestly had no idea.
I try to stick to the facts for clarity because there is so much that is not clear online, I had no intention of causing hurt to anyone.
Selig Audio, LLC

RobC
Posts: 1848
Joined: 10 Mar 2018

26 Sep 2023

selig wrote:
25 Sep 2023
avasopht wrote:
24 Sep 2023
The condescending tone (with the video) and persistent nitpicking are pretty disappointing, Giles.

I appreciate that we all go through our own personal and private struggles and that we don't know what another person might be going through, but this is all so small, ...
Sincere apologies, I honestly had no idea.
I try to stick to the facts for clarity because there is so much that is not clear online, I had no intention of causing hurt to anyone.
Lol, how can people be hurt? We're here to learn, not to chill. When it comes to learning, a bit of butt-hurt frustration only makes people pay more attention, and think about what was said, more. Besides, if we get frustrated when learning, it means we still suck at something.

For fairies and rainbow muffins, we can go to The Kitchen. : D

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

26 Sep 2023

RobC wrote:
26 Sep 2023
Lol, how can people be hurt? We're here to learn, not to chill. When it comes to learning, a bit of butt-hurt frustration only makes people pay more attention, and think about what was said, more. Besides, if we get frustrated when learning, it means we still suck at something.

For fairies and rainbow muffins, we can go to The Kitchen. : D
I wasn't saying I was hurt, but that it looked a lot more to me like persistent nitpicking (with a clearly condescending "gotcha" tone in some places) than a normal well-intentioned interaction. He says, "I'm just stating facts", but they came with clear and unequivocal implications (hence some of the "gotcha" tones).



Unfortunately, I tend to find that when someone is hell-bent on misinterpreting you and nitpicking, the more you explain, the more they have to nitpick.



It seems to be an unconscious thing where someone has a preconception about the other, so rather than seeking to understand what they're actually saying, they seek faults to "correct" the person and score a win.

---

For example, "Check out this comparison of summing or playing audio in Reason compared to Logic. Spoiler - they are exactly the same."

Now, I already said they had the exact same levels, so that didn't come across well to me.

My only point with the VU Offset in Reason was: "Maybe he was looking at the metering in Reason and was looking at what 0dB was on the meters in Reason that are affected by the VU Offset".

That was ALL I was getting at, and I highly doubt Selig didn't get that. So everything else was most certainly nitpicking. There is no way he didn't understand what I was getting at.



Now, I am fully at fault for being unclear. I was severely sleep-deprived when writing this (and the stuff about the Mackie is from 16 years ago, so there are some minor details I might have mixed up). When people are tired, they can mix things up. I even mixed up stuff about my code (below) because that can happen when tired.

But the point I was actually getting at (which is exactly what was written in the manual) is pretty obvious (and is why the VU Offset exists in the first place). Selig seemed to misinterpret it even after I clearly clarified, and it's obvious what I was actually getting at.



And there's absolutely no way anyone would think that changing the VU Offset would change the output volume, because it doesn't.

His response was intended to assert the idea I held a bunch of extreme beliefs that just don't make sense. It wasn't to clarify IMO.




Now, for clarity, the video below shows a Rack Extension I developed (but did not release). It would be impossible for me to write a novel DSP algorithm that works if I didn't understand what 0 dBFS was:



What my Rack Extension does

Note: the RE isn't a distortion effect. It's actually an RMS-relative compressor (for targeting peaks to lower LUFS). A threshold of, say, +6 dB would start to compress when the signal is 6 dB higher than the RMS. The "aggression" Combinator knob lowers the threshold, increases the compression ratio, and can alter the RMS window. In this video, it does all of this to the extremes.
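A minimal sketch of that behaviour (illustrative names, and a one-pole mean-square standing in for whatever RMS windowing the actual RE uses):

```cpp
#include <cmath>

// Toy RMS-relative compressor: the threshold floats at a fixed dB offset
// above the running RMS, so only peaks that stick out past that offset get
// compressed. Not the actual RE's internals.
class RmsRelativeCompressor {
 public:
  RmsRelativeCompressor(float threshold_db, float ratio, float window_coeff)
      : threshold_gain_(std::pow(10.f, threshold_db / 20.f)),
        ratio_(ratio), coeff_(window_coeff), mean_square_(0.f) {}

  float Process(float x) {
    // One-pole running mean square approximates an RMS window; a smaller
    // coeff_ means a shorter window.
    mean_square_ = coeff_ * mean_square_ + (1.f - coeff_) * x * x;
    float rms = std::sqrt(mean_square_);
    float threshold = rms * threshold_gain_;  // e.g. +6 dB above RMS
    float mag = std::fabs(x);
    if (threshold <= 0.f || mag <= threshold) return x;
    // Compress only the overshoot above the floating threshold.
    float compressed = threshold + (mag - threshold) / ratio_;
    return std::copysign(compressed, x);
  }

 private:
  float threshold_gain_, ratio_, coeff_, mean_square_;
};
```

Lowering `threshold_db` and raising `ratio_` together is roughly what the "aggression" knob described above would do.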




We all have our own barometers for intent. The fact that he keeps insisting I don't know what 0 dBFS is, and is making out that he needed to prove to me that the levels are the same after I already said they are, moves the needle on my barometer.

This is the code from the tape delay (used in the above Rack Extension for lookahead that can be smoothly changed):

Code:

#include "tape_delay.h"
// TapeDelay methods as well as three DelayReader classes are defined
// in this file.

#include "delay_reader.h"
#include "rc_circuit.h"
#include "circular_buffer.h"

#include <cassert>
#include <cmath>

class SlowReader : public DelayReader {
public:
  SlowReader() : delay_(0), buffer_(0), filter_() {
    filter_.FilterResponse(2);
  }

  virtual float Process() {
    const float kError = 0.1f;

    size_t index = static_cast<size_t>(delay_ + kError);
    float sample = (*buffer_)[index];

    // Increase delay so that reader is reading at 3/4 speed.
    delay_ += 0.25f;

    return filter_.Process(sample);
  }

  virtual float delay() const { return floor(delay_); }

  virtual void SetBuffer(FloatRingBuffer &buffer) { buffer_ = &buffer; }

  virtual void SetDelay(float delay) {
    assert(delay >= 0);
    assert(buffer_);

    delay_ = delay;

    // Read previous sample so that first call to Process produces a smooth
    // signal.
    size_t previous_index = static_cast<size_t>(delay_ + 1.f);
    filter_.SetCharge((*buffer_)[previous_index]);
  }

private:
  float delay_;
  FloatRingBuffer *buffer_;
  RcCircuit filter_;

};

class FastReader : public DelayReader {
public:

  FastReader() : delay_(0), buffer_(0) {
    // Low-pass at 3/4 band to suppress aliasing from the faster read rate.
    filter_.LowPassPercent(3.f / 4.f);
  }

  virtual void SetBuffer(FloatRingBuffer &buffer) { buffer_ = &buffer; }

  virtual void SetDelay(float delay) {
    assert(delay >= 0);
    assert(buffer_);

    delay_ = delay;

    // Charge filter to equal latest sample in buffer.
    size_t index = static_cast<size_t>(delay);
    filter_.SetCharge((*buffer_)[index]);
  }

  virtual float Process() {
    // Error adjustment in case of floating point precision drift.
    const float kError = 0.1f;

    // Get buffer position then move read head.

    size_t first_index = static_cast<size_t>(delay_ + kError);
    delay_ -= (4.f / 3.f) - 1.f;
    size_t second_index = static_cast<size_t>(delay_ + kError);

    float first_sample = (*buffer_)[first_index];
    float second_sample = (*buffer_)[second_index];

    // Push both samples through the smoothing filter and return the latest
    // output, keeping the 4/3-rate read continuous.
    filter_.Process(first_sample);
    return filter_.Process(second_sample);
  }

  virtual float delay() const { return floor(delay_); }
private:
  RcCircuit filter_;
  float delay_;
  FloatRingBuffer *buffer_;

};

class NormalReader : public DelayReader {
public:
  NormalReader() : delay_(0), buffer_(0) {}

  virtual void SetBuffer(FloatRingBuffer &buffer) { buffer_ = &buffer; }

  virtual void SetDelay(float delay) { delay_ = delay; }

  virtual float Process() {
    const float kError = 0.1f;

    size_t index = static_cast<size_t>(delay_ + kError);
    return (*buffer_)[index];
  }

  virtual float delay() const { return delay_; }
private:
  float delay_;
  FloatRingBuffer *buffer_;

};

//////////////////////////////////////////////////////////////////////////


TapeDelay::TapeDelay() :
    delay_setting_(0.f),
    current_reader_(0),
    fast_reader_(new FastReader),
    normal_reader_(new NormalReader),
    slow_reader_(new SlowReader),
    buffer_(0){
  current_reader_ = normal_reader_;
}

TapeDelay::TapeDelay(size_t buffer_size) :
    delay_setting_(0.f),
    current_reader_(0),
    fast_reader_(new FastReader),
    normal_reader_(new NormalReader),
    slow_reader_(new SlowReader),
    buffer_(0) {
    current_reader_ = normal_reader_;
    buffer_ = new FloatRingBuffer(buffer_size);
    SetBuffer(*buffer_);
}

void TapeDelay::SetBuffer(FloatRingBuffer &buffer) {
  fast_reader_->SetBuffer(buffer);
  normal_reader_->SetBuffer(buffer);
  slow_reader_->SetBuffer(buffer);
}

void TapeDelay::SetDelay(float delay_samples) {
  delay_setting_ = delay_samples;
}

float TapeDelay::Process() {

  // Calculate ideal read speed.
  // ... Calculate required speed.
  DelayReader *ideal_reader = GetIdealReader();

  // ... Activate read state on speed change.
  if(ideal_reader != current_reader_) {
    float last_delay = floor(current_reader_->delay());

    current_reader_ = ideal_reader;
    current_reader_->SetDelay(last_delay);
  }

  return current_reader_->Process();
}

float TapeDelay::read_position() const {
  assert(current_reader_);
  return current_reader_->delay();
}

DelayReader * TapeDelay::GetIdealReader() {
  const float current_delay = current_reader_->delay();

  // Read FAST if current delay is higher than setting.
  if(current_delay > delay_setting_) {
    return fast_reader_;
  }

  // Read slowly if current delay is lower than setting.
  if(current_delay < delay_setting_) {
    return slow_reader_;
  }

  // Read NORMALLY if current delay is equal to setting.
  assert(current_delay == delay_setting_);
  return normal_reader_;
}

void TapeDelay::SetDelayImmediately(float delay_samples) {
  delay_setting_ = delay_samples;

  current_reader_ = normal_reader_;
  current_reader_->SetDelay(delay_samples);
}

float TapeDelay::current_delay() const {
  return current_reader_->delay();
}

void TapeDelay::ProcessBatch(const float *in, float *out, size_t size)
{
    if (!buffer_) return;

    for (size_t i = 0; i < size; ++i) {
        buffer_->PushFront(in[i]);
        out[i] = Process();
    }
}
One of the reasons I didn't release it was that I wasn't happy with my tape delay using linear interpolation and nearest neighbour, which is a naive way to pitch shift that causes aliasing. I also wanted to add some more control to the compressor behaviour (which was a much deeper dive into what characteristics both matter and are nice to change).
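For reference, the two naive fractional-read strategies look like this (hypothetical helper names; a released version would want windowed-sinc or similar instead):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Nearest neighbour just rounds the read position: cheap, but the stepping
// between samples is what causes audible aliasing when pitch shifting.
float ReadNearest(const std::vector<float> &buf, float pos) {
  size_t i = static_cast<size_t>(pos + 0.5f);
  return buf[std::min(i, buf.size() - 1)];
}

// Linear interpolation blends the two neighbouring samples: smoother, but
// still a naive resampler that rolls off and aliases high frequencies.
float ReadLinear(const std::vector<float> &buf, float pos) {
  size_t i = static_cast<size_t>(pos);
  float frac = pos - static_cast<float>(i);
  size_t j = std::min(i + 1, buf.size() - 1);
  return buf[i] * (1.f - frac) + buf[j] * frac;
}
```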

I do have a chronic issue with building things and not releasing them. It's perfectionism to a fault, and it's why most of the music I release was made many years earlier!

User avatar
Loque
Moderator
Posts: 11188
Joined: 28 Dec 2015

27 Sep 2023

Please focus on topic. Keep discussion objective. Avoid personal discussions and do not talk about ppl, please. Stay calm and nice.

Just a mod note.
Reason12, Win10

User avatar
motuscott
Posts: 3446
Joined: 16 Jan 2015
Location: Contest Weiner

27 Sep 2023

It seems to me that reason and logic are inadequate to combat the problems we face today.
Bring on the Bat Shit Insanity.

My mistake, it's already here
Who’s using the royal plural now baby? 🧂

Troublemecca
Posts: 151
Joined: 04 Jun 2018

29 Sep 2023

robussc wrote:
22 Sep 2023
Troublemecca wrote:
21 Sep 2023
This… in Logic I use one of its various metering tools to get the LUFS to about -15 for the streaming sites (and peak dBFS at -1)… is there a way to do this in Reason 11? I have the mixing Rack Extension package that Propellerhead sold with it years ago.
I use the Youlean Loudness Meter VST as the final insert on the master buss for measuring the mix output level.
YESSSS, ty
selig wrote:
22 Sep 2023
Troublemecca wrote:
21 Sep 2023


This… in Logic I use one of its various metering tools to get the LUFS to about -15 for the streaming sites (and peak dBFS at -1)… is there a way to do this in Reason 11? I have the mixing Rack Extension package that Propellerhead sold with it years ago.
Let me stop you right there and ask an obvious question: are you using mix references when mixing? For example, what LUFS are the mixes in a similar genre as yours? If you don't already know this, I would suggest it is an excellent place to start, and I think you'll find that no one (unless you're an acoustic jazz/folk or classical musician) is releasing material that low.
Think you've got me there, Selig... I've aimed for -15 LUFS based on the streaming services' recommendations... I started doing this when I observed SoundCloud (and Spotify) absolutely crush what were otherwise clean tracks; by sticking to their LUFS recommendations I've avoided that... but I concede that my tracks sound quiet compared with others in the space!

Reference tracks always depressed me, so I never used them :lol: thank you for the input, I will humble myself and give it a try.

User avatar
Aosta
Posts: 1059
Joined: 26 Jun 2017

29 Sep 2023

There is a real simple solution, why not just make them 1 louder?

Tend the flame

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

29 Sep 2023

Troublemecca wrote:
29 Sep 2023
Reference tracks always depressed me, so I never used them :lol:
They solve a great many problems, not all of which boil down to filthy commerce or giving in to peer pressure. :-)

For example, I learned from reference tracks that there's often a lot of empty space in the higher frequencies you don't have to consume just because it's available. Once I saw this in a spectrum analyzer, I stopped assuming I needed to fill that space, and then I realized: hey, I don't miss that. It doesn't sound dead; it sounds normal. There's still some stuff up there, but it's OK if not everything "contributes" up there in some obvious way.

Maybe I'm "wasting space", but who thinks like that? (Software engineers. That's who. Guilty as charged.)

And this avoids my having to discover that my mix sounds terrible on other speakers and not being sure how to roll off enough shrill without rolling off too much because I'm away from the tools.

Plus you kinda "get back" some edge when you apply tape saturation during mastering, which I pretty much always do because I am a Very Old and it presses my buttons.

All of this seems kind of dumb in retrospect, but I have never shied away from making a fool of myself in public to benefit others. :-)

robussc
Posts: 493
Joined: 03 May 2022

30 Sep 2023

integerpoet wrote:
29 Sep 2023
For example, I learned from reference tracks that there's often a lot of empty space in the higher frequencies you don't have to consume just because it's available. Once I saw this in a spectrum analyzer, I stopped assuming I needed to fill that space, and then I realized: hey, I don't miss that. It doesn't sound dead; it sounds normal. There's still some stuff up there, but it's OK if not everything "contributes" up there in some obvious way.
Yeah, that’s something I’m still getting wrong. My mixes have way too much high end when I compare to a reference. But it does feel wrong to reduce it :)
Software: Reason 12 + Objekt, Vintage Vault 4, V-Collection 9 + Pigments, Vintage Verb + Supermassive
Hardware: M1 Mac mini + dual monitors, Launchkey 61, Scarlett 18i20, Rokit 6 monitors, AT4040 mic, DT-990 Pro phones

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

02 Oct 2023

robussc wrote:
30 Sep 2023
integerpoet wrote:
29 Sep 2023
For example, I learned from reference tracks that there's often a lot of empty space in the higher frequencies you don't have to consume just because it's available. Once I saw this in a spectrum analyzer, I stopped assuming I needed to fill that space, and then I realized: hey, I don't miss that. It doesn't sound dead; it sounds normal. There's still some stuff up there, but it's OK if not everything "contributes" up there in some obvious way.
Yeah, that’s something I’m still getting wrong. My mixes have way too much high end when I compare to a reference. But it does feel wrong to reduce it :)
It took me a long time to understand and accept that on headphones I am tempted to include more high frequency content than on speakers. Some folks will want to mention phase cancellation here, but this was the opposite of "hey where'd all my high frequencies go?" (Plus I'm a fanatical devotee of the mono switch.)

My working hypothesis is that it has something to do with — at least for modern-ish stereo mixes — the side channel tending to carry more of the high frequencies. On headphones, precisely half of the side-channel signal hits each ear, but on speakers each ear gets more than half. For bass, though, I tend to mix toward the center. I'm sure somebody here with more of a clue than me knows the real story.

Another lesson it took me too long to learn is not to worry that the more one closes a synth filter the greater a disservice one is doing to math. That's just silly; math can't hear! And, besides, isn't the filter just slightly different math? Or maybe try shaving off the gratuitously shrill spikes of synth math in a pleasing way by running it through a bass amp sim, which can seem like an act of destruction but actually adds a lot more math!

One often reads that if something sounds good then it is good. But it's also true that if something sounds bad then it is bad even if — intellectually — fixing it seems like throwing away or corrupting perfectly good data.

User avatar
selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

03 Oct 2023

robussc wrote:
30 Sep 2023
integerpoet wrote:
29 Sep 2023
For example, I learned from reference tracks that there's often a lot of empty space in the higher frequencies you don't have to consume just because it's available. Once I saw this in a spectrum analyzer, I stopped assuming I needed to fill that space, and then I realized: hey, I don't miss that. It doesn't sound dead; it sounds normal. There's still some stuff up there, but it's OK if not everything "contributes" up there in some obvious way.
Yeah, that’s something I’m still getting wrong. My mixes have way too much high end when I compare to a reference. But it does feel wrong to reduce it :)
As for wasted space, were you LOOKING at the ref track and not LISTENING to it!?! ;)

As for overly bright (or dark in my case) mixes, the "go to the source" solution for me has always been to adjust the tweeter level on the speakers I mix on. If you mix bright on your system and don't want to have to learn to 'prefer' a darker sound, make your system brighter so you don't feel the need to add as much additional brightness. Adjust your system to sound good to YOU on reference material you know sounds good on most systems (again, sounds good to YOU), and the rest should take care of itself.

Bottom line, if you are mixing too hot, turn up your monitors. If you are mixing too bass heavy, add some sub level (assuming you're already using a full range system). And if you're mixing too bright, make your playback system brighter.
That way you can just keep mixing how you want to hear it, and not worry about it translating to other systems.

Historical reference: some tracking rooms in the '60s and '70s had a 'tracking EQ' on the main monitors that made them darker than normal. The result was that when tracking, you made the sounds a little brighter than you thought they were. And the result of that was that after "time" and lots of playbacks, the high-frequency loss that normally happens wouldn't lead to a dull mix. And even if you mixed right away, you could actually turn DOWN some high-frequency energy on some tracks, which resulted in less noise. Win/win!

All of these techniques work because we assume what we hear is 'flat' and adjust/mix accordingly, so why not take advantage of that. ;)
Selig Audio, LLC

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

03 Oct 2023

Are there any genre-specific considerations or techniques for improving perceived loudness?

I've seen Timbaland and Deadmau5 mention shortening the length of heavy-hitting kicks, which is apparently a common practice in (IIRC) dance music.

From time to time I seem to be able to get decent perceived loudness without turning it to mush, but mostly I just go through my process.

One top mixing engineer showed me step by step how his chains of effects stack up in a mix, and there are so many little things he does that make sense but seem counterintuitive to naturally arrive at.

User avatar
integerpoet
Posts: 832
Joined: 30 Dec 2020
Location: East Bay, California
Contact:

03 Oct 2023

selig wrote:
03 Oct 2023
As for wasted space, were you LOOKING at the ref track and not LISTENING to it!?! ;)
You were replying to someone else, but yeah, years ago when I first saw a spectrum analyzer -- and before I started trying to make stuff sound good myself -- the visual aspects of the experience evidently made a strong impression. Without instruction or perspective, it's easy to jump to the conclusion that every point on the center line is at least somewhat relevant to every sound. I get the impression from your other posts that you may have been immune to this because you already had experience with the feedback loop between knob and ear without any meddlesome interference from eyes. I think this is especially insidious with soft synths and to a lesser extent canned samples because you never hear an actual instrument vibrating actual air molecules as it was recorded.
As for overly bright (or dark in my case) mixes, the "go to the source" solution for me has always been to adjust the tweeter level on the speakers I mix on. If you mix bright on your system and don't want to have to learn to 'prefer' a darker sound, make your system brighter so you don't feel the need to add as much additional brightness.
That seems like a great idea and makes me wonder whether my answer is a more sophisticated signal path from the computer to my headphones. The path I have now goes directly from Thunderbolt to an amplified analog (headphone) signal. Maybe a discrete DAC through a discrete EQ to a discrete amp? Because that'll be cheap. :-)

I should probably also revisit the RE which purports to simulate speakers on headphones. I bought it but ended up not making it a part of my regular workflow and I can't remember why. It'll be funny if I listen now with better perspective and realize I must have thought it was making things too bright.

And yes of course actual monitors would be better. But it would drive my wife insane if that were my standard workflow. :-)

Harpuia
Posts: 22
Joined: 10 Mar 2021

03 Oct 2023

avasopht wrote:
03 Oct 2023
Are there any genre-specific considerations or techniques for improving perceived loudness?

I've seen Timbaland and Deadmau5 mention shortening the length of heavy-hitting kicks, which is apparently a common practice in (IIRC) dance music.

I seem to be able to get decent perceived loudness without turning it to mush from time to time, but mostly I just go through my process.

One top mixing engineer showed me step by step how his chains of effects stack up in a mix, and there are so many little things he does that make sense but seem counterintuitive to naturally arrive at.
For really nasty genres (i.e. dubstep/riddim/trench/tearout), clipping the master can be a very good way of increasing the perceived loudness. I generally run a hard clipper like freeclip towards the end of my mastering chain, and if the track calls for it, I'll bump the volume going into the clipper to saturate the master ever so slightly. It's a dumb way to do it, but it can definitely work in very specific contexts.

User avatar
selig
RE Developer
Posts: 11747
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

04 Oct 2023

Harpuia wrote:
03 Oct 2023
avasopht wrote:
03 Oct 2023
Are there any genre-specific considerations or techniques for improving perceived loudness?

I've seen Timbaland and Deadmau5 mention shortening the length of heavy-hitting kicks, which is apparently a common practice in (IIRC) dance music.

I seem to be able to get decent perceived loudness without turning it to mush from time to time, but mostly I just go through my process.

One top mixing engineer showed me step by step how his chains of effects stack up in a mix, and there are so many little things he does that make sense but seem counterintuitive to naturally arrive at.
For really nasty genres (i.e. dubstep/riddim/trench/tearout), clipping the master can be a very good way of increasing the perceived loudness. I generally run a hard clipper like freeclip towards the end of my mastering chain, and if the track calls for it, I'll bump the volume going into the clipper to saturate the master ever so slightly. It's a dumb way to do it, but it can definitely work in very specific contexts.
I've recently been getting back into using clipping as a part of my overall production process. One thing that works for me is clipping at the track level (the last plugin on any one track), setting the clippers to -12 dBFS across all tracks and then just pushing things a little harder into the clipper (turn up the pre-clipper level!) if I want a louder mix. Why that level? Because it's where the tracks need to sit such that my masters peak between -3 and -6 dBFS on average. So I know that my individual clippers won't let the mix level increase even if I turn up levels at any point except the fader. I have less success with clipping on the master, probably because the track is already so clipped (a denser signal with a lower crest factor) that it just makes things harsher very quickly. It's like trying to clip a square wave - it's already clipped, so you can't really get it to be MORE clipped.

I would go so far as to say some form of controlled intentional clipping at some point in the signal path is all but a requirement for a loud mix. The most obvious way to increase loudness is to increase the average level without increasing the peaks, and nothing does that like a clipper! Some sounds, particularly non pitched percussive sounds with strong transients (high crest factor) can withstand 3-5 ms or more of hard clipping before the human ear will register it as "distortion" or clipping. Other sounds, such as any sine type or natural acoustic sound can 'reveal' clipping much sooner. That's one big reason I end up using clipping on individual channels, and never on vocals (we have Tube Screamers for that!).
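A bare-bones version of that per-track setup (hypothetical function names; the -12 dBFS ceiling and the pre-clipper drive match the description above):

```cpp
#include <algorithm>
#include <cmath>

// Hard clip: flatten anything outside +/- ceiling.
float HardClip(float x, float ceiling) {
  return std::max(-ceiling, std::min(ceiling, x));
}

// Last plugin on a track: adjustable pre-gain into a fixed -12 dBFS ceiling.
// Raising drive_db increases average level while the ceiling pins the peaks.
float TrackClip(float x, float drive_db) {
  const float kCeiling = std::pow(10.f, -12.f / 20.f);  // -12 dBFS, ~0.251
  float pre_gain = std::pow(10.f, drive_db / 20.f);
  return HardClip(x * pre_gain, kCeiling);
}
```

Because the ceiling is fixed, turning up the drive (or anything upstream) can never raise a track's peak contribution to the mix, which is the point of the scheme described above.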

As for counterintuitive, that was a big revelation for me as well. Turns out more is not always more, especially in audio. It took me years to realize that sometimes cutting low frequencies (with a low shelf) could make the vocal brighter in a more subtle way than boosting top end (with a high shelf). I would never have stood a chance at learning audio engineering on my own, I owe everything to the mentors who I assisted and who graciously shared their wisdom with me. Most of the time, what they were doing seemed totally counterintuitive, but I just kept copying what I saw them doing and eventually it began to stick.
Selig Audio, LLC

robussc
Posts: 493
Joined: 03 May 2022

04 Oct 2023

selig wrote:
03 Oct 2023

As for overly bright (or dark in my case) mixes, the "go to the source" solution for me has always been to adjust the tweeter level on the speakers I mix on. If you mix bright on your system and don't want to have to learn to 'prefer' a darker sound, make your system brighter so you don't feel the need to add as much additional brightness. Adjust your system to sound good to YOU on reference material you know sounds good on most systems (again, sounds good to YOU), and the rest should take care of itself.

Bottom line, if you are mixing too hot, turn up your monitors. If you are mixing too bass heavy, add some sub level (assuming you're already using a full range system). And if you're mixing too bright, make your playback system brighter.
That way you can just keep mixing how you want to hear it, and not worry about it translating to other systems.

Historical reference: Some tracking rooms in the 60-70s had a 'tracking EQ' on the main monitors, which was darker than normal. The result is when tracking, you made the sounds a little brighter than you thought they were. And the result of that was that after "time" and lots of playbacks, the high frequency loss that normally happens wouldn't lead to a dull mix. And even if you mixed right away, you could actually turn DOWN some high frequency energy on some tracks which resulted in less noise. Win/win!

All of these techniques work because we assume what we hear is 'flat' and adjust/mix accordingly, so why not take advantage of that. ;)
That's all great advice, thanks. I'm definitely considering adding a sub, because that's also something that is only showing up during in-car playback (which has a sub). And tweaking the tweeter is also great. And yes, I'm not being as diligent with the reference as I should!
Software: Reason 12 + Objekt, Vintage Vault 4, V-Collection 9 + Pigments, Vintage Verb + Supermassive
Hardware: M1 Mac mini + dual monitors, Launchkey 61, Scarlett 18i20, Rokit 6 monitors, AT4040 mic, DT-990 Pro phones

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

04 Oct 2023

selig wrote:
04 Oct 2023
I would go so far as to say some form of controlled intentional clipping at some point in the signal path is all but a requirement for a loud mix. ...

... non pitched percussive sounds with strong transients (high crest factor) can withstand 3-5 ms or more of hard clipping before the human ear will register it as "distortion" or clipping ...
That's useful info.

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

18 Oct 2023

@selig: what are your thoughts on this video?

I recall you giving a similar perspective in the past.


thomaseh
Posts: 5
Joined: 20 Oct 2023

20 Oct 2023

avasopht wrote:
22 Sep 2023

I usually put an MClass Maximizer before the Hardware Interface to monitor the final peak dBFS levels. I sometimes even wrap it up into a combinator with a button to switch between the two.
That is the way I did it too. I'm always amazed how fast you can master something really well in Reason.

Isn't that the default nowadays?

I've never output to another device, though.


Btw: the thread title isn't quite accurate.
Peak, dBFS, and the loudness war are completely different, though related, topics.

thomaseh
Posts: 5
Joined: 20 Oct 2023

20 Oct 2023

integerpoet wrote:
24 Sep 2023
selig wrote:
23 Sep 2023
0 dBFS is the same on every DAW!
This is definitionally true.

Except when a DAW has a bug. :-)
Some interfaces, like Yamaha's, clip at the analogue stage before A-D conversion, so you often can't reach 0 dBFS on the inputs. Some criticize this, but for me it's the correct and perfect solution for recording and live mixing, where a short analogue clip isn't noticeable but a digital one is extremely annoying.
