Getting an MFIT approval from Apple? - Reason Mastering Engineers

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
RobC
Posts: 1832
Joined: 10 Mar 2018

20 Mar 2018

No matter what, Reason is my go-to DAW when it comes to audio.
As far as I know, you need an Apple computer to even run their pretty basic diagnostic programs. (I know Reason supports both Mac and Windows computers.)

If I remember correctly, they ask for -16 LUFS with -1 dB of headroom, no clipping, etc. Pretty simple. (Though Reason could use a file analysis tool.)
(I have a feeling even that is too loud if you're mastering for human hearing - but if they say that's the shiz, then so be it. - My philosophy is that mastering is about making the music sound tonally as good as possible, not louder...)

I presume there's no way to avoid buying an Apple computer? Kind of pricey... (Says the guy currently writing from a Dell laptop. xD)

It would be best if they tested you by giving you an audio file to master, then decided whether they're satisfied. It was kind of sad when I saw an MFIT album that was still trashed for no reason.

I'd need an MFIT certificate to publish songs to the iTunes MFIT category without the unnecessary expense of hiring another mastering engineer just to run some diagnostics.

So, any way around an Apple computer for that?

Data_Shrine
Posts: 517
Joined: 23 Jan 2015

20 Mar 2018

I have no idea... but I have been approved for MFiT releases. It's a quality standard: the tools let you hear how the master will sound in the iTunes Store format, so you can adjust it accordingly. In effect, you make a master for the compressed format.

They can still decide not to give you the MFiT label even if it fits the bill. It's complicated to upload via my distributor, so I don't bother with it anymore.

RobC
Posts: 1832
Joined: 10 Mar 2018

20 Mar 2018

Honestly, in my experience, people still try to sneak loudness-war masters past the quality standards.

Does that mean that even if they get a professional master (one that sounds good for a change, haha) that fits their specifications, they can still reject it? That would suck. I've read that people who struggle badly with equalization get certified. I wonder how it works...

There are indie services like ReverbNation that take care of iTunes releases for a fee, if I remember correctly.

Tarekith
Posts: 4
Joined: 21 Mar 2018

21 Mar 2018

I'm an Apple-certified MFiT mastering engineer, so maybe I can help clear some of this up.

There is no LUFS recommendation for the MFiT standard, though they do use that (along with -1 dBTP) for their Sound Check. Weird, I know, but the only requirement for MFiT releases is that they don't clip when checked with the AURoundTripAAC plug-in or their command-line tool, and the files have to be 24-bit or higher. Apple would prefer that the files also be a higher sample rate than 44.1 kHz, but they will accept 44.1 if the mastering engineer wants to do the SR conversion themselves. The only time (once) I've heard of Apple rejecting an MFiT master was when it was clearly a 44.1k file that had just been upsampled to, say, 96k. There may be tools other than Apple's for testing the clipping aspect; I honestly don't know if there's something available for Windows.
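To illustrate the clip-check idea in the simplest possible terms, here's a hypothetical Python sketch that just counts full-scale samples in a decoded buffer. This is not Apple's tool: the real MFiT check (AURoundTripAAC or the command-line tool) examines the AAC-encoded round trip and catches inter-sample (true peak) overs, which this toy version does not.

```python
# Hypothetical toy version of a clip check on decoded float samples.
# NOT Apple's tool: the real MFiT check round-trips through the AAC
# encoder and also catches inter-sample (true peak) overs.

def count_clipped(samples, ceiling=1.0):
    """Count samples whose magnitude reaches or exceeds full scale."""
    return sum(1 for s in samples if abs(s) >= ceiling)

def passes_simple_clip_check(samples):
    return count_clipped(samples) == 0

clean = [0.5, -0.7, 0.95, -0.3]   # stays below 0 dBFS
hot = [0.5, -1.0, 1.2, -0.3]      # touches/exceeds full scale
print(passes_simple_clip_check(clean))  # True
print(passes_simple_clip_check(hot))    # False
```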

You don't need to be certified to submit an MFiT release. The MFiT certification is just Apple's way of checking that certain mastering engineers understand the process, in case they need to refer an artist to them. However, you DO need to find an aggregator that supports MFiT releases, of which there are very few. I know CD Baby will do it if you contact them, and I've heard ReverbNation would as well, though I have no experience there. Even then, you typically have to submit the MFiT version as a separate album for JUST iTunes, and usually they charge you again.

Hope that helps; let me know if you have any questions.

normen
Posts: 3431
Joined: 16 Jan 2015

21 Mar 2018

You don't need to do anything. Apple basically just informs you that the audio WILL be adapted to that LUFS range. So you can either send it in that way, or Apple's algorithm will analyze the song and set the volume accordingly.

It IS true - more than ever - that you should ONLY make your music sound good in mastering, nothing else. Trying to make stuff loud WILL make it sound bad these days, as the content deliverers (radio, iTunes, Spotify, YouTube, etc.) use LUFS metering and set the volume for each track.

selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

21 Mar 2018

Tarekith wrote:
21 Mar 2018
I'm an Apple-certified MFiT mastering engineer, so maybe I can help clear some of this up.

There is no LUFS recommendation for the MFiT standard, though they do use that (along with -1 dBTP) for their Sound Check. Weird, I know, but the only requirement for MFiT releases is that they don't clip when checked with the AURoundTripAAC plug-in or their command-line tool, and the files have to be 24-bit or higher. Apple would prefer that the files also be a higher sample rate than 44.1 kHz, but they will accept 44.1 if the mastering engineer wants to do the SR conversion themselves. The only time (once) I've heard of Apple rejecting an MFiT master was when it was clearly a 44.1k file that had just been upsampled to, say, 96k. There may be tools other than Apple's for testing the clipping aspect; I honestly don't know if there's something available for Windows.

You don't need to be certified to submit an MFiT release. The MFiT certification is just Apple's way of checking that certain mastering engineers understand the process, in case they need to refer an artist to them. However, you DO need to find an aggregator that supports MFiT releases, of which there are very few. I know CD Baby will do it if you contact them, and I've heard ReverbNation would as well, though I have no experience there. Even then, you typically have to submit the MFiT version as a separate album for JUST iTunes, and usually they charge you again.

Hope that helps; let me know if you have any questions.
Thanks for the clear info!

Just curious, how much of your work is delivered as MFiT files: everything (in addition to .wav), or just by request?

I personally think folks get too hung up on the LUFS measurement for music mixes - just make the mix sound "good" (whatever that means to each individual) because the playback on most streaming sites will adjust its final level (but not its dynamics) anyway! The most I do is to check a mix next to another (either a commercial mix or one of my own) for reference IF I have any question about what I'm doing to the dynamics!

What are your thoughts on these subjects, and the idea of "self mastering" in general?
My approach has always been to use a reputable ME (have used Bob Olhsson for years ever since he moved to Nashville) whenever possible, for anything that will be released on a large scale.

Welcome to Reason Talk, btw!
Selig Audio, LLC

Tarekith
Posts: 4
Joined: 21 Mar 2018

21 Mar 2018

selig wrote:
21 Mar 2018

Just curious, how much of your work is delivered as MFiT files: everything (in addition to .wav), or just by request?

I personally think folks get too hung up on the LUFS measurement for music mixes - just make the mix sound "good" (whatever that means to each individual) because the playback on most streaming sites will adjust its final level (but not its dynamics) anyway! The most I do is to check a mix next to another (either a commercial mix or one of my own) for reference IF I have any question about what I'm doing to the dynamics!

What are your thoughts on these subjects, and the idea of "self mastering" in general?
My approach has always been to use a reputable ME (have used Bob Olhsson for years ever since he moved to Nashville) whenever possible, for anything that will be released on a large scale.

Welcome to Reason Talk, btw!
I only deliver MFiT files when asked to, and these days it's pretty rare to get that request. Maybe 5-10% of clients asked for it when MFiT first rolled out, but most realized that their aggregators didn't support it anyway, so they stopped asking.

I'd agree that there's really not a lot of use paying attention to something like LUFS measurements if you're just doing a mixdown; it's more of a mastering concern. The only exception would be if someone was compressing all the elements of the mix so much that they might still end up over the LUFS recommendations for streaming sites. Which brings up another point: all this LUFS talk only really applies to streaming services like Spotify, YouTube, etc. And even then, NONE of these services have published guidelines on LUFS recommendations; it's all been figured out after the fact by independent engineers. So things could still change, a la Spotify recently going to -14 LUFS versus the -12 LUFS they were using before (I think it was -12, don't quote me). Very frustrating as a mastering engineer :) Here's more info on this for people who are curious; this is an article I wrote for warpacademy.com:

https://www.warpacademy.com/current-tre ... mastering/

Self mastering? I think it's fine if people do that; I even wrote a guide to help people master their own songs, even though I'm a professional mastering engineer. I do think that if people are going to "master" their own music, the only real thing they need to worry about in that step is the final volume level, i.e. limiting. Everything else can be done in the mixdown with greater control than we mastering engineers usually get. I don't normally find that long, complicated mastering chains for your own music make much sense. They generally just seem to over-cook things more than anything.

Obviously I think there's very real value though in having someone unbiased with lots of experience handle the mastering if that's an option. Even if only for a couple of releases to provide a sort of yardstick to aim for in terms of what can be done to your songs.

Thanks for the welcome too, a friend pointed me to this thread as he thought I might be able to share some insights. Glad to be here!

selig
RE Developer
Posts: 11685
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

21 Mar 2018

Tarekith wrote:
21 Mar 2018
selig wrote:
21 Mar 2018

Just curious, how much of your work is delivered as MFiT files: everything (in addition to .wav), or just by request?

I personally think folks get too hung up on the LUFS measurement for music mixes - just make the mix sound "good" (whatever that means to each individual) because the playback on most streaming sites will adjust its final level (but not its dynamics) anyway! The most I do is to check a mix next to another (either a commercial mix or one of my own) for reference IF I have any question about what I'm doing to the dynamics!

What are your thoughts on these subjects, and the idea of "self mastering" in general?
My approach has always been to use a reputable ME (have used Bob Olhsson for years ever since he moved to Nashville) whenever possible, for anything that will be released on a large scale.

Welcome to Reason Talk, btw!
I only deliver MFiT files when asked to, and these days it's pretty rare to get that request. Maybe 5-10% of clients asked for it when MFiT first rolled out, but most realized that their aggregators didn't support it anyway, so they stopped asking.

I'd agree that there's really not a lot of use paying attention to something like LUFS measurements if you're just doing a mixdown; it's more of a mastering concern. The only exception would be if someone was compressing all the elements of the mix so much that they might still end up over the LUFS recommendations for streaming sites. Which brings up another point: all this LUFS talk only really applies to streaming services like Spotify, YouTube, etc. And even then, NONE of these services have published guidelines on LUFS recommendations; it's all been figured out after the fact by independent engineers. So things could still change, a la Spotify recently going to -14 LUFS versus the -12 LUFS they were using before (I think it was -12, don't quote me). Very frustrating as a mastering engineer :) Here's more info on this for people who are curious; this is an article I wrote for warpacademy.com:

https://www.warpacademy.com/current-tre ... mastering/

Self mastering? I think it's fine if people do that; I even wrote a guide to help people master their own songs, even though I'm a professional mastering engineer. I do think that if people are going to "master" their own music, the only real thing they need to worry about in that step is the final volume level, i.e. limiting. Everything else can be done in the mixdown with greater control than we mastering engineers usually get. I don't normally find that long, complicated mastering chains for your own music make much sense. They generally just seem to over-cook things more than anything.

Obviously I think there's very real value though in having someone unbiased with lots of experience handle the mastering if that's an option. Even if only for a couple of releases to provide a sort of yardstick to aim for in terms of what can be done to your songs.

Thanks for the welcome too, a friend pointed me to this thread as he thought I might be able to share some insights. Glad to be here!
As for LUFS and streaming, as I (thought I) understood it, it doesn't matter what you do - the worst that will happen is that your mix will either be turned up or turned down to match the apparent loudness of the other tracks. So you can master it "loud" and have it turned down (and still sound like crap if you over-did it). Or you can master it "dynamic" and have it turned up and it will still sound as dynamic as you mixed it. Or you can instead go for some arbitrary LUFS value and give up control over how it sounds, but at least it won't be adjusted by the streaming service.

In other words, and again as I thought I understood it, all streaming services simply measure the LUFS, and apply positive or negative gain (or no gain) to your track. It's not like radio where you would hit the limiter differently depending on what you did in mastering (my brother is a broadcast engineer, so I vaguely understand that world). This is no different from the listener adjusting the volume of your track as it plays, the goal being that the listener won't HAVE to do this if the streaming service does it "right".
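The gain-only model described here can be sketched in a couple of lines. This assumes the service really does apply a single static offset; the -16 LUFS target below is just an illustrative value, not any service's published number.

```python
# Sketch of gain-only loudness normalization: measure the track's
# integrated loudness, then apply one static gain offset toward a target.
# The target value is illustrative, not an official published spec.

def normalization_gain_db(track_lufs, target_lufs=-16.0):
    """Static gain (in dB) a normalizing player would apply."""
    return target_lufs - track_lufs

print(normalization_gain_db(-9.0))   # -7.0: a loud master gets turned down
print(normalization_gain_db(-20.0))  # 4.0: a dynamic master gets turned up
```

The dynamics are untouched either way; only the playback level moves, which is exactly why squashing a master buys nothing after normalization.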

Help me understand if I'm not correct in these assumptions!
:)

As for self-mastering, I rarely do more than simple brick wall limiting by a few dB, preferring instead to get the mix right - unless I'm self-mastering an "album" of songs that all need to flow (and I base what I do off of the years of sitting and watching some great ME's do their thing, but not at all pretending I know how to do their job).
Selig Audio, LLC

Tarekith
Posts: 4
Joined: 21 Mar 2018

21 Mar 2018

The issue is that not all streaming services will turn up a quiet mix, but all of them will turn down a loud mix. I believe Spotify will not only turn up a mix if it’s quiet, but apply more limiting too if it’s too quiet. There’s still some debate about this as I understand it.

So you end up in a situation where if you make it too loud and squashed, it gets turned down and still sounds squashed. Or you make it too quiet, it doesn't get turned up, and it still sounds quiet compared to other songs. Or in Spotify's case it might get turned up and limited more without you having control over this (very similar to radio in this last case).

Ideally, then, you want to aim for each platform's loudness target as closely as possible, to make sure the track is competitive in terms of loudness but doesn't get any additional dynamics processing outside of your control. Of course, this is problematic because each streaming platform uses a slightly different target LUFS value, and most (all?) aggregators will only allow you to submit one set of files for the album.
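The asymmetry described in this exchange (every service turns loud masters down, only some turn quiet ones up) can be sketched like this; the target values and `turns_up_quiet` policies are illustrative assumptions, since the services haven't published official figures.

```python
# Sketch of asymmetric normalization policies. Targets and the
# turns_up_quiet flags are assumptions for illustration only.

def playback_gain_db(track_lufs, target_lufs, turns_up_quiet):
    gain = target_lufs - track_lufs
    if gain > 0 and not turns_up_quiet:
        return 0.0  # quiet track left alone: still sounds quiet
    return gain     # loud tracks are always turned down

print(playback_gain_db(-9.0, -14.0, turns_up_quiet=False))   # -5.0
print(playback_gain_db(-20.0, -14.0, turns_up_quiet=False))  # 0.0
print(playback_gain_db(-20.0, -14.0, turns_up_quiet=True))   # 6.0
```

Under a policy like this, mastering hotter than the target only costs you dynamics, while mastering quieter than the target may cost you perceived loudness on services that don't turn tracks up.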

AES and EBU have actual standards in place for streaming, with -16 LUFS and -1 dBTP being the primary targets. As of now, only Apple follows these guidelines, and only for Sound Check and Apple Music streaming. YouTube, Pandora, Spotify, etc. all use a slightly higher reading of -13 or -14 LUFS, often with no acknowledgement of true peak. It's very annoying having a streaming standard approved and ratified but not having all the platforms follow it, let me tell you!

For my clients I often advocate using the EBU standard of -16/-1 if they are concerned about streaming. iTunes is still the largest seller of digital music, and it IS an approved standard, so let's follow it. If you're going to worry about dynamics in the first place, then do it right. Plus, even if your stuff doesn't get turned up 2 more dB for, say, Spotify, it's not that large a difference in volume, so it's still competitive.

Having said all that, 95% of the music I master is still squashed like it was in the CD days, since that's what most people want. Maybe not crazy squashed like some music, but still louder than -16 LUFS. We're making progress for sure, but the loudness wars are still far from over. Soon, I hope!

RobC
Posts: 1832
Joined: 10 Mar 2018

21 Mar 2018

Thanks everyone for all the input!
As usual, I'll add my thoughts to whatever grabs my attention, for some interesting feedback.

I thought it was about no clipping in the music itself - so peaks with very rounded edges, i.e. soft clipping.
I always work at 24-bit / 192 kHz in Reason. (Automation and compressors react more accurately, and it comes in handy when I sample a single synth sound for playing.)
If I go Apple, I don't want to do so just because of MFiT. No doubt their monitors are beautiful, though.

Here I thought it would revolutionize sound and that I could experience the proper dynamics of songs. The Prodigy having a trashed MFiT album is one thing, but when Jean-Michel Jarre's music - even Oxygene 3 - went to the loudness war, I was really disappointed.
Modern mastering is an impossible attempt at making a "one size fits all".

Good, in my opinion, means perfectly suited to human hearing. I'll determine my target loudness sometime by creating a grey noise, and if it's not too spiky, I'll add at most 1 dB of loudness to the value to compensate for the suggested headroom. After all, when many sounds play together, the waveform gets crowded. Likewise, a noise sound is a very full example and a great guide for what dynamics not to exceed. (Clearly, nobody likes short sounds as loud as a balloon popping next to our ears. And let's not forget that our ear works like a compressor above a given SPL, so there's only so much dynamics we can allow. But I'm sure it will be somewhere around -20 LUFS. The fabulous pink noise is around -16 LUFS - maybe that also has something to do with it, or it's just a coincidence?)
I'm never gonna reference other mixes, but compare, maybe. My gray noise will be a good guide.

I will use my own target value to make sounds exactly as dynamic as it calls for, not just tonally fitting. I do hate it, though, when I forget to pull my audio levels back and accidentally listen to loudness-war trash at triple strength. Anyway, it's good for keeping ourselves at bay.
Then again, I might do some research on how our ear "compresses" sound at a certain SPL and take that into account when setting my target LUFS with my gray noise. I mean, if we have to turn up a very dynamic song to enjoy it fully, and our ear "compresses" it, then there's no point in making it THAT dynamic. So there are many factors to consider; i.e., I'm not planning to start a pointless "quietness peace" either. But definitely not the "loudness war".

The fact that I want to develop a standard suited to human hearing hopefully shows my dedication to audio engineering. Current standards consider devices rather than people.

Robert Katz said it's about how you get there (loudness). I'm not gonna twist his words, so I'll trust he meant the fact that once the sound is suited for the human ear, it will appear "loud" and clear. (OMG, rhymes!! - Allow me this much immaturity. - Would be perfect for a nerdy, audio engineer kind of "geeksta rap".)

Okay, I'm only half way through comments. So many good thoughts from everyone. Gonna take a little break.

normen
Posts: 3431
Joined: 16 Jan 2015

21 Mar 2018

RobC wrote:
21 Mar 2018
Robert Katz said it's about how you get there (loudness). I'm not gonna twist his words, so I'll trust he meant the fact that once the sound is suited for the human ear, it will appear "loud" and clear. (OMG, rhymes!! - Allow me this much immaturity. - Would be perfect for a nerdy, audio engineer kind of "geeksta rap".)
This goes even further both ways - if the articulation of the guitarist is like mumbling in vocals, if the arrangement puts too many instruments in the same frequency range, if the timing of all tracks varies wildly then it will never sound "in your face" or "super clear" or anything like that.

There seems to be a BIG misconception about what mastering is these days. Some people seem to think it's what makes Lady Gaga's music sound better than theirs - they're mistaken :)

RobC
Posts: 1832
Joined: 10 Mar 2018

21 Mar 2018

I believe that when mastering as a profession, it would be best to send a signature master plus the requested one, if possible. I know that if I pull off my planned standard, all I'll need to do to the finished master is equalize and reduce dynamics as the client wishes.
Other than that: how about mastering several versions - for the phone, the kitchen radio, the toilet computer, the headphones, the car, etc.? JK.

I'm gonna use my reference "quietness" for pre-mastering sounds. Honestly, I barely want to touch my finished mix. Here comes a debate, though: in a dynamic song, what if some sounds appear too loud? Think sibilance and a hi-hat sounding at once. Or bass and kick. If the mix was individually, perfectly pre-mastered, then applying a final mastering would ruin the natural tonality, leave a mark on it, and affect every sound. Then again, it's no different in live orchestral music, so if I can pull off a perfect mix, there's little final mastering work left to do.
Oh! Mind you, the mentioned gray noise could be used to set up the loudness of any sound. So I'd never set anything's audio level by numbers. LUFS can be very inaccurate in some cases.

Atm, I'm like: screw it, for my own music I'll set up the dynamics the way I want. Any website will find a way to change it somehow if they want. As for loudness competition? Think with the listener's head. What do you do when you listen to a dynamic orchestral song, for example, or an older hit song? You turn up the audio level. (Ever since Bob Katz made a fuss about SPL, audio level, volume, and level, I don't dare use the latter words. xD Likewise, there's been no "best" since Tweakheadz.)

Hey, there's no problem with sharing mastering secrets! Nobody will be able to "steal" your signature sound - that's what really matters.
It's so true that there's no need for many mastering steps to achieve a perfect sound.
Personally, I'd never limit. I'd take into consideration what frequencies need a boost or a cut for human hearing; then I'd create a frequency split, decrease what needed a boost, increase what needed a decrease, then start soft clipping all of the bands - if any peaks remain when the audio level is raised. Finally (after setting the signal tone back to normal), if there are any peaks left in the merged sound, only then would I soft clip the whole audio. This is just a theory, though it might work. I'm a bit worried that high frequencies have spiky transients and the sub-bass region has very long waveforms, obviously. Then again, if I compensate the frequency levels in the above inverted way, I'm worried how it might affect the tonality... Positively, fitting human hearing, balanced/even - or would it sound noisy? Or would it be noisy without the compensation? Ahh, it's getting late again to think...
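As a concrete illustration of the soft-clipping part of this idea (single band only; the frequency-split step isn't shown), a tanh curve is one common shaping choice that rounds peaks off smoothly instead of hard-truncating them:

```python
import math

# Single-band soft clip via a tanh curve: nearly linear at low levels,
# smoothly rounding peaks toward +/-1.0 instead of hard-truncating them.
# The multi-band idea above would apply something like this per band.

def soft_clip(sample, drive=1.0):
    return math.tanh(drive * sample)

print(round(soft_clip(0.1), 3))  # stays essentially linear near zero
print(round(soft_clip(2.0), 3))  # peak rounded to just below full scale
```

Because tanh never reaches 1.0, the output can't clip digitally, which is what makes this "immediate limiting" with no release-time behavior.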

Self mastering... I want to challenge the notion that one can't perfectly master one's own work - after I'm done creating my desired 'mastering standard'.

All in all, if something is perfectly suited to human hearing, I believe it can still shine even if a platform does some damage to it.

I got really adventurous with sound. It started with the loudness war. I couldn't figure out how to make it sound good at modern audio levels. Then I found out: it can't sound good, because it destroys the sound. Then I looked at vinyl, thinking that's the shiz. xD Sure, with reduced bass, reduced treble, and reduced stereo possibilities. I even started suiting my music to a worse format, just because songs sounded good on vinyl - because at least physics made it impossible for people to do poor mastering. Then it struck me - after some research about human hearing, too - that all these modern standards don't do the trick either, and that we don't have to listen to what others say; how about listening to our ears for a change? "Listen to your ears." I know it sounds stupid, but you know what I mean... So now I really am for this new standard idea that might get that desired perfect sound.

To conclude, I say: don't compete on loudness. Who do you make music and master for? The competition, or the people who will listen to the music? Be competitive by surpassing your own capabilities. (Looks like I'm turning into Bruce Lee now, huh. Time to go to bed.)

RobC
Posts: 1832
Joined: 10 Mar 2018

21 Mar 2018

normen, the way I think of it is rather pre-mastering good audio tracks to begin with.

Proper equalization can make even noise sound relaxing, though, if all we get for mastering is some shiz. ;-)

normen
Posts: 3431
Joined: 16 Jan 2015

21 Mar 2018

Here is what, in my experience, happens when a proper track is mastered in a mastering studio, boiled down:

A guy sits in his treated room in front of his very accurate speaker system that he has heard thousands of tracks on. He puts on your track, instantly hears that it has a wee bit more bass than most productions these days, and corrects that using an EQ. Then he uses a limiter that maybe works a bit on the snare and bass drum transients, but overall not much at all, to get to the desired output volume (i.e. just below 0 dBFS in most cases). He listens to the track a few times and then bounces it.
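The "small EQ nudge, then limit up to just below 0 dBFS" step could be caricatured like this. A real limiter uses look-ahead, attack and release; this instantaneous sketch only shows the level arithmetic of makeup gain plus a hard ceiling.

```python
# Toy instantaneous "limiter": makeup gain followed by a hard ceiling.
# Real limiters shape gain over time (look-ahead/release); this sketch
# only illustrates the level arithmetic.

def limit(samples, gain_db=3.0, ceiling=0.98):
    gain = 10 ** (gain_db / 20)  # dB to linear amplitude ratio
    return [max(-ceiling, min(ceiling, s * gain)) for s in samples]

mix = [0.1, -0.5, 0.8, -0.2]
# After +3 dB, only the 0.8 peak exceeds the 0.98 ceiling and is clamped.
print(limit(mix))
```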

Here is what happens when somebody comes to a mastering studio with a "track from hell" (and the engineer has for some reason no chance to simply reject the job):

The engineer tries to squish the track into submission with a multiband compressor and limiter to have it at least sound same-y all the way through. Then he tries all kinds of tricks to get the parts that sound at least acceptable to "spread out" into the rest, using things like overtone generators (getting "good" bass and mids into the highs) and sub-bass generators (getting mids into the bass), and might even apply a "listening room" reverb with a very short tail to somehow make the inconsistent depth staging sound coherent. He will try very steep notch EQs to get at least some clarity into the important ranges; he will try even more stuff to at least give some impression of "change for the better" -- but the reality is that you can't polish a turd.

Here is what happens when somebody tries to master at home after watching YouTube videos for a week:
He spends 800€ on plugins that were mentioned in the videos and creates a mastering chain containing ALL the plugins \o/ ...he then hears that his track sounds about 800€ better (which it probably doesn't ;))

Tarekith
Posts: 4
Joined: 21 Mar 2018

21 Mar 2018

LOL, that's probably not far off. :)

RobC
Posts: 1832
Joined: 10 Mar 2018

22 Mar 2018

That's a gray area between sound restoration and mastering. Sound engineering work fits that definition more.

It is common for some big professional studios to buy a huge SSL console "for the show", where it's not even hooked up. I guess that's the elder, next-level YouTube mastering engineer. Still better than the 15-year-old self-proclaimed mastering engineer with the pirated Fruity Loops and a cheap hi-fi system who's like "muh stewdeeoh".

I'm really not for generating sound into a finished mix. I understand your points, and the use, but my philosophy is that what wasn't there shouldn't be added; at most, what's buried should be dug up. It's the same with the whole "sterile" sound thing - where some engineers use expensive analog gear and try to convince you that the added distortion, hum, and noise is so very desirable. I'm like: no. Make the mix sound so good that it doesn't need artificial sound to fill up the stereo field or the tone. Tape noise and vinyl noise did add a bit of extra depth, and some glitch effects. With the clean digital world, there's simply more room - which is barely used up in the problematic cases. But if there's no way to reject... figures. Gotta communicate with the client about what they really want, and let them know the possibilities, before making drastic changes.

I mean, how can one stand that? It's practically fooling the client with some added effect. And when people ask for loudness-war trash?
I'd go nuts before I knew it. Of course, I like challenges; they inspire me to develop interesting sound-processing methods. ... Okay, I can't decide if it's fun or annoying - but it's definitely interesting.

P.S. In my opinion, equalization to the engineer's hearing should be included in the treated-sound-system definition. A true response sounds better than anything.

RobC
Posts: 1832
Joined: 10 Mar 2018

22 Mar 2018

So, I've been thinking... My multi-band soft clipping (which is sort of an immediate limiting, without any release time affecting the dynamics) wouldn't work with the inverted equalization, because the audio levels would be the same as white noise. Now, the point would be that it takes care of extreme peaks equally, in a "fair way", in every frequency band. Thus, before doing so, each band has to be set to the audio levels of sine waves. After all, every sine wave has the same true peak; only the wavelength is different.

Simplifying:
Multi-band distortion will only affect every band equally (regarding true peak, not average audio levels) if we're talking about sine waves.
White noise won't be equal, because it's based on average audio levels, so there are drastically different audio levels in each band.
Now I wonder what noise would result if I equalized a noise sample to sine-wave levels. Hold on... could it be gray noise? If so, then why on earth do I even consider changing the tone of a sound that is already equalized for gray noise (meaning human hearing)...

In theory...

Okay, err, help?

RobC
Posts: 1832
Joined: 10 Mar 2018

23 Mar 2018

Technically, yes. Though let's not forget, sine waves only sound equally loud if the sound system has previously been calibrated to your hearing.

People usually get confused at first by what I'm saying - it could be that some information I leave out (like the above) is not as obvious to everyone - but after a while, it usually makes sense to everyone.

Looks like I'm left alone with solving the problems and ideas I come up with.

And I wonder if you know
How it really feels
To be left outside alone

Ah, damn what a good remix there was of that song! :P

Sarcastic butt-hurt thanks for not caring!



xD
