What is the best way to render audio at the highest quality?
Since 10.3 and the possible delays in feedback loops, I have come back to a few questions where I still don't have proper answers, and I hope some people here can help:
How can I get the best quality when rendering the final song?
Does it make sense to increase the sampling rate, reduce the buffer size, and add dither if I want to create a 44.1 kHz audio file?
In detail, for mix down/bounce/render:
1. How does sampling rate affect this?
2. How does dither affect this?
3. How does buffer size affect this?
4. What else do I need to consider to get the best possible audio quality through rendering and in the final audio file?
Reason12, Win10
Loque wrote: ↑17 Apr 2019
Since 10.3 and the possible delays in feedback loops, I have come back to a few questions where I still don't have proper answers, and I hope some people here can help:
How can I get the best quality when rendering the final song?
Does it make sense to increase the sampling rate, reduce the buffer size, and add dither if I want to create a 44.1 kHz audio file?
In detail, for mix down/bounce/render:
1. How does sampling rate affect this?
2. How does dither affect this?
3. How does buffer size affect this?
4. What else do I need to consider to get the best possible audio quality through rendering and in the final audio file?
The honest answer is: it doesn't matter. If it sounds good, it is good.
As for my personal preferences: I never use a sample rate over 44.1 kHz (not worth it), I dither at export, and I keep the buffer size the same as what I've worked with in the song. The key is that if I don't change anything, I know it'll sound the same when I export, and then I can focus on making it sound the way I want without worrying.
If you're a crazy person: always work at 192kHz, 64 sample buffer, never export to 44kHz to begin with, only sell your music to dogs and Pono-users. Or sell your computer, move out in the woods, only make music using natural physical objects where the only latency is the time it takes your brain to tell your arms to hit the two sticks together.
Whenever there's a thread like this, I direct your attention to this video: https://www.xiph.org/video/vid2.shtml
You forgot to add "...and work on a 12'' 4K screen"
But fret not - I've a new computer and won't complain about that for a bit
Some plugins sound better at 96kHz. I generally don't work at this setting because it puts a lot of strain on the CPU, but there are reasons to work at 96kHz. It's not that it's just for unnecessarily high quality audio. In Reason, CV resolution is also affected by audio rate, so there is another reason someone might want to work at a higher rate. I don't think many people are publishing at 96kHz but it can sometimes make sense to export at 96kHz and downsample in an editor with good conversion.
Some hardware might also require working at a specific sample rate, such as the Access Virus or Roland's Aira series. So a change of hardware might impact things. I think this issue is important enough to warrant an honest and clear blog post from Propellerhead, talking about how the changes can affect things. There are lots of reasons to start messing around with audio settings in Reason even if you only use REs and stock devices.
Loque wrote: ↑17 Apr 2019
This means in detail for mix down/bounce/render:
1. How does sampling rate affect this?
2. How does dither affect this?
3. How does buffer size affect this?
4. What else do I need to consider to get the best possible audio quality through rendering and in the final audio file?
I will try to answer these, as I don't think they have been answered yet.
1. At a given buffer size, the delay is twice as long at 44.1 kHz as at 88.2 kHz (and roughly twice as long as at 96 kHz). This is because buffer size is measured in samples, and a 96 kHz stream 'plays' those samples at a faster rate.
2. Dither is for bit depth, and is unaffected by the changes in 10.3
3. Buffer size affects CV and automation timing. If your CV triggers or automation needs to be very precise then you might notice delays at larger buffer sizes. Chaining CV triggers can cause noticeable delay.
4. General audio quality is unaffected by the changes in 10.3.
You can try some experiments by putting the buffer size to a ridiculously high 4,096 samples and switching sample rates. Trigger via CV alongside MIDI and with some CV feedback loops and you'll get an idea of how things are working.
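To put the buffer-size point in numbers: the latency of one buffer is just its size in samples divided by the sample rate. A minimal sketch in Python (the helper name is mine, not anything from Reason):

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Time one audio buffer takes to play through, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# The "ridiculously high" 4,096-sample buffer from the experiment above:
print(round(buffer_latency_ms(4096, 44100), 1))  # prints 92.9
print(round(buffer_latency_ms(4096, 96000), 1))  # prints 42.7
# Same buffer, roughly double the sample rate => roughly half the delay.
```

This is why the same buffer setting feels about twice as laggy at 44.1 kHz as at 96 kHz.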
use your ears.
beyond that, if you’re bothered by the potential for latency with the new settings, just dial everything in and bounce to a track at 64 sample buffer. then you can bump up to higher buffer settings for performance.
Boombastix:
MattiasHG wrote: ↑17 Apr 2019
If you're a crazy person: always work at 192kHz, 64 sample buffer, never export to 44kHz to begin with, only sell your music to dogs and Pono-users. Or sell your computer, move out in the woods, only make music using natural physical objects where the only latency is the time it takes your brain to tell your arms to hit the two sticks together.
Whenever there's a thread like this, I direct your attention to this video: https://www.xiph.org/video/vid2.shtml
That is a good video, and it should probably be made a sticky link in this forum.
Just a comment, Mattias: as an official PH rep, it's probably better to leave sarcasm aside. It can very easily be misinterpreted.
Boombastix wrote: ↑17 Apr 2019
That is a good video, and should probably be made as a sticky link in this forum.
That video is great, but it doesn't apply to soft synths and effects; it only applies to band-limited signals. Many software plug-ins create signals with content above the Nyquist limit, and those frequencies alias back down as audible artifacts. Subtractor is horrible for this; Thor is pretty bad. Basically, if you're using an older synth that doesn't offer oversampling, it's not a bad idea at all to render its audio at a much higher rate (96k is 2x oversampling, 192k is 4x oversampling), then re-import it into a lower-rate project so Reason does a high-quality resample and prevents the bad-sounding aliasing.
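The render-high-then-downsample trick can be demonstrated with a naive (non-band-limited) sawtooth in numpy. Everything here is a toy model, not Reason's actual converter: an ideal FFT "brick-wall" resampler stands in for the DAW's sample-rate conversion, and the signal is exactly one second long so each FFT bin is exactly 1 Hz, which makes aliasing products easy to separate from true harmonics:

```python
import numpy as np

F0, FS, OS = 1000, 44100, 4  # fundamental (Hz), target rate, oversampling factor

def naive_saw(f0, fs, seconds=1):
    """Naively sampled sawtooth: partials above Nyquist fold back as aliases."""
    t = np.arange(int(fs * seconds)) / fs
    return 2.0 * ((f0 * t) % 1.0) - 1.0

def fft_downsample(x, factor):
    """Ideal low-pass at the new Nyquist, then decimate (FFT-based resampling)."""
    spectrum = np.fft.rfft(x)
    keep = len(x) // factor // 2 + 1  # bins below the new Nyquist
    return np.fft.irfft(spectrum[:keep], n=len(x) // factor) / factor

def inharmonic_energy(x, f0):
    """Fraction of energy at bins that are NOT multiples of f0, i.e. aliasing.

    Assumes a 1-second signal so FFT bins fall exactly on integer Hz."""
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    mask = np.ones(len(mag2), bool)
    mask[::f0] = False  # exclude DC and the true harmonics
    return mag2[mask].sum() / mag2.sum()

direct = naive_saw(F0, FS)                                # rendered straight at 44.1 kHz
oversampled = fft_downsample(naive_saw(F0, FS * OS), OS)  # rendered at 176.4 kHz, then downsampled

# The oversampled render carries far less aliasing energy:
print(inharmonic_energy(direct, F0) > inharmonic_energy(oversampled, F0))  # prints True
```

The higher-rate render still aliases, but only for much weaker high-order partials, and most of those products land above 22.05 kHz and are removed by the downsampling filter.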
Data_Shrine:
ScuzzyEye wrote: ↑17 Apr 2019
That video is great, but doesn't apply to soft synths and effects; it only applies to band-limited signals. Many software plug-ins will create signals with content above the Nyquist limit, and these frequencies will alias as audible artifacts.
Yes indeed, and that makes Mattias's answer strange and misleading.
And why would any soft synth/effect have an oversampling option if it didn't change anything? It does, and it can be heard. I mean, by that logic, 12 kHz sounds the same as 44.1 kHz (yes, this is an exaggeration, but just to make a point). You can like the aliasing effect on some sounds, if that's what you're looking for; if not, it can sound better (i.e. smoother) at a higher sample rate.
And PH should think about adding a downsampling option at export.
@Loque: If you're interested in recording audio, and not just working with synths and samples inside Reason, then working at 48 kHz or above is important because you gain lots of headroom to mix louder more comfortably. Also, if you use for instance the iZotope Maximizer RE and activate dithering in the RE, then you should always DISABLE Reason's dithering while exporting audio.
Thanks for the hints and tips. I guess I have to consider higher sampling rates mostly for recording, and when I have any kind of audio-rate modulation (FM, AM, RM, ...) or need to deal with aliasing, distortion, or other frequency reflections/sidebands.
All this can also sound good and nasty at lower resolutions, so I need to check for myself what I like more. Using Zero with FM and its oversampling/undersampling is a good example of the impact on the sound.
The headroom is a good point too.
SoundObjects:
I think this guy has a good point.
Some people say they hear a difference in high frequencies between 44.1 kHz and 48 kHz.
I don't hear that.
CD quality (44.1 kHz at 16 bits) is the basis for mp3 and most streaming services.
Nearly all streaming services, and of course mp3, use lossy compression,
so that's the point where your sound gets destroyed.
watch this (and always dither):
http://productionadvice.co.uk/bit-depth-and-resolution/
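For the curious, the standard TPDF dither used when reducing to 16 bits is easy to sketch: add triangular noise of about one LSB peak before rounding, so low-level detail decays into benign noise instead of becoming hard quantization distortion. A rough numpy sketch under my own naming (real mastering tools may also apply noise shaping on top of this):

```python
import numpy as np

def to_16bit_tpdf(x, seed=0):
    """Quantize float audio in [-1, 1] to 16-bit ints with TPDF dither.

    The triangular noise (sum of two uniforms, ~1 LSB peak) decorrelates the
    quantization error from the signal, trading distortion for a low noise floor.
    """
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32767.0
    tpdf = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    q = np.round((x + tpdf) * 32767.0)
    return np.clip(q, -32768, 32767).astype(np.int16)

# A -60 dBFS sine: far below full scale, exactly where truncation
# distortion would otherwise be most audible.
t = np.arange(44100) / 44100
quiet = 0.001 * np.sin(2 * np.pi * 440 * t)
samples = to_16bit_tpdf(quiet)
```

The total error per sample stays below about 1.5 LSB (0.5 from rounding, up to 1 from the dither), but it is now noise-like rather than correlated with the signal.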
Buffer size doesn't affect rendering to an audio file.
Buffer size is relevant for live input/output on your audio interface.
Use a limiter and avoid inter-sample-peaks.
I set my limiter to -1 dB to avoid inter-sample-peaks safely.
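Inter-sample peaks are easy to demonstrate: a full-scale sine at one quarter of the sample rate, phase-shifted 45 degrees, never puts a sample on its crest, so a sample-peak meter under-reads by about 3 dB. Below is a rough sketch of a true-peak estimate via 4x FFT upsampling; real true-peak meters (per ITU-R BS.1770) use polyphase interpolation filters, this just illustrates the idea:

```python
import numpy as np

FS = 44100

def true_peak(x, oversample=4):
    """Crude true-peak estimate: FFT-upsample, then take the max |sample|."""
    spectrum = np.fft.rfft(x)
    padded = np.zeros(len(x) * oversample // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum          # zero-pad the spectrum (interpolation)
    return np.abs(np.fft.irfft(padded, n=len(x) * oversample)).max() * oversample

# Full-scale sine at fs/4 with a 45-degree phase offset: every sample lands
# at +/-0.707, so a sample-peak meter reads about -3 dBFS...
x = np.sin(2 * np.pi * (FS // 4) * np.arange(FS) / FS + np.pi / 4)
print(round(np.abs(x).max(), 3))  # prints 0.707 -- what the sample meter sees
print(round(true_peak(x), 3))     # prints 1.0   -- the waveform's real peak
```

This is why a limiter ceiling of -1 dB (or a meter that oversamples) is a sensible safety margin: the reconstructed analog waveform, and lossy codecs, see the 1.0 peak, not the 0.707 samples.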
Be aware of your target file format:
Do you master for Youtube, Spotify, iTunes?
Do you plan to render to mp3?
Start at recording time:
If you record via microphones, avoid any unwanted noise.
Do the best recording you can.
Avoid the "fix it in the mix".
Record to a click track to get the best timing.
Correct all timing issues.
On voices: Painstakingly correct every lousy pitch-error a singer makes.
On all instruments: Painstakingly adjust volume automation.
See what Pensado does only with automation:
And at last: Use reference tracks.
Boombastix wrote: ↑17 Apr 2019
That is a good video, and should probably be made as a sticky link in this forum.
Just a comment Mattias: As an official PH rep, probably for the better if you leave sarcasm aside. It can very easily be misinterpreted.
Fair enough, though I'm primarily a person. I have my quirks, my way of talking, my personality, my experience, opinions and passions. I don't always want to compromise that just because I'm talking about things related to what I do. In general, it's probably better to view all my posts as something written by a person, not a company.
Data_Shrine wrote: ↑17 Apr 2019
Yes indeed, and so it makes Mattias's answer strange and misleading. Why would any soft synth/effect have an oversampling option if it didn't change anything? It does, and it can be heard.
Well, speaking as a music maker primarily and not a PH employee: most of the time, most people won't hear a problem regardless, especially not in a song. Sure, playing really high notes on Subtractor has very, very audible aliasing, but that's also a synth from ~1999 played at the top of its range. I can even like that sound; it's part of its character, the same way some analog gear is loved because it had sucky parts (see the Juno-106 chorus, for example: super noisy). Additionally, I'm pretty sure a lot of soft synths are band-limited. It's both efficient and has great audible results for its intended purpose (i.e. making music). Even Eurorack modules like Mutable Plaits use band-limited synthesis, and that definitely doesn't sound bad in any way.
My personal opinion is that people think too much about what's scientifically correct or perfect, without thinking of the musical implications. Using Thor in a song at 44.1 kHz won't make your music worse, it'll likely sound great and I'd wager nobody will listen to it and think "jeez, great song but it's totally ruined by that synth line... I can hear the aliasing!". Heck, most people consume music via streaming platforms on headphones. It'd be a much bigger issue if it's an unbalanced mix, a super compressed master, an uninspired patch or even a lacking song structure. Know what I mean? If the song sounds good, it's good. I believe focusing on the musical result and not the tech removes a lot of anxiety in music making.
Then again, people are different, so in the end everyone is free to do and think exactly what they like. Just sharing my perspective!
my 2 cents:
If it is a song, people care about the lyrics and the emotions coming from the singer's voice. A human voice overrules every kick drum sound and every lead synth.
People who do not make electronic music themselves usually don't even know the meaning of "aliasing" in the context of DSP.
If you're listening to music from streaming services, the destruction done by lossy compression algorithms may be a problem soundwise. But try for yourself whether you can hear a difference. Trust your own ears!
And at last: If you care too much about sample rate and all that stuff, you waste a lot of time you could otherwise use for making music.
Mattias and Ahornberg hit the nail dead-on.
no listener hears music the way some audio engineers *think* about it.
at the point of consumption, every single mix decision you’ve made (or not made) becomes intentional. that aliasing you’re hearing now has purpose. it’s not a flaw, it’s an outcome.
Ahornberg wrote: ↑18 Apr 2019
my 2 cents:
If it is a song, people care about the lyrics and the emotions coming from the singer's voice. A human voice overrules every kick drum sound and every lead synth.
People who do not make electronic music themselves usually don't even know the meaning of "aliasing" in the context of DSP.
If you're listening to music from streaming services, the destruction done by lossy compression algorithms may be a problem soundwise. But try for yourself whether you can hear a difference. Trust your own ears!
And at last: If you care too much about sample rate and all that stuff, you waste a lot of time you could otherwise use for making music.
I find this pretty much right, but just wanted to point out that musicians and producers are also listeners.
In my case, I often ignore human singing and 95% of the time ignore the lyrics, because many times the sound of an instrument or synth makes me feel more than a human singer does. But maybe I am an android and just haven't found out yet.
Jackjackdaw:
On my first bounce of a track from Reason, I set the export to 16-bit/44.1 kHz. The result sounded markedly flat compared to the raw mix. I can't remember if dithering was switched on or not. My next bounce I did at 24-bit/48 kHz, and it sounded the same as the raw mix. My ears were happy. I don't so much care how mangled it gets down the line, as long as my master sounds exactly how I made it.
MattiasHG wrote: ↑18 Apr 2019
My personal opinion is that people think too much about what's scientifically correct or perfect, without thinking of the musical implications. ... If the song sounds good, it's good. I believe focusing on the musical result and not the tech removes a lot of anxiety in music making.
Make a pop-up with this every time Reason starts, with no option in Preferences to disable it.
Jackjackdaw wrote: ↑18 Apr 2019
My first bounce of a track from Reason, I set the export sample rate to 16bit 44khz. The result sounded markedly flat compared to the raw mix. ... My next bounce I did at 24bit 48khz, it sounded the same as the raw mix.
So I assume your audio interface is set to 48 kHz. Am I right?
Jackjackdaw:
I can't remember; I would check, but I'm away from home atm. Once I found something that worked for me, I just set it and forgot it.
I do think it's a bit odd to cling to the CD sample rate, though, when better quality is easily attainable with today's gear. I appreciate the argument about what the ears can hear. But if the headroom is free, why not take it?
diminished:
Jackjackdaw wrote: ↑18 Apr 2019
I do think its a bit odd to cling to the CD sample rate though when better quality is easily attainable with today's gear. I appreciate the argument about what the ears can hear. But if the headroom is free why not take it?
That's totally up to you, though. You just have to live with the fact that if you want your music to sound good on a CD (16-bit/44.1 kHz), you need to translate your mix to that medium.
Personally, I don't even bother with downsampling and such; I just export 24-bit/48 kHz and let the YouTube Studio upload converter handle the lossy conversion part. Works fine, and it's probably all I need for my (limited) use cases.
The Reason projects contain all there is, so if needed I can come back to those.
Jackjackdaw wrote: ↑18 Apr 2019
I do think its a bit odd to cling to the CD sample rate though when better quality is easily attainable with today's gear. I appreciate the argument about what the ears can hear. But if the headroom is free why not take it?
it’s a matter of diminishing returns. sure it’s *technically* better quality, but if the vast majority of listeners can’t tell the difference, then from a purely practical standpoint there is no difference.
to me, I’d rather have more headroom in processing power than (arguably) inaudible extra fidelity on the back end.
of course personal preference will always win out (and should). I just don’t personally understand the fidelity for fidelity’s sake approach.
I did some tests, and buffer size also affects audio export. Try it! Create a feedback loop with Kong and do some exports at different buffer settings.