What is the best way to render audio in highest quality?

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
User avatar
dioxide
Posts: 1788
Joined: 15 Jul 2015

18 Apr 2019

To everyone saying things like "use your ears" and speaking about how music is more important than high quality production, I'd like to remind you that the subject of the thread is "...the best way to render audio in highest quality?"

The subject we are talking about is the best way to get good quality sound out of Reason. It isn't whether or not you should do it; that is down to each individual. But we can assume that the OP thought about this, and decided that they do want to do it, before asking the question. The only responses that belong here are ones advising on that subject, not ones telling the OP that what they want to do is pointless. I mean, a lot of comments might be right, or the right decision for a lot of musicians, but if you want to have a discussion on high quality audio, that's what this thread is about.

It's like asking for directions on how to get somewhere, only for the person you asked to reply "oh, you don't want to go there, I've been and it was a waste of time".

User avatar
Ahornberg
Posts: 1904
Joined: 15 Jan 2016
Location: Vienna, Austria
Contact:

18 Apr 2019

dioxide wrote:
18 Apr 2019
Ahornberg wrote:
18 Apr 2019


Buffer size doesn't affect rendering to an audio file.
Buffer size is relevant for live input/output on your audio interface.
I did some tests and it also affects audio export. Try it! Create a feedback loop with Kong and do some different exports at different buffer settings.
For 10.3 this is true; before 10.3, buffer size didn't matter.
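If it helps to put a number on why that test sounds different: a feedback routing that gets deferred by one audio buffer picks up a delay that scales directly with the buffer setting. A minimal sketch of the arithmetic, assuming exactly one buffer of delay per pass through the loop (my assumption, not a documented Reason internal):

```python
# Rough sketch: latency added to a feedback path if it is deferred by
# one audio buffer. The "one buffer per pass" figure is an assumption,
# not a documented Reason internal.

SAMPLE_RATE = 44100  # Hz

for buffer_size in (64, 128, 256, 512, 1024):
    delay_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>5} samples -> {delay_ms:6.2f} ms per trip around the loop")
```

At 64 samples that's about 1.5 ms per pass; at 1024 it's over 23 ms, which is easily enough to change how a Kong feedback patch sounds between exports.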

User avatar
guitfnky
Posts: 4411
Joined: 19 Jan 2015

18 Apr 2019

dioxide wrote:
18 Apr 2019
To everyone saying things like "use your ears" and speaking about how music is more important than high quality production, I'd like to remind you that the subject of the thread is "...the best way to render audio in highest quality?"

The subject we are talking about is the best way to get good quality sound out of Reason. It isn't whether or not you should do it; that is down to each individual. But we can assume that the OP thought about this, and decided that they do want to do it, before asking the question. The only responses that belong here are ones advising on that subject, not ones telling the OP that what they want to do is pointless. I mean, a lot of comments might be right, or the right decision for a lot of musicians, but if you want to have a discussion on high quality audio, that's what this thread is about.

It's like asking for directions on how to get somewhere, only for the person you asked to reply "oh, you don't want to go there, I've been and it was a waste of time".
this is a forum where people are free to chime in with their own opinions. there's nothing wrong with using such forums to attempt to dissuade people from their misguided audio mixing quests. :lol:

in all seriousness though, not everyone defines “high quality audio” the same way. I know many people who would define something that's musical but not pristine as having a higher audio quality than something that's pristine but not musical. there are plenty of recordings done on tape in the 50s and 60s, or on ADAT tapes in the 90s, which sound better than modern recordings done at 192kHz sample rates and 24-bit depth. the idea that you can somehow separate the audio heard on the back end from the recording fidelity on the front end just doesn't hold up.
I write music for good people

https://slowrobot.bandcamp.com/

User avatar
Exowildebeest
Posts: 1553
Joined: 16 Jan 2015

18 Apr 2019

I'd say, definitely disable that new 10.3 CPU-saving option if you want accurate CV timing on export (if what Dioxide says is correct about the export being tied to buffer settings) - you'd want that rendered the old 64-sample way if you're doing CV drums or something that requires proper timing. At buffer 512, at 48kHz, a 1/64th note is only 1500 samples... (according to one of these calculators: http://mp3.deepsound.net/eng/samples_calculs.php )
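For reference, you can sanity-check that figure without the calculator. A quick sketch of the math (the 1500-sample number works out if you assume 4/4 at 120 BPM, which is not stated in the calculator link):

```python
# Quick check of the 1/64th-note figure above (assumes 4/4 time at 120 BPM).

SAMPLE_RATE = 48000  # Hz
BPM = 120

quarter_note_s = 60.0 / BPM             # one beat in seconds
sixty_fourth_s = quarter_note_s / 16.0  # a 1/64th note is 1/16 of a beat
samples = sixty_fourth_s * SAMPLE_RATE

print(f"1/64th note at {BPM} BPM, {SAMPLE_RATE} Hz = {samples:.0f} samples")
# -> 1500 samples, so a 512-sample buffer is already about a third of that note.
```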

User avatar
Data_Shrine
Posts: 517
Joined: 23 Jan 2015

18 Apr 2019

MattiasHG wrote:
18 Apr 2019
Boombastix wrote:
17 Apr 2019
That is a good video, and it should probably be made a sticky link in this forum.
Just a comment, Mattias: as an official PH rep, it's probably for the better if you leave sarcasm aside. It can very easily be misinterpreted.
Fair enough, though I'm primarily a person. I have my quirks, my way of talking, my personality, my experience, opinions and passions. I don't always want to compromise that just because I'm talking about things related to what I do. In general it's probably better to view all my posts as something written by a person, not a company. :)

Data_Shrine wrote:
17 Apr 2019


Yes indeed, and so it makes Mattias' answer strange, and misleading.

And so, why would any soft synth/effect have an oversampling option if it doesn't change anything? It does, and it can be heard. I mean, if we think like this, then 12kHz sounds the same as 44.1kHz... (yes this is an exaggeration, but just to make a point). You can like the aliasing effect on some sounds, if that's what you're looking for; if not, it can sound better (i.e. smoother) with a higher sample rate.

And... PH should think about adding a downsampling option at export.
Well, speaking as a music maker primarily and not a PH employee: most times most people won't hear a problem regardless, especially not in a song. Sure, playing really high notes on Subtractor has very, very audible aliasing, but that's also a synth from ~1999 played at the top of its range. I can even like that sound; it's part of its character, in the same way some analog gear is loved because it had sucky parts (see: Juno-106 chorus for example, super noisy). Additionally, I'm pretty sure a lot of soft synths are band-limited. It's both efficient and has great audible results for its intended purpose (i.e. making music). Even Eurorack modules like Mutable Plaits use band-limited synthesis, and that definitely doesn't sound bad in any way.

My personal opinion is that people think too much about what's scientifically correct or perfect, without thinking of the musical implications. Using Thor in a song at 44.1 kHz won't make your music worse; it'll likely sound great, and I'd wager nobody will listen to it and think “jeez, great song but it's totally ruined by that synth line... I can hear the aliasing!”. Heck, most people consume music via streaming platforms on headphones. It'd be a much bigger issue if it's an unbalanced mix, a super compressed master, an uninspired patch or even a lacking song structure. Know what I mean? If the song sounds good, it's good. I believe focusing on the musical result and not the tech removes a lot of anxiety in music making.

Then again, people are different, so in the end everyone is free to do and think exactly what they like. Just sharing my perspective! :)

I agree with you, in that we can think too much in technical terms and not in musical ones. I wonder if it's because the line has been blurred between musicians, mixers & mastering engineers. A kind of anxiety brought on by too much technology.

I remember when I didn't know a thing about sample rates, bit depth, dithering... indeed, it really was less stressful to make music. Liberating even. Thanks for bringing that up. :puf_smile:

Jmax
Posts: 665
Joined: 03 Apr 2015

20 Apr 2019

dioxide wrote:
18 Apr 2019
To everyone saying things like "use your ears" and speaking about how music is more important than high quality production, I'd like to remind you that the subject of the thread is "...the best way to render audio in highest quality?"

The subject we are talking about is the best way to get good quality sound out of Reason. It isn't whether or not you should do it; that is down to each individual. But we can assume that the OP thought about this, and decided that they do want to do it, before asking the question. The only responses that belong here are ones advising on that subject, not ones telling the OP that what they want to do is pointless. I mean, a lot of comments might be right, or the right decision for a lot of musicians, but if you want to have a discussion on high quality audio, that's what this thread is about.

It's like asking for directions on how to get somewhere, only for the person you asked to reply "oh, you don't want to go there, I've been and it was a waste of time".
I think what people are saying is: don't be overly technical with your process, because a) it doesn't really matter, if it sounds good it is good; b) no one else can tell or cares anyway, they're just looking for a nice tune; c) it spoils a bit of the fun of the creative process. Just enjoy the ride and let go.

User avatar
jam-s
Posts: 3044
Joined: 17 Apr 2015
Location: Aachen, Germany
Contact:

20 Apr 2019

SoundObjects wrote:
18 Apr 2019
I think this guy has a good point ;)

Well, he also fell into the stair-steps trap. Let me state this in all clarity: no stair step will ever hit your ear. The laws of physics do not allow for any discontinuity there. Even the sharpest attack that can be made will not lead to an instantaneous jump in air pressure, or even just in the voltage on the line to the speaker, or even at the (pre-)amp. And having more intermediate values does not help in getting sharper attacks anyway. The Xiph.org video gives a much better and actually correct explanation.

Still, his main point is valid: don't sweat it. Sample rate is not going to matter much, especially if you still have a lot to learn about mixing.
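If anyone wants to convince themselves numerically rather than by watching the video, you can do the ideal (Whittaker-Shannon) reconstruction that the Xiph demo performs on real hardware. A minimal sketch of the math only, not of any particular converter:

```python
import numpy as np

# Minimal sketch of ideal (Whittaker-Shannon) reconstruction: the stored
# samples are discrete, but the signal rebuilt from them is smooth.
# No stair steps exist on the analog side.

fs = 8000            # sample rate in Hz (kept low so the sums stay small)
f = 1000             # test tone in Hz, well below Nyquist
n = np.arange(64)    # 64 stored samples of the tone
samples = np.sin(2 * np.pi * f * n / fs)

# Evaluate the reconstruction *between* the sample points, staying away
# from the window edges where the finite sum is truncated.
t = np.linspace(16 / fs, 48 / fs, 2000)
reconstructed = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

ideal = np.sin(2 * np.pi * f * t)
print("max deviation from a true sine:", np.max(np.abs(reconstructed - ideal)))
# The deviation is a fraction of a percent (limited only by the finite
# window), and the reconstructed curve is perfectly smooth between samples.
```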

jlgrimes
Posts: 661
Joined: 06 Jun 2017

20 Apr 2019

Loque wrote:
17 Apr 2019
Since 10.3 and the possible delays in feedback loops, I have again come to a few questions where I still do not have the proper answers, and I hope some people here can help:

How can I get the best quality when rendering the final song?
Does it make sense to increase the sampling rate, reduce the buffer size and add dither if I want to create a 44kHz audio file?

This means in detail for mixdown/bounce/render:
1. How does sampling rate affect this?
2. How does dither affect this?
3. How does buffer size affect this?
4. What else do I need to consider to get the best possible audio quality through rendering and in the final audio file?
I would think the buffer size is the biggest change, but it should only come into play if you use CV. 64 samples should give you the original sound.

Sampling rate, dither, bit depth should function the same as before.

Generally, 44.1kHz or 48kHz should be all the quality you need, especially if you are ending up in MP3. And at those rates, by using plugins with oversampling, you can experiment with higher sample rates on certain processes.

In some cases it doesn't make a difference, in others the oversampled version sounds cleaner, and in others you might even prefer the character of the non-oversampled one. A lot depends on the plugin design and what type of effect it is.
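If you want to see the difference in numbers, here's a rough sketch of the kind of aliasing that oversampling (or band-limited synthesis) is there to avoid. It's a generic illustration only, not how any particular Reason device or plugin is implemented:

```python
import numpy as np

# Rough illustration of aliasing from a naive (non-band-limited) oscillator.
# Generic example; not how any particular Reason device is implemented.

fs = 44100                     # export sample rate in Hz
f0 = 3000                      # a high fundamental, like a screaming lead
t = np.arange(fs) / fs         # one second of audio

# Naive sawtooth: contains every harmonic, including those above Nyquist,
# which fold back down as inharmonic alias tones.
naive = 2.0 * ((f0 * t) % 1.0) - 1.0

# Band-limited sawtooth: additive synthesis, harmonics stop below Nyquist.
n_harm = int((fs / 2) // f0)   # 7 harmonics fit below 22050 Hz
bandlimited = -(2 / np.pi) * sum(
    np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, n_harm + 1)
)

def level_db(signal, freq):
    """Spectrum level at `freq` in Hz, relative to the strongest bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    return 20 * np.log10(spectrum[int(round(freq))] / spectrum.max() + 1e-15)

# The 14th harmonic (42 kHz) aliases down to 44100 - 42000 = 2100 Hz,
# *below* the fundamental, where no harmonic should exist at all.
print("naive saw, level at 2100 Hz:       ", round(level_db(naive, 2100), 1), "dB")
print("band-limited saw, level at 2100 Hz:", round(level_db(bandlimited, 2100), 1), "dB")
```

The naive oscillator puts clearly audible energy at 2100 Hz (an alias of its 14th harmonic, roughly 23 dB below the fundamental), while the band-limited one only shows the noise floor there. Oversampling inside a plugin achieves the same end by pushing Nyquist far above the harmonics that matter before filtering and coming back down.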


And yes, for human beings, 44.1kHz or 48kHz is basically as good as it gets for recorded sound quality.
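On the dither part of the original question, here's what it actually does when you export at a reduced bit depth. A bare-bones TPDF dither sketch, the textbook approach rather than whatever Reason uses internally:

```python
import numpy as np

# Bare-bones TPDF dither sketch: what "add dither" means when exporting
# to 16-bit. Textbook approach, not Reason's actual implementation.

def quantize_16bit(x, dither=True):
    """Quantize a float signal (range -1..1) to 16-bit steps."""
    step = 1.0 / 32768.0                  # size of one 16-bit quantization step
    if dither:
        # TPDF noise: sum of two uniform randoms, +/- one step peak to peak
        noise = (np.random.uniform(-0.5, 0.5, x.shape) +
                 np.random.uniform(-0.5, 0.5, x.shape)) * step
        x = x + noise
    return np.round(x / step) * step

fs = 44100
t = np.arange(fs) / fs
quiet_tone = 0.0005 * np.sin(2 * np.pi * 1000 * t)   # a very quiet 1 kHz tone

for use_dither in (False, True):
    error = quantize_16bit(quiet_tone, use_dither) - quiet_tone
    print(f"dither={use_dither}: quantization error RMS = "
          f"{np.sqrt(np.mean(error ** 2)):.2e}")

# Without dither the error is correlated with the signal (harmonic distortion
# riding on the tone); with dither the error RMS is slightly higher, but it is
# a constant, signal-independent noise floor instead. That is why dither is
# added only at the final bit-depth reduction.
```

It only matters on quiet material and only at the final reduction to 16-bit; exporting at 24-bit or higher makes the question mostly moot.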
