Reason's Multithreaded Environment

This forum is for discussing Reason. Questions, answers, ideas, and opinions... all apply.
User avatar
Brosefski
Posts: 86
Joined: 08 Oct 2018
Location: Up north somewhere

19 Dec 2021

Hello!

This is a very technical question, and maybe it goes too deep into the weeds for this kind of forum, but I ran across this Stack Overflow discussion and it raised my question: how does Reason utilize multiple threads? What is the design of its multithreaded support, and would a multicore processor really help that much vs. just raw CPU power?

Here's the stackoverflow question:

https://sound.stackexchange.com/questio ... t-for-daws

This question is kinda old (it was asked 7 years ago and much has changed since then), but I'm sure the principles still apply. In programming, in the C# world for example, one can just spin off threads that the main thread doesn't need to keep track of, or build out async methods to scale for concurrent processing. Can the same be done for music processing? It seems the answer is at least a partial no, because there would be latency issues.

Anyway, maybe there's someone out there privy to the internals that can shed some light on this very interesting question (at least to me :D ).
:reason: :recycle: :re:

User avatar
dakta
Posts: 175
Joined: 30 Aug 2021

19 Dec 2021

I have no information to add, but I did wonder something similar: I don't notice from the performance metrics any particular utilisation of one specific core during playback, so it would appear some kind of threading is used?

It's a good question, as multithreading is one way of taking advantage of modern hardware; IMO single-thread performance has been a bit... stagnant. For example, I've just built a machine with a six-core Ryzen 3600, and the PC before it had a 2012 (yes, 2012) i5 3570K CPU. Based on benchmarks they are... not a million miles apart for linear processing :/

User avatar
jam-s
Posts: 3069
Joined: 17 Apr 2015
Location: Aachen, Germany
Contact:

19 Dec 2021

With audio processing you usually have pipelines like: source --> FX1 --> FX2 --> FX3 --> summing mixer --> output
Those pipelines have to run in sequential order (and, for a live input, on small batches of samples so as not to cause excessive latency). In this scenario multithreading can only help if you want to run multiple (heavy) pipelines in parallel.



Also this thread here might have some detailed answers on how Reason seems to deal with many cores: viewtopic.php?f=4&t=7507685

User avatar
Brosefski
Posts: 86
Joined: 08 Oct 2018
Location: Up north somewhere

19 Dec 2021

dakta wrote:
19 Dec 2021
I have no information to add, but I did wonder something similar: I don't notice from the performance metrics any particular utilisation of one specific core during playback, so it would appear some kind of threading is used?

It's a good question, as multithreading is one way of taking advantage of modern hardware; IMO single-thread performance has been a bit... stagnant. For example, I've just built a machine with a six-core Ryzen 3600, and the PC before it had a 2012 (yes, 2012) i5 3570K CPU. Based on benchmarks they are... not a million miles apart for linear processing :/
Thinking of the difference between GPU and CPU, the GPU doesn't appear to add much value to serial processing of audio. I notice that when I enable the multi-core processing options in Reason's preferences, I get worse performance, with static that is probably latency-related. Does multithreading make more sense for bouncing, then? It seems like a maybe, because bouncing is purely offline and doesn't rely on realtime playback, so perhaps there the processing can be spun off into multiple threads. Anyway, it's interesting that there hasn't been much gain in CPU processing power. I hear we're nearing the flat end of the curve where advances in classical computing slow down, probably going into diminishing-returns territory now.
:reason: :recycle: :re:

User avatar
Brosefski
Posts: 86
Joined: 08 Oct 2018
Location: Up north somewhere

19 Dec 2021

jam-s wrote:
19 Dec 2021
With audio processing you usually have pipelines like: source --> FX1 --> FX2 --> FX3 --> summing mixer --> output
The video you sent was pretty good and easy to understand. The relation between the CPU and DSP meters was definitely a highlight of that presentation. Although I know a bit about the basics of audio processing, I'm not an audio programmer, so I'm not sure if there have been any strides or special use cases for audio processing, or whether Reason does anything special. Side tangent: I did seriously consider making a soft synth at one point. I'll peruse that post to see what information I may glean, but I think, even with just a mild dig, it appears multi-core processing is at least limited.
:reason: :recycle: :re:

User avatar
jam-s
Posts: 3069
Joined: 17 Apr 2015
Location: Aachen, Germany
Contact:

19 Dec 2021

Inside of a synth plugin you can use multi-core processing to render multiple voices in parallel, and quite a lot of plugins do this already. On top of that, using AVX can give you some performance benefits (up to 4x IIRC) if you take it into account during the DSP design.
