Turning off HyperThreading, less CPU usage

leonxx1983
Posts: 28
Joined: 02 Oct 2016

07 Jan 2018

Wow.

On my PC - an i7-4790K (overclocked to 4 GHz) with 16 GB RAM (Windows 10) - I have noticed that in Reason 10, when I turn off the preferences option "use hyper-threading audio rendering", there's a significant drop in CPU use on various VSTs, in particular the Roland Jupiter 8 & Juno 106, which have gone from "computer too slow" warnings to 1 bar of usage on big, complex patches (with a 64-sample buffer size). Switching multi-core audio on/off seems to make no difference.

Would love to hear from anyone who knows more about this. I imagine there are lots of users who have all these multi-this, multi-that checkboxes ticked thinking it improves performance, but it seems that's not the case; quite the opposite, in my case.

normen
Posts: 3431
Joined: 16 Jan 2015

07 Jan 2018

DSP load is not CPU load; it's the percentage of time needed to process the audio data in relation to the time the buffer represents. Hyperthreading adds timing inaccuracies which can increase the time needed to process, thus leading to higher DSP load.

TLDR: If CPU power is not actually your bottleneck, then enabling hyperthreading can do more harm than good.
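To make the distinction concrete, here is a minimal Python sketch of the relationship described above; the function names and numbers are illustrative, not Reason's internals:

```python
# Illustrative sketch only - not Reason's actual code. "DSP load" is the
# share of a buffer's real-time duration spent producing that buffer.

def buffer_duration_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Real-time length of one audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

def dsp_load_percent(processing_ms: float, buffer_samples: int,
                     sample_rate_hz: int) -> float:
    """Time spent filling the buffer vs. time available before it must play."""
    return 100.0 * processing_ms / buffer_duration_ms(buffer_samples, sample_rate_hz)

# A 64-sample buffer at 44.1 kHz leaves ~1.45 ms per buffer. If rendering
# takes 1.2 ms - no matter whether that time went to arithmetic, memory
# copies, or thread scheduling - the DSP load is ~83%, even while Task
# Manager shows the CPU mostly idle.
print(buffer_duration_ms(64, 44100))     # ~1.45
print(dsp_load_percent(1.2, 64, 44100))  # ~82.7
```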

leonxx1983
Posts: 28
Joined: 02 Oct 2016

07 Jan 2018

Oh yeah, good point, I was talking about DSP as if it were CPU load. Either way, there's a definite performance improvement.

So what about the "multi-core audio" option - how do I know whether this is helpful or not? I can't see any difference.

normen
Posts: 3431
Joined: 16 Jan 2015

07 Jan 2018

You will only get an improvement with multi-core when you actually have multiple separate channels that are not interconnected. And again, it will only give you less DSP load if the raw processing power of the CPU was the bottleneck in the first place - which it almost never is these days.
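As a rough sketch of why (hypothetical engine code; none of these names come from Reason):

```python
# Hypothetical engine sketch (the names are made up, not Reason's API):
# independent channels can be fanned out across cores; a device chain can't.
from concurrent.futures import ThreadPoolExecutor

def render_channel(channel_id: int) -> bytes:
    """Stand-in for rendering one buffer of one independent mixer channel."""
    return bytes(256)  # pretend audio data

# Four channels with no interconnections: safe to render in parallel.
with ThreadPoolExecutor() as pool:
    buffers = list(pool.map(render_channel, range(4)))

# Interconnected devices (synth -> effect -> mixer) form a chain: each
# stage needs the previous stage's output, so extra cores can't help here.
def synth(_):    return bytes(256)
def effect(sig): return sig
def mixer(sig):  return sig

signal = None
for stage in (synth, effect, mixer):
    signal = stage(signal)  # must run strictly in order
```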

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

normen wrote:
07 Jan 2018
And again, it will only give you less DSP load if the raw processing power of the CPU was the bottleneck in the first place - which it almost never is these days.
I would love to see a bit of clarification/elaboration on this. If the "raw processing power of the CPU" isn't usually the bottleneck, then what is? Obviously there will sometimes be situations (particularly with very large sample-based instruments, very large numbers of audio tracks, etc.) in which RAM and/or disk speed will be the major bottleneck, but other than that, isn't the CPU pretty much the only remaining place where bottlenecks would tend to occur? Thinking of the classic situation where we add another instance of Diva and suddenly the system can no longer keep up, that seems like a perfect example of running out of "raw CPU processing power"; how else would you explain it?

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

househoppin09 wrote:
08 Jan 2018
I would love to see a bit of clarification/elaboration on this. If the "raw processing power of the CPU" isn't usually the bottleneck, then what is? Obviously there will sometimes be situations (particularly with very large sample-based instruments, very large numbers of audio tracks, etc.) in which RAM and/or disk speed will be the major bottleneck, but other than that, isn't the CPU pretty much the only remaining place where bottlenecks would tend to occur? Thinking of the classic situation where we add another instance of Diva and suddenly the system can no longer keep up, that seems like a perfect example of running out of "raw CPU processing power"; how else would you explain it?
Counter-question: if the CPU is the bottleneck, how do you explain that it doesn't run at 100%? If the CPU is the bottleneck, why does a DAW run without crackles and with less DSP load at larger buffer sizes?

Again, it's about TIME, not power. Accessing memory or copying data over a USB bus doesn't seem like it takes a lot of time, but it adds up. Then, additionally, people turn down their buffer size to 64 samples, lowering the base time for the processing even further. You have to realize that it's only a few milliseconds we're talking about; in that frame, the added time for (possibly repeated) switching to a hyperthreading core can already make a difference.
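For scale, here are the per-buffer time budgets at a 44.1 kHz sample rate (an assumed rate; the math is the same at others):

```python
# Per-buffer time budget: everything (plugin math, bus transfers, memory
# copies, thread switches) has to finish inside this window.
SAMPLE_RATE = 44100  # Hz (assumed)

for buffer_size in (64, 128, 256, 512, 1024):
    budget_ms = 1000.0 * buffer_size / SAMPLE_RATE
    print(f"{buffer_size:5d} samples -> {budget_ms:6.2f} ms")
# 64 -> 1.45 ms, 128 -> 2.90 ms, 256 -> 5.80 ms, 512 -> 11.61 ms, 1024 -> 23.22 ms
```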

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

Sure, but if the CPU weren't a bottleneck, upgrading the CPU while leaving everything else the same wouldn't help, right? ;)

I get what you're saying--there are other things that matter too, and more performance can often be extracted from the CPU by optimizing other things. So you're quite right to point out that, in most cases, there are other bottlenecks, the CPU isn't delivering 100% of the power that it can, etc. But to summarize that as "the CPU is almost never the issue" seems obviously wrong. Surely this depends massively on the particular use case and workload?

For example, in response to your two counter-questions, for my own part I would say that it's not the entire overall CPU load that we would expect to get pegged at 100%, but rather the load for at least one of the cores. If any one core is maxing out its capabilities, then obviously that core will be a "weakest link" of sorts, since parallelization of workload can only be done to a limited extent... right? I certainly notice that in my projects I frequently reach the point where my system is pushed to its limits and at least one CPU core is getting pegged at 100%.

And for your second counter-question, I would again point out that that's not always true. In my case, I tend to work mostly at a buffer size of 256 samples, and in my most demanding projects I don't regain any significant DSP breathing room when I increase my buffer size beyond that, contrary to your premise. Which, again, suggests that I am indeed running into a CPU bottleneck, yes?

RandyEspoda
Posts: 275
Joined: 14 Mar 2017

08 Jan 2018

I fully agree with Normen; we're basically saying similar things:

viewtopic.php?p=373316#p373316

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

househoppin09 wrote:
08 Jan 2018
Sure, but if the CPU weren't a bottleneck, upgrading the CPU while leaving everything else the same wouldn't help, right? ;)

I get what you're saying--there are other things that matter too, and more performance can often be extracted from the CPU by optimizing other things. So you're quite right to point out that, in most cases, there are other bottlenecks, the CPU isn't delivering 100% of the power that it can, etc. But to summarize that as "the CPU is almost never the issue" seems obviously wrong. Surely this depends massively on the particular use case and workload?

For example, in response to your two counter-questions, for my own part I would say that it's not the entire overall CPU load that we would expect to get pegged at 100%, but rather the load for at least one of the cores. If any one core is maxing out its capabilities, then obviously that core will be a "weakest link" of sorts, since parallelization of workload can only be done to a limited extent... right? I certainly notice that in my projects I frequently reach the point where my system is pushed to its limits and at least one CPU core is getting pegged at 100%.

And for your second counter-question, I would again point out that that's not always true. In my case, I tend to work mostly at a buffer size of 256 samples, and in my most demanding projects I don't regain any significant DSP breathing room when I increase my buffer size beyond that, contrary to your premise. Which, again, suggests that I am indeed running into a CPU bottleneck, yes?
If the CPU accounts for 33% of the buffer time and everything else for 66%, then doubling your CPU power can only lower the DSP load by about 16 percentage points. And yes, with low buffer sizes these ratios really can shift that far. Only a third of the time is used for actually computing stuff!
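A quick check of that arithmetic, taking the 33/66 split as given:

```python
# Taking the 33/66 split as given: overhead is fixed; only the compute
# share shrinks when the CPU gets faster.
compute, overhead = 33.0, 66.0          # percentage points of the buffer time

new_load = compute / 2 + overhead       # doubling CPU power halves compute only
print(new_load)                         # 82.5
print((compute + overhead) - new_load)  # 16.5 points saved - not half the load
```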

The answer to the last question is a bit more complicated, but what I said still holds for lower buffer sizes; from a certain buffer size onward other factors come into play, that's true.

Most people go by the empirical data they have and just see that a new computer is faster / has less DSP load. They might not think about the fact that they exchanged the whole mainboard, or that the new processor might have different cache sizes, etc.

It truly isn't about raw processing power most of the time, especially when you're dealing with low-latency processing, which frankly CPUs aren't really made for - that's why relatively low-performance DSP chips are still used in audio. They have less raw power but can process much, much quicker and with lower latency.

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

normen wrote:
08 Jan 2018
If the CPU accounts for 33% of the buffer time and everything else for 66%, then doubling your CPU power can only lower the DSP load by about 16 percentage points. And yes, with low buffer sizes these ratios really can shift that far. Only a third of the time is used for actually computing stuff!

The answer to the last question is a bit more complicated, but what I said still holds for lower buffer sizes; from a certain buffer size onward other factors come into play, that's true.

Most people go by the empirical data they have and just see that a new computer is faster / has less DSP load. They might not think about the fact that they exchanged the whole mainboard, or that the new processor might have different cache sizes, etc.

It truly isn't about raw processing power most of the time, especially when you're dealing with low-latency processing, which frankly CPUs aren't really made for - that's why relatively low-performance DSP chips are still used in audio. They have less raw power but can process much, much quicker and with lower latency.
Once again, everything you say makes sense, but I'm still left wondering: what would be an example of a spec that more frequently bottlenecks performance than the CPU? In other words, if someone came to you and said "I keep running out of DSP in this project even at large buffer sizes; what should I consider upgrading on my machine to help with that?", and assuming you didn't know any additional details of that person's situation, what other things would you think of suggesting before a CPU upgrade?

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

househoppin09 wrote:
08 Jan 2018
Once again, everything you say makes sense, but I'm still left wondering: what would be an example of a spec that more frequently bottlenecks performance than the CPU? In other words, if someone came to you and said "I keep running out of DSP in this project even at large buffer sizes; what should I consider upgrading on my machine to help with that?", and assuming you didn't know any additional details of that person's situation, what other things would you think of suggesting before a CPU upgrade?
Anything else that takes time in a computer: copying data between the USB bus and memory, copying memory, servicing hardware interrupts, etc.

The only thing you can really do is have a current motherboard and fast RAM - and not clutter your computer with hardware you don't need, like that USB ball fondler, terabyte network connection and gaming GPU :)

Edit: And obviously look into your interrupts / DPC latency; that can also eat up a lot of time. It's basically the thing I'm talking about, worst case.

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

Okay... good food for thought. Thanks :)

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

Taking it back to the subject of this thread: I do find that turning HyperThreading on in Reason often buys me an increase of ~50%, or sometimes even more, in the number of identical (and, of course, non-interconnected) mix channels I can run before I hit my DSP limit. That presumably suggests that, at least for me, the CPU is a major bottleneck. I didn't think my setup and working conditions were that unusual, though...

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

househoppin09 wrote:
08 Jan 2018
Taking it back to the subject of this thread: I do find that turning HyperThreading on in Reason often buys me an increase of ~50%, or sometimes even more, in the number of identical (and, of course, non-interconnected) mix channels I can run before I hit my DSP limit. That presumably suggests that, at least for me, the CPU is a major bottleneck. I didn't think my setup and working conditions were that unusual, though...
Mhm, your issue wasn't pure processing power (of one core) in that case; it was that you had multiple signal flows (i.e. channels) which could be processed in parallel, which is of course much faster than doing it sequentially. You're not only using a new core, you're also using another cache, etc.

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

Ah, I see - by "pure processing power" you specifically meant per-core processing power. At a certain point it boils down to semantics, of course (i.e. if I had more per-core processing power I'd have less of a need to resort to features that parallelize the workload, etc.) but that does make sense. And a good point about the role of making more caches available and so on.

Of course that still means that "insufficient cores" is yet another way in which the CPU can indeed be the main limiting factor--if I had a non-HT, dual-core CPU as some users still do, I think it would matter quite a bit!

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

househoppin09 wrote:
08 Jan 2018
Ah, I see - by "pure processing power" you specifically meant per-core processing power. At a certain point it boils down to semantics, of course (i.e. if I had more per-core processing power I'd have less of a need to resort to features that parallelize the workload, etc.) but that does make sense. And a good point about the role of making more caches available and so on.

Of course that still means that "insufficient cores" is yet another way in which the CPU can indeed be the main limiting factor--if I had a non-HT, dual-core CPU as some users still do, I think it would matter quite a bit!
It's not endless though; the same rules apply. If you have a very low buffer size and just getting the data from the interface to memory/CPU and back already uses up most of that time, then no matter how many cores you have, you will only ever be able to decrease the load by the original amount that was due to actual computing.
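This is essentially Amdahl's law. A small sketch, with an assumed 30%/60% compute/overhead split chosen to mirror the earlier example:

```python
# Essentially Amdahl's law: the non-parallelizable overhead puts a floor
# under the DSP load no matter how many cores you add. The 30%/60% split
# is an assumed worst case for a very low buffer size, not a measurement.
def dsp_load_percent(compute_frac: float, overhead_frac: float, cores: int) -> float:
    """Idealized load when compute parallelizes perfectly but overhead doesn't."""
    return 100.0 * (compute_frac / cores + overhead_frac)

for cores in (1, 2, 4, 8, 1000):
    print(f"{cores:4d} cores -> {dsp_load_percent(0.30, 0.60, cores):5.1f}% load")
# 1 -> 90.0, 2 -> 75.0, 4 -> 67.5, 8 -> 63.8, 1000 -> 60.0 (the overhead floor)
```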

househoppin09
Posts: 536
Joined: 03 Aug 2016

08 Jan 2018

Quite so. As always, it will depend on the individual use case.

normen
Posts: 3431
Joined: 16 Jan 2015

08 Jan 2018

Right, note I am getting into details just to inform, not to tell you that you're wrong :)
househoppin09 wrote:
08 Jan 2018
(i.e. if I had more per-core processing power I'd have less of a need to resort to features that parallelize the workload, etc.)
Btw this is exactly NOT the case, which is the whole point of me differentiating here.

sot
Posts: 88
Joined: 03 May 2015

08 Jan 2018

I have hyper-threading OFF too; it works better on my CPU, an i5-4670K (but this CPU doesn't have Intel® Hyper-Threading Technology - just 4 cores / 4 threads).
:reason: 10

EnochLight
Moderator
Posts: 8405
Joined: 17 Jan 2015
Location: Imladris

08 Jan 2018

If I turn multi-core off on my machine, the vast majority of my projects can't be played back. With Hyperthreading on, my larger projects get better performance; my smaller projects get worse performance.
Win 10 | Ableton Live 11 Suite |  Reason 12 | i7 3770k @ 3.5 Ghz | 16 GB RAM | RME Babyface Pro | Akai MPC Live 2 & Akai Force | Roland System 8, MX1, TB3 | Dreadbox Typhon | Korg Minilogue XD

seqoi
Posts: 417
Joined: 12 Aug 2017

09 Jan 2018

I can relate to the OP's findings. From my understanding, it IS NOT ABOUT A SPECIFIC USE CASE.

If I turn on "use hyper-threading audio rendering" in Reason's settings, there's a significant CPU hit, and that is really a Reason issue, not a user issue. How did I get to that conclusion? Very simple.

I get these performance drops only, and I mean ONLY, when there are VST instruments and VST FX plugins in the Reason session.
If I turn Hyper-Threading off when there are VST plugins in the session, I get a massive CPU gain - I mean, everything is back to normal.

But read this: if I enable Hyper-Threading while there are ONLY Rack Extension and Reason native effects and instruments in the session, I get a performance increase (as it actually should work, globally). That tells me that my system is fine, and Reason is fine.

It's their VST implementation that is bad.

Some numbers - we're talking about a Haswell i7 here, overclocked to 4 GHz. For example, I have a session (1024-sample buffer, 44.1 kHz) running some VST instruments in Reason. The CPU in Windows Task Manager sits at 44%. The very second I enable HyperThreading in Reason, Task Manager shows 95% CPU utilization (wtf!?!?!) and Reason's DSP meter goes to RED. As soon as I disable HyperThreading in Reason, these figures drop back to 44%.

Just what the hell.

Speaking of pure REs (no VSTs in the Reason session): I can, for example, load 250 reverbs into a session and the CPU goes to RED. But if I enable HyperThreading, I can then load another 50-60 reverbs into the same session that was previously hitting RED.

That tells me they actually optimized everything and the program is normal; it's just that whenever we add a VST to the session, performance is crippled. I am still politely waiting for a fix, and they told me they are working on one.

Mind you, I spent around 4 working days on this: I tested and confirmed this same behavior on the desktop computer in the studio, the desktop computer at home, one HP notebook, one Lenovo notebook and 3 DIFFERENT audio interfaces - two USB ones and a PCIe one (Roland and Tascam + RME). ALL of these configurations exhibit the same behavior in EVERY setup or combination I tried (different sample rates or ASIO buffer settings).

Please don't tell me it's a specific user error; it is not. I also posted a few very obvious figures in the past showing VERY different, crippled performance. I am actually thinking of recording a video and putting it on YouTube so everyone can see...

On top of that, just two weeks ago I was reading a thread on Gearslutz (I think) where people reported the very same issue. I think someone from PH replied there that they are looking into it.

Windows 10 here, btw.

seqoi
Posts: 417
Joined: 12 Aug 2017

09 Jan 2018

normen wrote:
08 Jan 2018


It's not endless though; the same rules apply. If you have a very low buffer size and just getting the data from the interface to memory/CPU and back already uses up most of that time, then no matter how many cores you have, you will only ever be able to decrease the load by the original amount that was due to actual computing.
Normen, what you say is 100% technically true, but read my findings in the post above. The whole issue is (based on my findings) down to their VST implementation. As soon as there is no VST in the session, Reason behaves "like we expect it to" (for lack of a better description).

Trust me, I tried everything from 64 samples in the ASIO settings (the most CPU-expensive, I get it) up to 2048 samples, and it was always the same. Enabling Hyper-Threading while there are VSTs in the session adds a HEAVY CPU load. I get this same behavior on different computers/notebooks.

Needless to say, I have already observed a DRASTICALLY decreased VST plugin count compared to every other DAW out there, and I posted my findings with figures in the past. The thread exists. Others have done the same (in other forum places, though).

As I said, I will politely wait for a fix this year. If they do not deliver, I will ask for a refund, because I feel a bit cheated (Reason on its own is amazing, no doubt about that). They never said in their promotion (I was hooked on the whole VST thing, and Reason is really amazing) "we have VST now, but you cannot load even 50% of what you used to load in every other DAW, because the VST implementation is crippled". OK, my bad, I should have tried it before buying, but I was hooked like everyone else in the world. LOL.

The whole thing is actually forcing me to use REs instead of VSTs, but there is no contest in some situations. I need VST support, and I cannot go into every session thinking "oh, I can only load one of this plugin, because Reason's performance-hit error will pop up if I put more of these in the session".

Marco Raaphorst
Posts: 2504
Joined: 22 Jan 2015
Location: The Hague, The Netherlands

09 Jan 2018

So the advice is to put Hyper Threading OFF unless you have a very old computer?

Ahornberg
Posts: 1904
Joined: 15 Jan 2016
Location: Vienna, Austria

09 Jan 2018

There are also VSTs that perform better when multi-core is turned off in Reason preferences (e.g. 2CAudio Kaleidoscope). My approach is to choose the setting that works best (even if it goes against common sense) and go with it without overthinking.

Marco Raaphorst
Posts: 2504
Joined: 22 Jan 2015
Location: The Hague, The Netherlands

09 Jan 2018

Ahornberg wrote:
09 Jan 2018
There are also VSTs that perform better when multi-core is turned off in Reason preferences (e.g. 2CAudio Kaleidoscope). My approach is to choose the setting that works best (even if it goes against common sense) and go with it without overthinking.
The weird thing is: why do we need to do that kind of thinking? Why can't the computer decide for itself what type of processes to run?

Imo there's just one problem in all cases: audio glitches. A computer has a CPU, so it should use it. It should use it up to 100%, because that's the power it has. The only thing it should never do is start glitching.

I find it strange that for many decisions WE need to do the thinking. This makes computers not so smart...
