Re: Reason 10 & above, 2 BRUTAL "benchmark" songs included
CPU: Ryzen 5950X (16 cores, SMT off - AMD's equivalent of disabling Intel Hyper-Threading)
Overclocking: PBO boosts speed to ~4.4 GHz when Reason loads the CPU
Motherboard: Gigabyte Aorus PRO X570
Memory: 4x8 GB Corsair 3000 MHz CL15
SSD: Samsung EVO 850 (Windows 10, version 2004)
SSD: ADATA SU800 (Reason 10)
Video: Asus RTX 2080
Monitor: LG G-Sync 144 Hz
Audio Interface: RME Fireface 400
Sample rate: 44100 Hz
Buffer: 1024 samples
Results:
Brutal #1 (original)
With "Render Audio using audio card buffer size" setting OFF => plays ~1 min 18 sec perfectly, then clicks and pops
With "Render Audio using audio card buffer size" setting ON => plays the whole song with no clicks, and I don't notice additional GUI lag - which is already pretty high.
Brutal #2 (see the first post in this topic to download the file)
First crackle at 3 min 1 sec - transport bar extremely laggy
M1 MacBook Pro (8 GB RAM) running Reason under Rosetta translation. Crackles start occurring at about 5 seconds. Looking at Activity Monitor, it seems Rosetta only allows translated programs to use 4 of the 8 cores, and this project shows Reason utilizing 15 GB of RAM. Half of that must be swapped to the SSD as virtual memory since this config only has 8 GB. Fun! It definitely was brought to its knees.
By contrast, my 2019 8-core 3.6 GHz iMac reaches the 2 min point before crackling. It's interesting, because when running a Logic Pro stress-test benchmark the M1 gets within 90% of the i9 9900K's results. I'll be curious to see how much of a performance uplift there will be when Reason is natively optimized.
Music is nothing else but wild sounds civilized into time and tune.
I presume you only tested with the 1st project, right? Can you also test with Brutal #2?
tronam wrote: ↑05 Jan 2021 (...)
Since getting a Mac Studio I've been curious to see how it compares to my previous baseline 2020 M1 MacBook Pro, which barely managed 5 seconds in the first benchmark. At least now it reaches 1 min 30 sec before I hear occasional stutters, but overall CPU utilization barely exceeds 30%. Reason is clearly struggling to properly multithread on M1, because in both benchmarks the stutters begin just as the first 2 performance cores start maxing out while it massively underutilizes all the others. Even the memory utilization is pretty small at just 12 GB, yet these projects take minutes to load and navigating the rack is incredibly laggy, nearing 1-2 fps. With so much available GPU power and high-bandwidth unified memory it's surprising to see it perform this poorly, even under Rosetta translation. All the other DAWs fared much better in that respect. When Reason is finally M1 native I'm sure we'll see a dramatic uplift in performance, but for now it's still a bit crippled outside of smaller, less demanding projects.
Music is nothing else but wild sounds civilized into time and tune.
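The pattern described in the post above - two performance cores pegged while overall CPU utilization sits near 30% - is the classic signature of a serial bottleneck in the audio processing graph. A rough Amdahl's-law sketch, with entirely made-up parallel fractions (this is not a model of Reason's actual scheduler), illustrates how a mostly serial per-buffer workload caps average core utilization no matter how many cores are available:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when only `parallel_fraction` of the work can be spread across cores."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Illustrative numbers only: if a long serial device chain keeps a big chunk of each
# buffer's DSP work on a single thread, ten cores deliver only a modest speedup,
# and average utilization across all cores stays low while one or two are pegged.
cores = 10
for p in (0.6, 0.8, 0.95):
    s = amdahl_speedup(p, cores)
    print(f"parallel fraction {p:.2f}: {s:.2f}x speedup, avg utilization ≈ {100 * s / cores:.0f}%")
```

On that reading, real-time playback would be limited by single-core speed on the heaviest serial chain, which would also fit the later observation in this thread that raising the thread count beyond a point stops helping.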
This is actually quite disappointing news - my brother recently got an M1 mini and was amazed by how it performed in comparison to his old MacBook Pro.
tronam wrote: ↑03 Apr 2022 (...)
He had Live projects that killed his MBP but performed well on the mini, running at about 25%, and after the native build of Live 11 was released the performance was even better, at about 17%.
I'm assuming that you are using R12?
Did you try earlier versions? R10/11?
Yeah, it's R12. I no longer have R10/11 installed, but I would expect them to perform similarly. At least we can rest assured knowing it will only get better moving forward. It's just a shame the wait will be another 3-6 months.
Billy+ wrote: ↑03 Apr 2022 (...)
Music is nothing else but wild sounds civilized into time and tune.
I'm not so sure about the timeline (3-6 months). Initially it was "planned" for January; now it's before the end of 2022, so that's 8 months, and I'm betting it will definitely need some improvements past the initial release. And let's not forget about the current multi-core options that work better when turned off.
I recently built a new PC because the one I built in 2016 (AMD FX8370) just never did perform well. I tried everything (including most of the common mid-range (~$500) rackmount audio interfaces) and it still sucked.
I decided to try out these benchmark tests and thought I'd post my results.
But first: Why are we benchmarking with such high latency?
In my experience any crappy computer/sound card can perform pretty well at 44.1 kHz and 25 ms worth of latency. Why is the bar set so low? Wouldn't it be a better gauge of a computer's/interface's performance capabilities if we tested at high quality settings, so people would know "OK, that combination of hardware can do the highest quality at the lowest latency, so it can definitely work for me"?
If you're doing any live recording, you need 5 ms or less, because that's the point where the latency starts to become noticeable when trying to play along with the recording. And THAT is where crappy computers/sound cards will shit the bed. That, therefore, should be the benchmark.
Furthermore, in order to max out systems at such easy-peasy low quality settings, your benchmark files have become these giant monstrosities that will nearly crash a computer just by loading the file, whereas if you tested at higher quality/low buffer settings, you'd hit the performance ceiling much faster and thus require much less "brutality" in terms of your benchmark files.
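For reference, the buffer/latency numbers being argued about come from one piece of arithmetic: one-way buffer latency ≈ buffer size / sample rate (an interface's real round-trip latency will be higher once converters and driver safety buffers are added). A small sketch, with the buffer sizes picked purely for illustration:

```python
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """One-way latency contributed by a single audio buffer, in milliseconds."""
    return 1000.0 * buffer_samples / sample_rate_hz

def buffer_for_latency(target_ms: float, sample_rate_hz: int) -> float:
    """Largest buffer size (in samples) that stays at or under a target one-way latency."""
    return target_ms * sample_rate_hz / 1000.0

for buf in (128, 256, 512, 1024, 2048):   # common ASIO / Core Audio buffer sizes
    print(f"{buf:5d} samples @ 44.1 kHz ≈ {buffer_latency_ms(buf, 44100):4.1f} ms")

# ~23 ms at 1024 samples is where the "25 ms" figure in this thread comes from;
# staying under 5 ms at 44.1 kHz needs roughly a 220-sample buffer, i.e. 128 in practice.
print(buffer_for_latency(5.0, 44100))     # ≈ 220.5
```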
-
Anyway, I never could get what I wanted out of my AMD FX8370 build (always been an AMD guy) and I decided the issue must lie somewhere in the chipset. I figured I'd just go build an Intel system (for the first time ever) and see how it compares. I did zero research ahead of time. I just went to newegg like I usually do and found the 2nd best Intel CPU I could find and built everything else around it. I later found that the internet is shitting all over this choice and sure, they're right. The AMD probably runs circles around this CPU. Oh well, don't care.
CPU: Intel i7-11700K
RAM: CORSAIR Vengeance LPX 32GB DDR4 3200 (PC4 25600)
Interface: Presonus 1824C (NOTE: same results using an M-Audio M-Track Eight)
MOBO: GIGABYTE Z590 AORUS ELITE
SSD: Intel 670p Series M.2 2280 512GB PCIe NVMe 3.0 x4 QLC
Regardless of whether I used Brutal #1 or Brutal #2 I got the same results. Also, the "Render audio using audio card settings" made no difference.
40 seconds before clicks and pops on either file (44.1k @1024).
-
TAKEAWAY:
What I learned from this experience is the audio interface you use makes almost NO DIFFERENCE if your computer just can't hang for whatever reason. On my FX8370 build, I tried the Mackie Onyx Blackbird, the Focusrite 18i20, the M-Audio M-Track Eight and the Presonus 1824c..... and NONE OF THEM could get me to the point of high quality with low latency. It was MADDENING trying all these different interfaces and all the different OS optimization tricks with zero success.
It wasn't until I built a new PC that all of a sudden the two interfaces I have left (Presonus and M-Audio) both work perfectly fine at highest resolution/lowest latency.
In summary, this i7-11700K build absolutely kicks ass. It is light years ahead of the FX8370 PC I built only 6 years ago, whereas my Windows XP machine that I built in 2006 could do 24-bit/96k with no latency ALL DAY LONG using a Delta 1010LT sound card. The only reason I ditched the XP machine was because Reason 8 (IIRC) stopped supporting XP. I only just retired my WinXP computer from file server duties a couple of weeks ago, and it was a very sad day because the old girl still worked fine and never let me down.
So after almost 8 years with my faithful i7 4790K, I just built a new rig for Reason:
CPU: Intel i5-12600K
RAM: TEAM GROUP 32 GB DDR4 3200MHz Dark Z Red CL16
Interface: Internal Asus motherboard audio, DX driver @ 1024 (really new build)
MOBO: ASUS PRIME Z690 P D4
SSD: Kingston NV1 500 GB M.2 NVME 2280
I ran the second one (Brutal 2): first crackles at 3:20, no HT.
Anyway, this is a really new build and I still have lots of tweaks to do, like activating XMP for faster RAM and Windows tweaks, but this looks very promising as I don't anticipate worse performance with my RME RayDAT's drivers!
EDIT 1:
With HT enabled, first dropouts appeared at 4:01. I already had good experiences with HT enabled on my i7 4790K!
Amazing! More tests to come, but man, Intel's 12th gen is amazing!
Cheers,
MC
Looks like an affordable build. Would be interested to know your CPU usage and power consumption if possible.
mcatalao wrote: ↑31 Aug 2022 (...)
I am looking to get as low power as possible. Does anyone know if the 65 W 12600 will perform similarly?
My results:
The Specs:
Dell Precision 3640 MT
Intel Core i9-10900K (10th gen, 10 cores) @ 3.70 GHz, Turbo 5.3 GHz
Dell Inc. 0D4MD1 (U3E1) motherboard
Samsung 2 TB SSD
Seagate Expansion Portable 1 TB external hard drive
16 GB DDR4 RAM
Asus PA278QV monitor (2560x1440 @ 59 Hz)
Intel UHD Graphics 630 (Dell)
Windows 10 Pro 64 Bit
Mackie Onyx Producer 2x2 Audio Interface
-------------------------------------------------------------------
2048 Samples without hyperthreading: 1:46 minutes
2048 Samples with hyperthreading on: 2:22 minutes
By the way, when hyperthreading is ON, the graphics lag for the whole song.
The purpose of this test is to take the system to its max, so it will load up to 100% if you let it go.
I can't tell you my CPU usage on normal projects with this machine yet; as I said, it's a new machine that I'm still tweaking and testing. But I'm sure it will be very low, because I have an i7 4790K that still runs well today, apart from projects with lots of VSTs, especially Ozone and such. I did a PassMark test of this CPU and it got a 28k-plus score against 8k for the i7 4790K. Things don't scale directly, but I had a project that didn't run (full of respire synths) and had a bunch of stuff converted to audio, and this machine does not pass 20% CPU and 2 DSP bars. So there's that!
Because of that, I wanted a stronger machine, and this was a good option.
There are some important differences between the i5 12600 and the i5 12600K. The 12600K is a 6P + 4E CPU, meaning 6 performance cores plus 4 efficiency cores (10 cores, 16 threads), while the plain 12600 has only the 6 performance cores, so in the end you get a 6-core, 12-thread CPU instead of a 10-core, 16-thread one. IMHO, at a 60 eur difference I wouldn't get the 12600, unless you really need to build a small machine with low power consumption. Another aspect is the processor frequency: the 12600 is clocked lower than the 12600K. So: worse performance because of fewer cores, and lower single-thread performance. Finally, the 12600K is unlocked, so you can get some more juice from the CPU if you overclock. You won't be able to OC the 12600.
I guess it is not a bad CPU, but don't let the similar model code fool you - they are completely different beasts. Do you have any reason to go for low power? In my case, I can only work in my studio 3 to 4 hours a day on average. The price difference between the two is marginal compared to the whole amount (plus, everything else you put in the PC will add to the bill).
About pricing, I spent around 760 eur. Stuff is a bit more expensive on this side of the pond. I could have saved some money, like putting in only 16 GB of RAM, and I could have reused a bunch of the old PC's parts so I wouldn't need to buy a power supply (I had to change the PSU to a 700 W unit so it would work with this PC), SSDs, and the case. That would probably have cut the price by 150 to 200 eur, but I want to keep that machine as a backup. If shit happens, I just move my audio card from one PC to the other and I can keep doing my stuff. But this build was not too expensive, ending up under 800 eur. Hope it lasts as long as the previous one!
Thanks for the information, you make a good argument and the K is tempting, but I'm still unsure. I know it may sound crazy, but times are tough and I'm currently trying to cut all my unnecessary bills. I was initially going to go for a cheap 2010 Mac Pro, but I see they max out at about 300 watts and idle over 100.
mcatalao wrote: ↑01 Sep 2022 (...)
I wasted £2500 on this 2015 i7 MacBook I am using now and have never been happy with it, so I want to do a bit of research before assuming a new Mac will do the job.
A sticky post should be made about recommended systems or systems to avoid.
If there is any way you can test power consumption and CPU usage in your next test, it would be helpful. Thanks.
OK, but seriously, I must be missing something. Why are we benchmarking at such a high/unusable latency? Who decided 25 ms is a good test point? What good is the data we're extrapolating from these tests when anything above 5 ms can't be played along with? If my computer/soundcard couldn't do better than 25 ms, it would be completely useless to me.
Are you a drummer? Because if you are, then I agree with you - drummers have a connection with time that makes them feel very small time gaps.
If not... you must be confusing the values; 25 ms is a good latency even for MIDI playing. Latency above 40 ms is quite nasty to play, though!
For piano I don't have a problem with 25 ms; to input MIDI with an EWI I like to play under 10 ms. I only pull the machine down to lower buffers when recording real audio and monitoring through the software. Hopefully I'll be able to go under 3 ms with this machine, but generally speaking I'm quite OK with 512 or 1024 samples.
That being said, for mixing, 25 ms and 1024 samples has been pretty much a standard! At 2048, as Hagen had, you start to see the sequencer move before the sound arrives at your monitors.
PS: Mind also that when you start mixing, total latency goes haywire. So you should have delay compensation on, and if you need to record stuff back, even if you have 0 latency on your card, you need to bypass things because there will be delays all over. On some more complex plugins this delay gets noticeable, and it is not intrinsic to your setup. Generally speaking you will need to disable the biggest offenders (for example mastering plugins on the master channel).
I understand you, but it might be more logical to cut on gas expenses as things go, since we're having such a crash in the market. Here we're talking about 10 eur/year. If you want to cut your bills, switch your lights off, kick your kids' butts to take shorter baths, use your car less, or invest in solar and heat pumps. What I mean is, you should cut on stuff that matters, not on stuff that is marginal and costs you 20 to 30% performance on your computer.
Fusion wrote: ↑02 Sep 2022 (...)
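The "10 eur/year" figure above is easy to sanity-check with back-of-the-envelope arithmetic; the extra wattage, daily hours and electricity price below are assumptions chosen for illustration, not measurements of either CPU:

```python
def annual_cost_eur(extra_watts: float, hours_per_day: float, price_eur_per_kwh: float) -> float:
    """Yearly electricity cost of an extra power draw while the machine is in use."""
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * price_eur_per_kwh

# Assumed: ~30 W more draw for the 12600K under a typical DAW load,
# 3.5 hours of studio time per day, ~0.25 eur/kWh.
print(round(annual_cost_eur(extra_watts=30, hours_per_day=3.5, price_eur_per_kwh=0.25), 2))  # ≈ 9.58
```

With those assumptions the difference lands in the same ballpark as the figure quoted, which is the point: a few hours a day of a modest wattage delta is single-digit euros per year.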
Thought it was time to revisit this subject, having just got a new computer myself (Mac Studio Max) and with the recent Reason 12.2.9 update.
One thing that doesn't feel 'right': SUPER long load times (need to get to the bottom of this). The first song took 2:26 to load, which was longer than it could play without errors (1:48 for the first song at the default settings, 1024 buffer, 44.1 kHz sample rate).
The second was another story altogether, taking over 30 minutes to load (yes, you read that correctly)! Once it finished loading I was surprised to be able to play it LONGER than the first one, clocking in around 2:13 before the first audible error, although on a song like this it's a bit difficult to tell for sure.
While it would be far more boring to listen to, the ideal test would be a single frequency sine wave which would make detecting any errors SUPER easy.
In fact, all you have to do is add a synth (Europa/Complex-1 both work well), generate a sustained sine wave, then solo it and hit play! The length of time before hitting an error appears to be similar, best I can tell. And despite my worrying it would present a lighter load on the CPU, either it made its first error 10 seconds SOONER, OR (more likely) the first error was 10 seconds earlier in both cases but I couldn't hear it in the music track (the FX didn't help - I wasn't sure if I was hearing effects or a CPU glitch!). I doubt (but leave open the possibility) that adding a single track and having it soloed added THAT much higher a CPU hit, so I'm leaning towards the second explanation.
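The sine-wave idea also lends itself to an objective check: bounce or record the soloed sine and scan the file for sample-to-sample jumps that a pure tone can't produce. The sketch below is only one way it might be automated - the soundfile library, the file name and the 440 Hz test tone are illustrative assumptions, not anything Reason provides:

```python
import numpy as np
import soundfile as sf  # any WAV reader would do; soundfile is just an example

def find_dropouts(samples: np.ndarray, sample_rate: int,
                  freq_hz: float = 440.0, tolerance: float = 0.5) -> np.ndarray:
    """Return times (seconds) where consecutive samples jump more than a clean
    sine of `freq_hz` ever could - the signature of a click, gap or glitch.
    (A glitch landing exactly on a zero crossing could slip through.)"""
    amplitude = np.max(np.abs(samples))
    max_step = 2 * np.pi * freq_hz / sample_rate * amplitude * (1 + tolerance)
    steps = np.abs(np.diff(samples))
    return np.nonzero(steps > max_step)[0] / sample_rate

audio, sr = sf.read("bounced_sine.wav")          # hypothetical bounce of the soloed sine
mono = audio if audio.ndim == 1 else audio[:, 0]
print(find_dropouts(mono, sr))                   # timestamps of suspected glitches
```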
Now for the fun part: what happens when I change Max Audio Threads from Default to 8 (the max on my system)? I get the same play time. At 7, my first error comes at 1:42. At 6 I only get to 1:27, and I feel I can safely assume it goes down from there. So I'll likely be leaving it at the default setting for the near future, unless I find some use case where lowering it helps other processes.
I am now curious how much better things will fare with an M1 native build, as they are already more than sufficient for my larger projects! Always nice to have power to spare…
Selig Audio, LLC
I brought up a similar point in my post a few above yours which nobody noticed/responded to.
selig wrote: ↑17 Oct 2022 (...)
The reason these benchmark test files are such ridiculously large monstrosities is that that's what it takes to stress a system at the ridiculously high (unusable, IMO) latency of 25 ms.
You pointed out one simple way of doing a better test. I had a different suggestion, which was setting the benchmark to an actually usable latency. Either way would get us to a better understanding of how a system will perform without these monstrosities. As it is, these current benchmarks are completely useless to me or, IMO, to anyone who needs to see how well a given system will record at real-world live recording latencies. In my experience, 5 ms is the maximum latency when trying to play/record something live. 25 ms tells me nothing useful about what a system is capable of, as I ranted about in the post quoted below.
sublunar wrote: ↑25 Aug 2022 (...)
I don't know enough about how to create a real-world test, but I too wondered about the high buffer - it just means it takes that much longer to get to "100%"! ;(
sublunar wrote: ↑18 Oct 2022 (...)
My suggestion wasn't about how to construct a proper test, it was how to best judge the breaking point accurately (which would apply to any chosen test method).
Selig Audio, LLC
Selig, does this not make you want to just build your own system for a fraction of the price, considering the i5 above did a lot better?
selig wrote: ↑19 Oct 2022 (...)
Seems everyone is saying how good the new M1/M2 is, but real-world tests seem to show they are not ready yet.
Fusion wrote: ↑20 Oct 2022 (...)
If everyone says the new M1 is really good, there must be some truth behind it...
Power draw of 20 watts when running a Reason project with tons of plugins, MIDI, and external audio in/out processing.
Completely silent operation.
The fastest SSD access times of any PC build out there that I know of.
Core Audio and not a single driver failure when running 10+ MIDI-USB external synths or hardware controllers, or a 20-year-old MOTU UltraLite.
4K display support out of the box over both HDMI and DisplayPort.
Tons of native M1 VSTs, AUs, and DAWs already there, such as Ableton, Logic, GarageBand, Digital Performer 11...
A quarter of the price of a similarly specced custom-built PC... (I got mine for 500 eur).
Video encoding in HandBrake down to seconds for large video projects, whereas on my 24-core PC it was minutes...
My 24-core PC still destroys my M1 mini in Reason (with no drivers and external USB audio), but the latest update shrank the gap significantly, and it is still running under Rosetta...
...and I have the basic 8 GB Mac mini. What other real-world figures do you need???
I build songs, not computers - and what if that "fraction" you mention was 2/1 the price (fractions work in both directions!)? I also want to run the OS of my choice, which is macOS.
Fusion wrote: ↑20 Oct 2022 (...)
We don’t yet know how good the M1 is with Reason because Reason is not yet optimized for M1.
I would suggest a real world test would be to test the M1 with M1 optimized software.
For the record, my post was saying how good the new version of Reason is, not how good the M1 chip is…jury is still out on that (see above)!
Selig Audio, LLC
I think the context for 1024 samples is that it is a good value for mixing, and for inputting MIDI with a mouse and then mixing. Also, I usually record piano and other synths at 1024 samples with no issues (though when playing EWI I prefer to work at 256 or lower, around 6 ms), but for me and for most of my MIDI recording, 25 ms is not that slow. I usually try to work with lower latency values when recording audio and monitoring, but other than that 1024 is a good balance between stability, performance and speed.
selig wrote: ↑19 Oct 2022 (...)
TBH I think it is a good common denominator for multiple applications. It is also important that people test at the same settings since Reason 10.4, because the sample buffer affects performance in Reason more than in previous versions.
Anyway, I too think the effects added are not helpful for checking the breaking point. I had to do several tests and learn the type of sound just to decide whether the project was breaking or it was simply dirt from the effects. :/
BTW, I did these tests again, and my values were pretty much the same.
What I feel in this new version is that at lower buffer settings everything is smoother, to the point I can work at 128 samples with no problem. Not with this project though. I haven't tested this project with buffer settings other than the recommended one.
Cheers,
MC
Well, I don't doubt it, but all the rave about it at some point was talking about hundreds of tracks with thousands of effects, and that's something that was already achievable with an i7 12700 or an i9 12900 at a fraction of the price (I tested my build, which is quite mid-range, and it sustained more than 500 tracks with everything on the SSL processing audio). I have yet to do similar testing in Cubase, but when I saw the first "raves" I took them with a grain of salt. Yes, it added stability, but what these guys were raving about using default Logic effects was rendered redundant by 12th gen Intels or even the latest AMDs. Oh, and toss an M.2 drive on PCIe 4 or 5 and the SSD access times are similar to any other. You are limited by the SSD speeds, not the CPU, PCIe bus or memory.
Re8et wrote: ↑20 Oct 2022 (...)
There's no reason to fight over this and start a new Mac vs PC rant, but these last 2 years have seen such a huge jump on every platform that anything we say today will be wrong, or at least very old news, tomorrow!
I also agree with Selig: Reason is not the best tool to compare performance before it is a native M1 app. Let's wait and see.
Just an FYI...
I had issues back in Reason 10 with load and export. It was "sort of" down to my ASIO driver and buffer sizes - a high buffer size slowed loading and exporting right down (stupidly slow). In the end it was a point update in Reason that sorted out the issue properly.
(Note: on the same system I had other audio devices that did not show the same issue.)