Reason Memory Usage - is this normal?

I'm using a song with one Reason 9 pluck sound, three A-List Acoustic Guitarist instances (two of which are Combis, but with no heavy FX in them), one A-List Studio Drummer, and one vocal channel with about 4-5 effects.
When I start the session, without playing anything, Reason uses up about 3.4 GB of RAM (memory usage shot up from 2.1 GB to 5.5 GB).
For just this many devices, I find it abnormal for Reason to use so much memory. Anyone else find it weird? (And I don't know why Task Manager only shows 2.2 GB used, when it in fact shot from 2.1 to 5.5.)
- Carly(Poohbear)
- Competition Winner
- Posts: 2928
- Joined: 25 Jan 2015
- Location: UK
I have seen this on my projects; I noticed it more when moving to Reason 9, but to be fair this was not something I normally monitored when using Reason 7.
As Norman said, samples take up a lot of memory. As for the peak usage at the start - was the Calc indicator (on the transport bar) processing? Any samples loaded at a different rate will have to be converted to your current sample rate.
Doesn't look at all worrying to me; besides what others mentioned, there are also audio tracks in there. So unless you are running out of memory, which you aren't at 68% utilization, I think you're fine.
V9 | i7 5930 | Motu 828 MK3 | Win 10
This in general - looking at metrics you don't fully understand can be deceiving. Having seen many beginner programmers using the game engine I co-developed get caught up in profilers and try to solve non-problems because they misinterpret the displayed data, I have some experience with this. One should always consider that in most cases you are probably much like this guy:

eauhm wrote: Doesn't look at all worrying to me; besides what others mentioned, there are also audio tracks in there. So unless you are running out of memory, which you aren't at 68% utilization, I think you're fine.

..so best only start worrying if you have actual problems
I agree with normen here. It can get confusing.

normen wrote: This in general - looking at metrics you don't fully understand can be deceiving... so best only start worrying if you have actual problems
I'm probably not the right person to comment here; I've never bothered keeping my MCSE up to date, as I'm currently more of an HP-UX and Red Hat engineer.
V9 | i7 5930 | Motu 828 MK3 | Win 10
Hi guys, thanks for the replies.
While playing the song, the utilization jumps up to around 82-84% and, in addition, the CPU drops dead. Simply minimizing the Reason window gets me the "computer too slow to play" dialog.
Of course, without a dedicated system for music production I expected this, but even then, with so few instruments (and only one audio track) I expected lower usage. But if sample-based REs take up more memory, it makes sense. (In fact, now I'm curious how Windows and Reason - or any DAW - affect RAM as well as CPU usage during a session, especially given how Windows 10 now seems not to save much to virtual memory.)
If you can give me a bit of an explanation on that, it'd be great (say, comparing RAM and processor use with a synth vs. a sample-based instrument).
Thanks again!
Reason 12 | Preset Browser | Refill Hoarder
There are SO many factors here. First of all, DSP load is not CPU load. DSP load measures whether a buffer's worth of audio data can be processed in the amount of time that buffer represents, which is set by your buffer size. Say you have 256 samples of audio data at a sample rate of 44.1 kHz: every 5.8 milliseconds that data packet has to be delivered, or you will have dropouts (or the DAW telling you it couldn't do its job).
So a big factor (to a certain extent) can be your buffer size. I say to a certain extent because this time is "eaten up" by many things. First, data has to be moved around in memory. Then your CPU might have to process some other stuff, like transferring data to or from some USB device, before it can start to process your audio data. Only then does the CPU begin to process your data, which is when its raw power comes into play. As these other things often take a basically fixed amount of time, the share of the buffer time they consume shrinks as the buffer size grows, making raw power more important again.
So if we say in this example the memory copying and the USB "blocking" took 2 milliseconds, that leaves 3.8 milliseconds for the CPU to do raw processing. If you up the buffer size to 512 samples, that is 11.6 milliseconds; minus the 2 milliseconds, that leaves 9.6 milliseconds for raw processing - about two and a half times as much, while you only doubled the buffer size/time.
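The arithmetic above can be sketched in a few lines of Python; the sample rate and the 2 ms of fixed overhead are just the example values from this post, not measured figures:

```python
# Buffer-deadline arithmetic: how much time per buffer is left for raw
# CPU processing after a fixed amount of overhead (memory copies, USB
# transfers, etc.). The 2 ms overhead is the assumed example value.

SAMPLE_RATE = 44_100          # Hz
FIXED_OVERHEAD_MS = 2.0       # assumed fixed cost per buffer

def processing_headroom_ms(buffer_size: int) -> float:
    """Milliseconds per buffer left for raw CPU processing."""
    deadline_ms = buffer_size / SAMPLE_RATE * 1000.0
    return deadline_ms - FIXED_OVERHEAD_MS

for size in (256, 512):
    deadline = size / SAMPLE_RATE * 1000.0
    print(f"{size} samples -> {deadline:.1f} ms deadline, "
          f"{processing_headroom_ms(size):.1f} ms headroom")
```

Doubling the buffer from 256 to 512 samples doubles the deadline (5.8 ms to 11.6 ms) but more than doubles the headroom (3.8 ms to 9.6 ms), which is why larger buffers disproportionately help on systems with high fixed overhead.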
This is basically why it can be relatively complicated to set up a system for processing audio; just having a fast CPU won't guarantee success. This is also why, even when your DSP meter shows 100%, you will rarely see your CPU meter at 100% at the same time. This is a very typical case of "Yep, that's the engine", where people blame the DAW for stuttering without even using all the CPU power. The base problem here is that CPUs are general-purpose processing devices, designed for overall throughput, not for finishing work within a minimum amount of time (like DSP chips).
As for memory use, in general sample-based audio generators use more memory while algorithm-based audio generators use less. After all, with an algorithm-based generator you only need to store the algorithm itself in memory, which can be a simple line of code. Samples, on the other hand, take up lots of memory.
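A rough back-of-the-envelope comparison makes the asymmetry concrete; the format (one minute of stereo audio, 32-bit float) is an assumed example, not how any particular RE stores its data:

```python
# Why samplers eat RAM and synths don't: the storage for one minute of
# stereo sample data vs. the state a simple sine oscillator needs.
# Assumed format: 44.1 kHz, 32-bit float, stereo.

SAMPLE_RATE = 44_100
BYTES_PER_FLOAT = 4

# 60 seconds * sample rate * 2 channels * 4 bytes per sample
sample_bytes = 60 * SAMPLE_RATE * 2 * BYTES_PER_FLOAT

# A minimal oscillator only needs its phase and frequency
osc_state_bytes = 2 * BYTES_PER_FLOAT

print(f"1-minute stereo sample: {sample_bytes / 1024**2:.1f} MiB")
print(f"sine oscillator state:  {osc_state_bytes} bytes")
```

One minute of stereo float audio already comes to about 20 MiB, so a multisampled instrument with dozens of velocity layers per key easily reaches gigabytes, while the synth's "recipe" stays effectively free.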
In terms of CPU use, samples are generally easier on the CPU, as they basically just have to be copied to the audio output; often the only computation needed is a simple addition with the other samples playing at the same time. Algorithmic generators (like most synths), on the other hand, mainly use the CPU to create the audio in the first place, using far more complicated computations like sine/cosine and such, meaning they put a greater strain on the CPU. Impulse-response-based generators can be pretty intense on both sides, because impulse responses are basically samples but they also need complex computations to work.
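The difference in per-sample work can be sketched like this (illustrative Python only; a real engine would do this in optimized native code on interleaved buffers):

```python
# Per-sample work: playback is an array read plus an add, while
# synthesis computes every sample with math such as sin().

import math

SAMPLE_RATE = 44_100

def play_sample(sample: list[float], mix: list[float]) -> None:
    """'Playback': just add the stored sample data into the mix buffer."""
    for i in range(len(mix)):
        mix[i] += sample[i]

def play_sine(freq: float, mix: list[float]) -> None:
    """'Synthesis': compute each sample from scratch with a sin() call."""
    for i in range(len(mix)):
        mix[i] += math.sin(2 * math.pi * freq * i / SAMPLE_RATE)

mix = [0.0] * 256                 # one 256-sample output buffer
play_sample([0.5] * 256, mix)     # copy + add per sample
play_sine(440.0, mix)             # trig call per sample
```

Both loops touch every sample of the buffer, but the sampler's inner step is a memory read and an add, while the synth's is a transcendental function call, which is where the extra CPU load comes from.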
But! If we take into account what I said before sample based instruments can create high DSP load as well because although the CPU doesn't have to compute much there is much more data that has to be copied in memory, loaded from disk etc. This might not register in a CPU meter but will surely register in the DSP meter.
So as said, it's a relatively complicated matter - obviously more state-of-the-art machines will give you better results, but given how complex the whole system is (hardware, OS layers, drivers, software layers), even a very beefy machine can perform quite poorly - and vice versa.
So how much RAM do the bare-bones sequencer and rack take when running?
Reason 12, Gear4music SDP3 stage piano, Nektar GXP 88, Behringer UMC1800, Line 6 Spider 4 30
Here since Reason 2.5
Ehem, NO. If anything causes more RAM usage due to the new graphics infrastructure, it is only related to the screen resolution. The zoom level only plays a role once, when the graphics are rescaled for the selected zoom; after that it's just the same number of pixels as your screen resolution.
I also just checked with R12.6, and an empty rack + sequencer uses 853 MiB of RAM on my W10 system.