Creativemind wrote: ↑04 Mar 2019
sleep1979 wrote: ↑04 Mar 2019
Isn't bounce in place essentially track freeze?
Short answer - no lol!
It's similar. Freezing a track renders that track to audio and replaces it (whether the source was MIDI or audio), which frees up CPU. The data and FX you used on that track are still there, just rendered to audio and hidden, and you can unfreeze later if you want to tweak further. I've never used it myself, but this is what I was told.
So with bounce in place you'd be doing essentially the same thing, but the bounce lands on a different track, and it wouldn't save CPU because the original track is still there, just with its clip(s) muted, which as far as I gather doesn't save CPU the way freezing does. If anyone more knowledgeable can add a more scientific/mathematical definition, it'd be interesting to know.
Bounce in place covers the baseline tasks of track freezing, but freezing does a lot more. First and foremost, it is completely transparent in the GUI: you know a track is frozen, but you don't have two tracks, you have one track in a frozen state. Also, any VST device on the track is unloaded, and the same goes for every insert in the audio path, so when you freeze a track its inserts are unloaded too. The DAW may still show representations of the VSTs, but it does that with placeholder images.
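As a rough sketch of the freeze workflow described above (the class and method names here are made up for illustration; this is not Reason's actual implementation), freezing could be modeled like this:

```python
# Hypothetical model of track freezing. Names and structure are
# illustrative only, not any real DAW's internals.

class Device:
    """A synth or insert effect that consumes resources while loaded."""
    def __init__(self, name):
        self.name = name
        self.loaded = True  # consumes RAM/CPU while loaded

    def unload(self):
        self.loaded = False  # the DAW keeps only a placeholder image


class Track:
    def __init__(self, devices):
        self.devices = devices
        self.frozen_audio = None

    def render(self):
        # Stand-in for an offline bounce of the track (MIDI or audio)
        # through its full device chain.
        return "rendered-audio"

    def freeze(self):
        # 1. Bounce the track through its device chain.
        self.frozen_audio = self.render()
        # 2. Unload every device; playback now uses the render only,
        #    which is what actually frees the CPU (unlike bounce in place).
        for d in self.devices:
            d.unload()

    def unfreeze(self):
        # Reload devices and discard the render so the track is editable again.
        for d in self.devices:
            d.loaded = True
        self.frozen_audio = None


track = Track([Device("synth"), Device("eq"), Device("reverb")])
track.freeze()
print(track.frozen_audio)                    # track now plays from the render
print(any(d.loaded for d in track.devices))  # False: devices are unloaded
```

The key difference from bounce in place is step 2: the devices are actually unloaded rather than left sitting idle on a second, muted track.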
Frozen tracks are usually read-only: you cannot edit any parameter, whether the MIDI data, the generated audio file, or other audio settings, except for some mixer controls like faders and aux sends.
To edit the track you unfreeze it; then you can not only work on its parameters but also change the MIDI data if, say, a melody isn't right.
Bounce in place is not completely perfect, because even though no audio or MIDI is being played, the devices are not unloaded. That means a device on standby still uses CPU. Audio processors "know" when audio passes through them and can reduce their processor use to near zero when it doesn't, but that can't be done for MIDI devices: when you're playing from a controller, the device never knows when you're going to hit a key or produce a note. Silence detection on audio devices is quite effective because the sequencer knows when audio is coming; even while recording with direct monitoring, you tell the sequencer you're recording and the device is informed audio is on its way. With MIDI it simply doesn't work like that, because the sequencer doesn't know when you're going to record or play live (especially since Reason lets you connect up to 4 MIDI controllers and map up to 64 sequencer tracks to hardware MIDI devices). This isn't a problem with Reason per se; all other DAWs work this way with regard to live audio and MIDI devices/VSTs.
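The audio-side optimization described above is a common technique sometimes called "silence bypass" or "smart disable": when the input buffer is silent, the plugin skips its DSP entirely. A toy illustration (a generic sketch, not how any specific Reason device works; threshold and class names are made up):

```python
# Generic sketch of silence detection in an audio effect.
# Real plugins also have to let effect tails (reverb, delay) ring out
# before bypassing; that detail is omitted here for brevity.

SILENCE_THRESHOLD = 1e-6  # arbitrary illustrative value

class GainPlugin:
    def __init__(self, gain):
        self.gain = gain
        self.blocks_processed = 0  # stand-in for CPU actually spent

    def process(self, buffer):
        # If the whole input block is below the threshold, output it
        # unchanged without running the DSP at all.
        if max(abs(s) for s in buffer) < SILENCE_THRESHOLD:
            return buffer
        self.blocks_processed += 1
        return [s * self.gain for s in buffer]

plugin = GainPlugin(gain=0.5)
plugin.process([0.0] * 64)        # silent block: DSP is skipped
plugin.process([0.2, -0.1] * 32)  # audible block: DSP runs
print(plugin.blocks_processed)    # 1
```

A MIDI instrument has no equivalent trick while a controller is attached: its input is a stream of unpredictable note events rather than an audio buffer it can inspect for silence, so it has to stay ready at all times.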
The other imperfect thing about bounce in place is that you always end up with a second track and a second mixer channel. That makes big projects (the ones that need freezing the most) hard to manage, and you can't delete the originating track unless you've completely committed to the bounce (especially with MIDI tracks).
I've been using Reason for 14 years now. I'm lucky that my projects don't make me bounce in place a lot. Projects with ReSpire and other heavy synths are usually the ones where I start bouncing in place early in the sequencing stage, and that's what you don't want: adding a couple of synths and already having to bounce, because in those early writing stages there's a lot of sound discovery and design going on. That doesn't happen much to me, so I can usually get through composing/arranging, recording audio instruments and vocals, and even mixing a song without a single bounce in place. I usually leave mastering for a separate project. If you want to do everything in the same project, you may need a better machine, especially if you use a lot of synths in the first place (and especially if many of them are ReSpire, Expanse and so on).