Combining the power of 2 computers for Reason
The only time I've seen such a setup work was with samplers hosted on one computer and the main DAW on another; the motivation for that particular setup was to keep huge sample libraries online all at the same time. It was a pain to work with (since I'm used to one integrated system), but the guy who owned the studio loved it.
Selig Audio, LLC
Yeah, that seems to be the only way.
It would have been useful for heavy tasks, such as processing 32-bit / 768 kHz audio, which allows tons of heavily destructive processing with digital sample-based synthesis before artifacts start building up.
One can see what remains of an 8-second note sample after a dozen rounds of resampling with various processing applied (especially FM effects on a sample): it can end up nasty, with fewer and fewer processing possibilities left, not to mention what happens when the finished sample is then mapped to keys 1-2 octaves higher or lower (the degradation is sketched below).
EDIT: Oops, the 32 / 768 thing is for Reaktor, and weird things. : )
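A minimal numpy sketch of that degradation, using a naive linear-interpolation resampler as a stand-in for whatever a given sampler actually uses (the 440 Hz sine, the two-semitone shift, and the pass count are all illustrative assumptions):

```python
import numpy as np

fs = 44100
t = np.arange(fs * 8) / fs                    # an 8-second "note"
x = np.sin(2 * np.pi * 440 * t)               # clean reference signal

# Repeatedly pitch the sample up two semitones and back down,
# resampling with naive linear interpolation each time.
ratio = 2 ** (2 / 12)
y = x.copy()
for _ in range(12):                           # "a dozen" resampling passes
    n_up = int(len(y) / ratio)
    shifted = np.interp(np.linspace(0, len(y) - 1, n_up),
                        np.arange(len(y)), y)
    y = np.interp(np.linspace(0, n_up - 1, len(x)),
                  np.arange(n_up), shifted)

err = y - x
snr_db = 10 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))
print(f"SNR after 12 round trips: {snr_db:.1f} dB")
```

A real sample with rich harmonics (or FM applied between passes) degrades much faster than this pure sine, because the interpolation error and aliasing land on top of existing partials.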
Because rendering is not real time. You could certainly use a CPU farm to offline render audio, but it wouldn't make sense because if your CPU can't play the audio in real time, you can't mix or overdub. And if you CAN play the audio in real time, then you can already render faster than real time, so there's no need for a render farm.
Selig Audio, LLC
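A back-of-envelope version of that argument (the numbers are invented for illustration; real renders also depend on disk speed and plugin behavior):

```python
# First-order approximation: offline render time scales with the
# fraction of the real-time CPU budget the project uses on playback.
song_seconds = 240            # a 4-minute song (assumed)
cpu_load = 0.60               # 60% of the real-time budget while playing (assumed)
render_seconds = song_seconds * cpu_load
print(f"~{render_seconds:.0f} s to render {song_seconds} s of audio")  # ~144 s
```

If the project needed more than 100% of the budget, playback would glitch, and the same math says the render would be slower than real time, which is exactly the point made above.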
Again, offline processing is totally different from typical music production. Also note that multiple CPUs would require advanced software to distribute the tasks and then re-integrate the results back into the live stream, both of which take considerable CPU cycles on their own - mostly negating any advantage of multiple CPUs in the first place. And there would be no advantage to multiple CPUs if the act of managing the data is going to cause skipping and gaps, right? That's the very thing you're trying to avoid in the first place! (A back-of-envelope timing budget is sketched below.)
Selig Audio, LLC
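To put rough numbers on that overhead (all figures here are illustrative assumptions, not measurements): at typical low-latency block sizes, a single network round trip to a second machine eats most of the per-block deadline before any actual DSP happens. A minimal Python sketch:

```python
sample_rate = 48000
block = 64                                   # samples per processing block
deadline_ms = 1000 * block / sample_rate     # time until the next block is due
lan_rtt_ms = 1.0                             # assumed wired-LAN round trip

print(f"block deadline:         {deadline_ms:.2f} ms")   # ~1.33 ms
print(f"left for DSP after I/O: {deadline_ms - lan_rtt_ms:.2f} ms")
# Larger blocks widen the budget, but only by adding back the very
# latency you were trying to avoid.
```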
Ah, I didn't realize they are used for rendering instead. And meanwhile, reading your second reply, I see it's not as simple as I thought.
It's just that I don't know what to expect with oversampling. Then again, processing a static, monophonic mono sample shouldn't be that demanding - at most when it comes to sample-based multi-operator FM processing. However, oversampling might weaken the destructive effect that happens with pitch "distortion" (raising a sample's pitch, then rendering/resampling that).
One word: Latency
ReWire
?
multiple licensed computers... got 9.5 and 10 separate... but never tried it yet
says 2 licenses
https://soundcloud.com/moneykube-qube/s ... d-playlist
Proud Member Of The Awesome League Of Perpetuals
To be clear, are you talking about real time or non real time processing here? They are two completely different things, especially as it applies to CPU resources.
I answered the question assuming real time was your goal, but your later comments seem to possibly imply offline rendering, no?
Selig Audio, LLC
You could probably buy an audio interface with MIDI I/O and treat the laptop as an external instrument. Once you start reaching the limit on your main computer, you could start using the laptop externally.
Another method would be using Ableton Link (I believe Reason is compatible): as long as the PCs are on the same Wi-Fi network, you can compose independently on both machines and export the audio into the other computer.
Not quite what I wanted. I sort of thought that multiple devices could combine their 'brain power'. But yeah, it's like two people working together: they can cooperate, but that doesn't mean they'll be twice as intelligent as one person.
It sort of has rendering parts, yes, but I will need to hear what's going on in real time during sound design. If the computer can handle the audio in real time, then rendering 8 seconds of it won't take much time.
Also, thank you for your input and help, as well as everyone else's!
While a useful idea/solution, this was more about trying to make two computers effectively do the same task in real time - especially handling very heavy oversampling.
By the way, when it comes to oversampling, I won't need to buy a DAC that supports 768,000 Hz, right? I mean, from what I've read in other threads, it seems nothing more than 44,100 Hz is needed for the digital-to-analog conversion itself.
Technically, you can quite easily get down to single-digit millisecond latency with enough bandwidth to send audio between computers on a local network. A stable clock between the computers is another problem, but Ableton Link or similar is one solution for that (the drift involved is sketched below).
I think the answer is that there isn't strong demand for it, so no DAW developer has spent development resources on it.
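To put a number on the clock problem: two converters that are both nominally 48 kHz still run from independent crystals, so they drift apart unless something (word clock, or Link-style resampling) keeps them aligned. A quick sketch, assuming a typical crystal tolerance:

```python
sample_rate = 48000
tolerance_ppm = 50          # assumed crystal tolerance, parts per million
drift_per_minute = sample_rate * tolerance_ppm * 1e-6 * 60
print(f"~{drift_per_minute:.0f} samples of drift per minute")  # ~144 samples
# After a few minutes the two streams are audibly out of alignment
# without continuous resampling or a shared clock.
```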
submonsterz
I had no problem at all with two computers linked via Ethernet/crossover; no timing issues etc., it worked a charm. The only roadblock was the multiple-licence bullshit on my favourite plugins: I won't pay twice for the same plugins I already own just to use a feature Reaper has offered for years, and which works flawlessly on my end. And Peter, as for saying there's no demand for it: there's plenty of demand, people just won't be shafted by the multiple-licence bullshit that comes with it. So it's not about demand, it's the greed that keeps people from raving about it. I know many people who would love the feature for our favourite audio applications and the growing number of computers left over from upgrading.
I've certainly been thinking about combining Reason with other sequencers. I love hardware, so just minutes ago I looked at an MC-500 from 1987; I just love it and I want to get it if it's still available. I've got tons of sequencers, but they are just a means to capture whatever might pop up, so why not. Just about all of my hardware keyboards are workstations nowadays, but seeing a demo of this one made me miss old Junos. But Reason can be used as a multitimbral synth module, and it can be exciting to try this out in order to create things in a different way.
Hmm, so if I want to make use of the 768,000 Hz oversampling in Reaktor, I'll need such a DAC after all? Because so far, most threads, blogs, and forums said that a 44,100 Hz DAC is all one needs. Kind of confused there: could we make Reason work at 192,000 Hz while using a 44,100 Hz sound card? Maybe ASIO4ALL can do that trick...
Nah, google for Audio Myths...
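That matches how oversampling normally works: the high rate is internal to the processing, and the result is filtered back down before it ever reaches the DAC, so a 44,100 Hz interface is fine. A minimal scipy sketch of the idea (the 16x factor, tanh waveshaper, and test tone are illustrative assumptions; Reaktor's actual pipeline is its own implementation):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone at 44.1 kHz

os = 16                                # 16x oversampling (705.6 kHz internally)
up = resample_poly(x, os, 1)           # upsample; includes anti-imaging filter
shaped = np.tanh(4.0 * up)             # nonlinear processing at the high rate
y = resample_poly(shaped, 1, os)       # decimate; includes anti-alias filter

# y is ordinary 44.1 kHz audio: distortion harmonics that would have
# aliased are generated above the audible band and filtered out before
# decimation, so any 44.1 kHz DAC can play the result.
```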