Combining the power of 2 computers for Reason

Have an urge to learn, or a calling to teach? Want to share some useful Youtube videos? Do it here!
RobC
Posts: 1970
Joined: 10 Mar 2018

13 Aug 2019

Yeah, so I want to know if there's any way to make both a laptop and a desktop PC work together? Running Reason for example (though Reaktor would be nice, too), combining their processing power, RAM, SSD/HDD, etc.

User avatar
boingy
Posts: 791
Joined: 01 Feb 2019

13 Aug 2019

No. No there isn't.

User avatar
selig
RE Developer
Posts: 12121
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 Aug 2019

The only time I've seen such a setup work was with samplers hosted on one computer, and the main DAW on another, and the motivation for that particular setup was to host huge sample libraries online all at the same time. Was a pain to work with (since I'm used to using one integrated system), but the guy who owned the studio loved it.
Selig Audio, LLC

RobC
Posts: 1970
Joined: 10 Mar 2018

13 Aug 2019

boingy wrote:
13 Aug 2019
No. No there isn't.
That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...

RobC
Posts: 1970
Joined: 10 Mar 2018

13 Aug 2019

selig wrote:
13 Aug 2019
The only time I've seen such a setup work was with samplers hosted on one computer, and the main DAW on another, and the motivation for that particular setup was to host huge sample libraries online all at the same time. Was a pain to work with (since I'm used to using one integrated system), but the guy who owned the studio loved it.
Yeah, that seems to be the only way.

It would have been useful for heavy tasks, such as processing 32-bit / 768 kHz audio, which allows tons of heavily destructive processing with digital sample-based synthesis before artifacts start building up.
You can see what remains of an 8-second note sample after a dozen rounds of re-sampling with various processing applied (especially FM effects on a sample): it can end up nasty, with fewer and fewer processing possibilities left, not to mention what happens when mapping the finished sample to keys 1-2 octaves higher or lower.

EDIT: Oops, the 32 / 768 thing is for Reaktor, and weird things. : )

User avatar
selig
RE Developer
Posts: 12121
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 Aug 2019

RobC wrote:
13 Aug 2019
boingy wrote:
13 Aug 2019
No. No there isn't.
That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...
Because rendering is not real time. You could certainly use a CPU farm to offline render audio, but it wouldn't make sense because if your CPU can't play the audio in real time, you can't mix or overdub. And if you CAN play the audio in real time, then you can already render faster than real time, so there's no need for a render farm.
Selig Audio, LLC
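Selig's real-time argument comes down to a hard deadline per audio buffer. A minimal sketch of that deadline (buffer size and sample rate are illustrative assumptions, not figures from the thread):

```python
# Real-time audio is a hard-deadline problem: each buffer must be
# computed before the previous one finishes playing back.
SAMPLE_RATE = 44_100   # Hz (CD-quality host rate)
BUFFER_SIZE = 256      # samples per audio callback, a common setting

deadline_ms = BUFFER_SIZE / SAMPLE_RATE * 1000
print(f"Each buffer must be ready within {deadline_ms:.2f} ms")

# A render farm only helps when there is no deadline (an offline bounce).
# If the CPU already meets this deadline, it can also bounce the mix
# faster than real time on its own, so the farm buys nothing.
```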

User avatar
selig
RE Developer
Posts: 12121
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 Aug 2019

RobC wrote:
13 Aug 2019
selig wrote:
13 Aug 2019
The only time I've seen such a setup work was with samplers hosted on one computer, and the main DAW on another, and the motivation for that particular setup was to host huge sample libraries online all at the same time. Was a pain to work with (since I'm used to using one integrated system), but the guy who owned the studio loved it.
Yeah, that seems to be the only way.

It would have been useful for heavy tasks, such as processing 32-bit / 768 kHz audio, which allows tons of heavily destructive processing with digital sample-based synthesis before artifacts start building up.
You can see what remains of an 8-second note sample after a dozen rounds of re-sampling with various processing applied (especially FM effects on a sample): it can end up nasty, with fewer and fewer processing possibilities left, not to mention what happens when mapping the finished sample to keys 1-2 octaves higher or lower.

EDIT: Oops, the 32 / 768 thing is for Reaktor, and weird things. : )
Again, offline processing is totally different from typical music production. Also note that multiple CPUs would require advanced software to distribute the CPU tasks, and then to re-integrate them back into the live stream which would both take considerable CPU cycles on their own - mostly negating any advantage to multiple CPUs in the first place. And there would be no advantage to multiple CPUs if the act of managing the data is going to cause skipping and gaps, right? That's the very thing you're trying to avoid in the first place!
Selig Audio, LLC

RobC
Posts: 1970
Joined: 10 Mar 2018

13 Aug 2019

selig wrote:
13 Aug 2019
RobC wrote:
13 Aug 2019


That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...
Because rendering is not real time. You could certainly use a CPU farm to offline render audio, but it wouldn't make sense because if your CPU can't play the audio in real time, you can't mix or overdub. And if you CAN play the audio in real time, then you can already render faster than real time, so there's no need for a render farm.
Ah, I didn't realize they're mainly used for rendering. And reading your second reply, I see it's not as simple as I thought.
It's just that I don't know what to expect from oversampling. Then again, processing a static, monophonic, mono sample shouldn't be that demanding; at most when it comes to sample-based multi-operator FM processing. However, oversampling might weaken the destructive effect that happens with pitch "distortion" (raising a sample's pitch, then rendering/re-sampling that).

User avatar
jam-s
Posts: 3208
Joined: 17 Apr 2015
Location: Aachen, Germany
Contact:

13 Aug 2019

RobC wrote:
13 Aug 2019
boingy wrote:
13 Aug 2019
No. No there isn't.
That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...
One word: Latency
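jam-s's one-word answer can be put in rough numbers. A purely illustrative budget for splitting real-time DSP across two machines on a LAN (all figures below are assumptions, not measurements):

```python
# How much of one audio block survives the round trip to a second machine?
buffer_samples = 128
sample_rate = 48_000
block_ms = buffer_samples / sample_rate * 1000   # duration of one block

network_rtt_ms = 1.0   # optimistic gigabit-LAN round trip
packing_ms = 0.5       # serializing/deserializing audio for the wire

dsp_budget_ms = block_ms - network_rtt_ms - packing_ms
print(f"Block: {block_ms:.2f} ms, left for remote DSP: {dsp_budget_ms:.2f} ms")
# More than half the block is gone before any processing happens,
# and a single late packet means an audible dropout.
```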

User avatar
moneykube
Posts: 3523
Joined: 15 Jan 2015

14 Aug 2019

rewire
?
multiple licensed computers... got 9.5 and 10 separate... but never tried it yet
says 2 licenses
https://soundcloud.com/moneykube-qube/s ... d-playlist
Proud Member Of The Awesome League Of Perpetuals

User avatar
selig
RE Developer
Posts: 12121
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

14 Aug 2019

RobC wrote:
13 Aug 2019
selig wrote:
13 Aug 2019


Because rendering is not real time. You could certainly use a CPU farm to offline render audio, but it wouldn't make sense because if your CPU can't play the audio in real time, you can't mix or overdub. And if you CAN play the audio in real time, then you can already render faster than real time, so there's no need for a render farm.
Ah, I didn't realize they're mainly used for rendering. And reading your second reply, I see it's not as simple as I thought.
It's just that I don't know what to expect from oversampling. Then again, processing a static, monophonic, mono sample shouldn't be that demanding; at most when it comes to sample-based multi-operator FM processing. However, oversampling might weaken the destructive effect that happens with pitch "distortion" (raising a sample's pitch, then rendering/re-sampling that).
To be clear, are you talking about real time or non real time processing here? They are two completely different things, especially as it applies to CPU resources.
I answered the question assuming real time was your goal, but your later comments seem to possibly imply offline rendering, no?
Selig Audio, LLC

jlgrimes
Posts: 679
Joined: 06 Jun 2017

14 Aug 2019

RobC wrote:
13 Aug 2019
Yeah, so I want to know if there's any way to make both a laptop and a desktop PC work together? Running Reason for example (though Reaktor would be nice, too), combining their processing power, RAM, SSD/HDD, etc.

You probably could buy an audio interface with midi i/o and treat the laptop as an external instrument. So once you start getting to the limit on your main computer, you could start using your laptop externally.


Another method would be using Ableton Link (I believe Reason is compatible) and as long as PCs are on same wifi network you can compose independently on both machines and export the audio into the other computer.

RobC
Posts: 1970
Joined: 10 Mar 2018

14 Aug 2019

jam-s wrote:
13 Aug 2019
RobC wrote:
13 Aug 2019


That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...
One word: Latency
Yeah, I pretty naively thought that two computers could just work together without problems.

RobC
Posts: 1970
Joined: 10 Mar 2018

14 Aug 2019

moneykube wrote:
14 Aug 2019
rewire
?
multiple licensed computers... got 9.5 and 10 separate... but never tried it yet
says 2 licenses
Not the same as what I wanted. I sort of thought that multiple devices could combine their 'brain power'. But yeah, it's like two people working together: it doesn't mean they'll be twice as intelligent as one.

RobC
Posts: 1970
Joined: 10 Mar 2018

14 Aug 2019

selig wrote:
14 Aug 2019
RobC wrote:
13 Aug 2019


Ah, I didn't realize they're mainly used for rendering. And reading your second reply, I see it's not as simple as I thought.
It's just that I don't know what to expect from oversampling. Then again, processing a static, monophonic, mono sample shouldn't be that demanding; at most when it comes to sample-based multi-operator FM processing. However, oversampling might weaken the destructive effect that happens with pitch "distortion" (raising a sample's pitch, then rendering/re-sampling that).
To be clear, are you talking about real time or non real time processing here? They are two completely different things, especially as it applies to CPU resources.
I answered the question assuming real time was your goal, but your later comments seem to possibly imply offline rendering, no?
It sort of has rendering parts, yes, but I will need to hear what's going on, in real time, during sound design. If the computer can handle the audio in real time, then rendering 8 seconds of it won't take much time.

Also, thank you, and thanks to the others for their input and help!

RobC
Posts: 1970
Joined: 10 Mar 2018

14 Aug 2019

jlgrimes wrote:
14 Aug 2019
RobC wrote:
13 Aug 2019
Yeah, so I want to know if there's any way to make both a laptop and a desktop PC work together? Running Reason for example (though Reaktor would be nice, too), combining their processing power, RAM, SSD/HDD, etc.

You probably could buy an audio interface with midi i/o and treat the laptop as an external instrument. So once you start getting to the limit on your main computer, you could start using your laptop externally.


Another method would be using Ableton Link (I believe Reason is compatible) and as long as PCs are on same wifi network you can compose independently on both machines and export the audio into the other computer.
While a useful idea/solution, this was more about trying to make two computers effectively do the same task in real time - especially handling very heavy oversampling.

RobC
Posts: 1970
Joined: 10 Mar 2018

14 Aug 2019

selig wrote:
14 Aug 2019
RobC wrote:
13 Aug 2019


Ah, I didn't realize they're mainly used for rendering. And reading your second reply, I see it's not as simple as I thought.
It's just that I don't know what to expect from oversampling. Then again, processing a static, monophonic, mono sample shouldn't be that demanding; at most when it comes to sample-based multi-operator FM processing. However, oversampling might weaken the destructive effect that happens with pitch "distortion" (raising a sample's pitch, then rendering/re-sampling that).
To be clear, are you talking about real time or non real time processing here? They are two completely different things, especially as it applies to CPU resources.
I answered the question assuming real time was your goal, but your later comments seem to possibly imply offline rendering, no?
By the way, when it comes to oversampling, I won't need to buy a DAC that supports 768,000 Hz, right? I mean, from what I've read in other threads, it seems nothing more than 44,100 Hz is needed for the digital-to-analog conversion itself.

User avatar
submonsterz
Posts: 989
Joined: 07 Feb 2015

14 Aug 2019


Reaper has been able to do it for a long while...

User avatar
Oquasec
Posts: 2849
Joined: 05 Mar 2017

14 Aug 2019

Nope. You need an interface, and a CPU that scores at least 2000-3000 in benchmarks at max power, for radio hits or films.
Producer/Programmer.
Reason, FLS and Cubase NFR user.

PeterP
Posts: 84
Joined: 26 Apr 2016
Location: Gothenburg, Sweden

15 Aug 2019

jam-s wrote:
13 Aug 2019
RobC wrote:
13 Aug 2019


That's a bummer... Yet 3D animation studios use special CPU-farms. Why not for audio...
One word: Latency
Technically you can quite easily get down to single-digit millisecond latency, with enough bandwidth to send audio between computers on a local network. A stable clock between computers is another problem, but Ableton Link or similar is one solution for that.

I think the answer is that there isn't strong demand for it, so no DAW developer has spent development resources on it.

User avatar
submonsterz
Posts: 989
Joined: 07 Feb 2015

15 Aug 2019

I had no problem at all with two computers linked via Ethernet/crossover: no timing issues, it worked a charm. The only roadblock was the multiple-licence nonsense on my favourite plugins. I won't pay twice for plugins I already own just to use the feature Reaper has offered for years, which works flawlessly on my end. And PeterP, to say there's no demand for it: there's plenty of demand, people just won't be shafted by the multiple-licence terms that come with it. So it's not about demand, it's the greed that keeps people from raving about it. I know many people who would love the feature for our favourite audio applications, and for the growing number of computers left over from upgrading.

User avatar
bitley
Posts: 1673
Joined: 03 Jul 2015
Location: sweden
Contact:

15 Aug 2019

I've certainly been thinking about combining Reason with other sequencers. I love hardware so just minutes ago I looked at a MC-500 from 1987, I just love it and I want to get it if it's still available. I've got tons of sequencers but they are just a means to capture whatever might pop up so why not. Just about all of my hw keyboards are workstations nowadays, but seeing a demo of this one made me miss old junos. But Reason can be used as a multitimbral synth module and it can be exciting to try this out in order to create things in a different way.

RobC
Posts: 1970
Joined: 10 Mar 2018

15 Aug 2019

Oquasec wrote:
14 Aug 2019
Nope. You need an interface, and a CPU that scores at least 2000-3000 in benchmarks at max power, for radio hits or films.
Hmm, so if I want to make use of the 768,000 Hz oversampling in Reaktor, I'll need such a DAC after all? Because so far, most threads, blogs, and forums have said a 44,100 Hz DAC is all one needs. Kind of confused there: could we make Reason work at 192,000 Hz while using a 44,100 Hz sound card? Maybe ASIO4ALL can do that trick...
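On the DAC question raised above: a plugin's internal oversampling normally never reaches the sound card, because the signal is decimated back to the host rate before output. A toy sketch of that pipeline (zero-order-hold upsampling and no filtering, purely for illustration; real oversamplers interpolate and low-pass around the rate changes):

```python
import math

HOST_RATE = 44_100   # what the DAC actually runs at
FACTOR = 4           # internal oversampling, i.e. 176.4 kHz inside the plugin

def process_oversampled(block):
    # 1) upsample: crude zero-order hold (repeat each sample FACTOR times)
    up = [s for s in block for _ in range(FACTOR)]
    # 2) nonlinear processing at the high internal rate (soft clipping here)
    up = [math.tanh(2.0 * s) for s in up]
    # 3) decimate back to the host rate before output
    return up[::FACTOR]

block = [math.sin(2 * math.pi * 1000 * n / HOST_RATE) for n in range(64)]
out = process_oversampled(block)
assert len(out) == len(block)  # the DAC still sees 44.1 kHz audio
```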

User avatar
bitley
Posts: 1673
Joined: 03 Jul 2015
Location: sweden
Contact:

15 Aug 2019

Nah, google for Audio Myths... :)

RobC
Posts: 1970
Joined: 10 Mar 2018

15 Aug 2019

bitley wrote:
15 Aug 2019
Nah, google for Audio Myths... :)
I'll look it up, but to speed up the process: for playback, the common 44,100 Hz is all one needs, no? (Heck, sometimes I think 30,000 would do, because after the 1/2 calculation you get to 15 kHz - and above that, in adulthood, not much is heard.)
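The "1/2 calculation" mentioned here is the Nyquist limit, which is easy to check (the hearing ceiling figure is the usual textbook approximation):

```python
# A sample rate of fs can represent frequencies only up to fs / 2 (Nyquist).
HEARING_CEILING = 20_000  # Hz, rough upper limit of young adult hearing

for fs in (30_000, 44_100, 192_000):
    print(f"{fs} Hz sampling -> {fs // 2} Hz of bandwidth")

# 44,100 Hz yields 22,050 Hz of bandwidth, already above the ~20 kHz
# ceiling; higher DAC rates add processing headroom, not audible range.
assert 44_100 // 2 > HEARING_CEILING
```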
