GPU Audio
-
- Posts: 159
- Joined: 17 Jan 2015
I am not sure if anyone else has seen the videos coming out of last year's NAMM that highlight audio plugins processed via a computer's GPU. There is a company putting out a convolution reverb that sounds absolutely amazing: GPU Audio (great name).
Anyway, I hope this takes off and is implemented in future updates of our Mac and Windows operating systems or in our DAWs.
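For anyone wondering why convolution reverb in particular was an early GPU showcase: each output sample of a convolution depends only on the input and the impulse response, never on other output samples, so the work spreads naturally across thousands of GPU threads. A minimal illustrative sketch in plain C++ (the function name is mine, and a shipping product like GPU Audio's almost certainly uses far more efficient FFT-based partitioned convolution):

Code: Select all

#include <cstddef>
#include <vector>

// Direct (naive) convolution of a dry signal with an impulse response.
// Each output sample depends only on the inputs, not on other outputs,
// so the outer loop is "embarrassingly parallel" -- on a GPU, each
// iteration could be its own thread.
std::vector<float> convolve(const std::vector<float>& dry,
                            const std::vector<float>& ir)
{
    std::vector<float> wet(dry.size() + ir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < wet.size(); ++n) {      // parallel per n
        float acc = 0.0f;
        for (std::size_t k = 0; k < ir.size(); ++k) {
            if (n >= k && (n - k) < dry.size())
                acc += ir[k] * dry[n - k];
        }
        wet[n] = acc;
    }
    return wet;
}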
-
- Posts: 2599
- Joined: 03 May 2020
Interesting stuff. Maybe in the future we'll be requesting simpler UIs and a return to low-res graphics in order to free up the GPU!
-
- RE Developer
- Posts: 12111
- Joined: 15 Jan 2015
- Location: The NorthWoods, CT, USA
Interesting parallel developments: Folks are moving away from accelerated systems like Pro Tools and UAD because modern CPUs have more than enough power for the biggest projects. At the same time, other folks are developing accelerated systems using the GPU to take the load off of the CPU… Film at 11.
Selig Audio, LLC
-
- Posts: 571
- Joined: 03 May 2022
I don’t think we’ll have to worry on that front. GPU capacity is exploding; it’s quite incredible to see after things leveled off in the CPU world (well, apart from adding more cores).
The overall power we have at our disposal is staggering, but we’ll still max it out and crave more!
Software: Reason 12 + Objekt, Vintage Vault 4, V-Collection 9 + Pigments, Vintage Verb + Supermassive
Hardware: M1 Mac mini + dual monitors, Launchkey 61, Scarlett 18i20, Rokit 6 monitors, AT4040 mic, DT-990 Pro phones
-
- Competition Winner
- Posts: 4072
- Joined: 16 Jan 2015
Some DSP algorithms translate well to the GPU. And some don't.
Additive synthesis and some types of differentiable systems (including "AI") could see a 30x speedup.
Differentiable systems are insanely good at modelling sounds and imagined cross-breeds.
For everything else, though, there seems to be little to no evidence that the strengths of the GPU offer gains meriting attention.
...
...
I think you might see the GPU being used for offline processes.
Think of a hybrid sampler/physical modelling synth that builds sampled instruments on the GPU that can then be played as a sampled instrument on the CPU.
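To make the additive synthesis point concrete: each partial is an independent sine oscillator, and the only shared step is the final sum, which is exactly the shape of workload GPUs are built for. A toy sketch (names and structure are illustrative only; the 30x figure above is the poster's estimate, not something this code demonstrates):

Code: Select all

#include <cmath>
#include <cstddef>
#include <vector>

struct Partial { float freq, amp; };

// Render one block of an additive voice. Each partial is independent,
// so on a GPU each partial (or each sample of each partial) could be
// computed by its own thread, with a parallel reduction for the sum.
void renderAdditive(const std::vector<Partial>& partials,
                    float sampleRate, std::size_t startSample,
                    std::vector<float>& out)
{
    const float twoPi = 6.28318530718f;
    for (std::size_t i = 0; i < out.size(); ++i) {
        float t = float(startSample + i) / sampleRate;
        float s = 0.0f;
        for (const Partial& p : partials)        // parallel per partial
            s += p.amp * std::sin(twoPi * p.freq * t);
        out[i] = s;
    }
}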
-
- Posts: 4437
- Joined: 19 Jan 2015
selig wrote: ↑11 Nov 2023
Interesting parallel developments: Folks are moving away from accelerated systems like Pro Tools and UAD because modern CPUs have more than enough power for the biggest projects. At the same time, other folks are developing accelerated systems using the GPU to take the load off of the CPU… Film at 11.

it was bound to happen. the only surprising thing is that we get to witness firsthand the shift from “we need more power! I want to stop having to freeze tracks to keep up!” to “oh hey, I didn’t even have to raise my buffer size once for this project—that’s cool!”
of course there will always be new plugins that are more demanding, but the increase in processing power is arguably far outstripping the increase in processing overhead in plugins, on average.
-
- Posts: 832
- Joined: 30 Dec 2020
- Location: East Bay, California
I used to look forward to small increases in CPU performance every few years.
Now that Apple has left Intel behind? The speed is starting to feel silly. Who needs to involve a GPU?
I will probably never choke Reason unless somebody asks me to mix something symphonic. And even then maybe not!
(Of course, I'd decline that project because for that amount of time I'd need to get paid a lot more money than my mix would deserve.)
I suppose it's possible using the GPU would enable some revolutionary improvement, but would I find it compelling? Hmmm.
- Shocker: I have a SoundCloud!
-
- Posts: 2599
- Joined: 03 May 2020
It's about five years since CPU power was last an issue for me making music. These days even a fairly modest system can do the job without having to pay too much attention to CPU. Of course, you can overload even the fastest modern system if you go for it but CPU is just not a major issue any longer. Quite a contrast to when virtual instruments first appeared. Back then it really did seem like they were making software for hardware that did not exist yet.
-
- Competition Winner
- Posts: 4072
- Joined: 16 Jan 2015
Always bear in mind that all of those analogue modelling synths your CPU can handle are cutting a lot of corners.
We don't notice most of the CPU brick walls because they're so prohibitively CPU intensive that they're not even on the menu.
You should be able to feed modular synths into each other in a feedback loop with sub-sample feedback responsiveness.
It's not that DSP engineers don't know how to do it. u-he has done a fantastic job in this area, but zero-delay feedback at the inter-device level is a lot more involved.
That being said, these analogue models are good enough for the latest generation of digital keyboard workstations!
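For anyone curious what zero-delay feedback means in practice: a naive digital feedback path has to use the previous sample's output (a one-sample delay), while a ZDF design solves the feedback equation algebraically so the output and the feedback agree within the same sample. Below is the textbook one-pole case, a rough sketch of the general idea rather than anyone's actual product code:

Code: Select all

#include <cmath>

// Naive one-pole smoother: the feedback term (x - z) uses the
// previous output z, i.e. there is an implicit unit delay in the loop.
struct NaiveOnePole {
    float z = 0.0f;
    float process(float x, float a) {        // a in (0, 1]
        z = z + a * (x - z);
        return z;
    }
};

// Zero-delay-feedback (topology-preserving transform) one-pole:
// the implicit equation y = s + g*(x - y) is solved for y directly,
// so input and feedback agree within the same sample.
struct ZdfOnePole {
    float s = 0.0f;                          // integrator state
    float process(float x, float cutoffHz, float sampleRate) {
        float g = std::tan(3.14159265f * cutoffHz / sampleRate);
        float v = (x - s) * g / (1.0f + g);
        float y = v + s;
        s = y + v;                           // trapezoidal state update
        return y;
    }
};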
-
- Posts: 571
- Joined: 03 May 2022
avasopht wrote: ↑14 Nov 2023
You should be able to feed modular synths into each other in a feedback loop with sub-sample feedback responsiveness.
It's not that DSP engineers don't know how to do it. u-he has done a fantastic job in this area, but zero-delay feedback at the inter-device level is a lot more involved.

That's the sort of stuff GPU processing eats for breakfast, so interesting times ahead!
Software: Reason 12 + Objekt, Vintage Vault 4, V-Collection 9 + Pigments, Vintage Verb + Supermassive
Hardware: M1 Mac mini + dual monitors, Launchkey 61, Scarlett 18i20, Rokit 6 monitors, AT4040 mic, DT-990 Pro phones
-
- RE Developer
- Posts: 12111
- Joined: 15 Jan 2015
- Location: The NorthWoods, CT, USA
avasopht wrote: ↑14 Nov 2023
Always bear in mind that all of those analogue modelling synths your CPU can handle are cutting a lot of corners.
We don't notice most of the CPU brick walls because they're so prohibitively CPU intensive that they're not even on the menu.
You should be able to feed modular synths into each other in a feedback loop with sub-sample feedback responsiveness.
It's not that DSP engineers don't know how to do it. u-he has done a fantastic job in this area, but zero-delay feedback at the inter-device level is a lot more involved.
That being said, these analogue models are good enough for the latest generation of digital keyboard workstations!

If that one limitation is the only thing that’s holding us back, then I don’t see the problem, because I can’t remember the last time I wanted to feed modular synths into each other in a feedback loop. I know there are some who do this with analog gear regularly (and I don’t do that either), but it has to be a small minority overall.
That said, and unless I’m misunderstanding you, this is only important if you want to precisely model specific qualities of analog modular gear. Even then, most of the important qualities are being modeled accurately these days, and I have to feel we are moving towards new digital synths that go beyond analog modeling, with AI abilities no analog synth could even dream of.
I have no idea where things are headed, but I feel like analog modeling has been “good enough” to make hits for many years now. I see software instruments as being like the Rhodes piano, which was initially designed to be a portable replacement for an acoustic piano. Like the Rhodes, software synths were initially a cheaper way to get those sounds into the hands of musicians, yet are now becoming instruments in their own right IMO. And just as with the Rhodes, where we no longer care that it doesn’t sound at all like a real piano (and love it for what makes it unique), I see the same thing happening to software synths.
I only see this trend progressing. And while CPU power is growing and will one day certainly allow near-perfect modeling of all analog circuits without breaking a sweat, some amazing things are being done with software instruments that don’t require 100% analog modeling to make fantastic new sounds. That’s how I see it from my personal vantage point; I’m certainly missing some aspects of these developments, but I feel I’ve described the big picture fairly well given my limited knowledge. Will be interested in how others see things progressing…
Selig Audio, LLC
-
- RE Developer
- Posts: 853
- Joined: 13 Mar 2015
I know the guys from GPU Audio and have followed their project's progress from the start; they've worked hard over the last few years. Good results and big plans.
-
- Posts: 832
- Joined: 30 Dec 2020
- Location: East Bay, California
Sure, though I think the point is better served with an appeal to physical modeling. Eventually, people will look down their noses at any piano plug-in which doesn't model every atom in the room. Quantum computing, man! But it's legit to ask at what point we're far enough up the curve of diminishing returns that maybe we should reallocate the engineering effort to something else. I don't know what it would be, but…
…whoa. I am clearly not a real artist.
- Shocker: I have a SoundCloud!
-
- Competition Winner
- Posts: 4072
- Joined: 16 Jan 2015
EDIT: I just found out that some analogue models are powered by SPICE. So maybe the stuff I described below is already pretty approachable on the CPU (I've only used SPICE for basic analogue circuits).
Physical modelling with the right UI or "director" interface with some AI-powered bandmates or orchestras (which could just work by taking your recorded MIDI and using it to perform the physical models) could give much more realistic recordings. Until that can run on our machines in real-time as a minimum, we've not reached the point of diminishing returns.
Sidenote: Roland has physical modelling on their keyboard workstations, so it might be CPU friendly already!
Analogue modelling, however, could also be a design tool. It doesn't have to be about emulating old analogue gear; it could just be a tool for designing digital synths and effects, working in conjunction with a Bitwig-style Grid, a Reaktor-style *whatever you'd call that*, or Reason racks. Having all tools at your disposal without CPU power being a prohibitive limit might open the doors to new intuitive possibilities for sound/instrument/synth design.
You could go all out with experiments like this (but x10):
Or this:
Of course, this is all possible to some extent with things like Cherry Audio, right? Or with Reason (especially with Complex-1). And I've gotten away with an insanely large number of Thor instances, so what I'm thinking about might already be approachable in Reason.
Maybe those physical models will just be used to produce more sophisticated, yet concise, sampled instruments? Or even a hybrid that uses physical modelling for expressions that can't be precomputed so well.
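On the CPU-friendliness of physical modelling: some physical models are famously cheap. Karplus-Strong plucked-string synthesis, for instance, is just a delay line with a gentle lowpass in the feedback path, and it ran fine on 1980s hardware. A bare-bones sketch of that classic algorithm (function name and constants are illustrative; assumes freqHz is well below sampleRate):

Code: Select all

#include <cstddef>
#include <cstdlib>
#include <vector>

// Karplus-Strong: a delay line one string period long, filled with
// noise, with two-point averaging in the loop acting as string damping.
std::vector<float> pluck(float freqHz, float sampleRate,
                         std::size_t numSamples)
{
    std::size_t period = static_cast<std::size_t>(sampleRate / freqHz);
    std::vector<float> delay(period);
    for (float& s : delay)                       // the "pluck": noise burst
        s = 2.0f * (std::rand() / float(RAND_MAX)) - 1.0f;

    std::vector<float> out(numSamples);
    std::size_t idx = 0;
    for (std::size_t n = 0; n < numSamples; ++n) {
        std::size_t next = (idx + 1) % period;
        float avg = 0.5f * (delay[idx] + delay[next]);  // lowpass = damping
        out[n] = delay[idx];
        delay[idx] = avg * 0.996f;               // slight extra decay
        idx = next;
    }
    return out;
}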
As for the GPU, ... look, effects and instruments already make use of SIMD instructions and other CPU performance tricks to accelerate processing (like pipelining) alongside multiprocessing. DSP is something that runs very well with SIMD and what you have on GPUs (which is why they're so important with AI and neural networks).
AMD was going to push Accelerated Processing Units (APUs), but I think OpenCL and CUDA put that to bed. Nvidia has added tensor cores to their GPUs, but they'd be much better placed on the CPU.
Until CPUs were multicore, however, that was always a stretch. You needed the right memory architecture and memory bus, for a start.
Anyway, Apple's M1 turned this all on its head with it all plonked on the same chip.
Buuut, ... if you're among those who need the most SIMD (which is more SIMT or SMT now), such as ML researchers/engineers or graphics people, you probably want much more than you'd fit in a consumer processor (think an Nvidia A100), so you might not see an Intel or AMD processor go as far as the M1 in putting those tensor cores on the CPU, because it would never meet those needs.
Just know that it's the exact same parallelism that's been helping to power DSP since the MMX extensions and multiple cores.
It's only because of a random course of events that those DSP-aligned cores ended up on the GPU and not the CPU. The book Good Strategy, Bad Strategy explains why this is the case (and why Nvidia dominates this area).
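To illustrate the SIMD point: applying one instruction to several samples at once is how many plugin inner loops are written today, via SSE/AVX on x86 or NEON on ARM; GPU "SIMT" generalizes the same idea to thousands of lanes. A tiny x86 SSE example of the kind of thing DSP programmers (and auto-vectorizing compilers) do:

Code: Select all

#include <immintrin.h>  // SSE intrinsics (x86)
#include <cstddef>

// Apply a gain to a buffer four samples at a time.
void applyGain(float* buf, std::size_t n, float gain)
{
    __m128 g = _mm_set1_ps(gain);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {                 // 4 floats per instruction
        __m128 v = _mm_loadu_ps(buf + i);
        _mm_storeu_ps(buf + i, _mm_mul_ps(v, g));
    }
    for (; i < n; ++i)                           // scalar tail
        buf[i] *= gain;
}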
integerpoet wrote: ↑14 Nov 2023
Sure, though I think the point is better served with an appeal to physical modeling. Eventually, people will look down their noses at any piano plug-in which doesn't model every atom in the room. Quantum computing, man! But it's legit to ask at what point we're far enough up the curve of diminishing returns that maybe we should reallocate the engineering effort to something else. I don't know what it would be, but…

I agree. I just gave that as an example, not a flagship feature.
Last edited by avasopht on 20 Nov 2023, edited 1 time in total.
-
- Competition Winner
- Posts: 4072
- Joined: 16 Jan 2015
selig wrote: ↑14 Nov 2023
If that one limitation is the only thing that’s holding us back, then I don’t see the problem, because I can’t remember the last time I wanted to feed modular synths into each other in a feedback loop. I know there are some who do this with analog gear regularly (and I don’t do that either), but it has to be a small minority overall.

Definitely a small minority.
I wasn't advocating for it or suggesting it was important for everyday music making, just pointing out the self-selection effect that results in analogue models being able to run in real-time on CPUs (because they're cutting a few corners to make it run in real-time on our CPUs).
MeldaProduction, however, scoffs at analogue modelling and says it's a waste of CPU power.
They said you can get the same result with approaches that are better suited for the digital domain.
Either way, most DSP is very well suited to the components in your GPU, and those components are largely on the GPU through mere happenstance (hence why SIMD and sometimes multiprocessing are utilized in real-time DSP).
In DSP and computer science, there's the straightforward theoretical formula that will work given enough computing power, and then there's what we actually use (optimisations for real-time computation, ... some faking and compromise).
You see it with video game AI. Games don't typically use the AI theory covered at uni. Instead, it's all hand-tuned hacks, dice rolls, and smoke and mirrors.
---
But as I said before, I'm spending most of my time making music on a CPU on par with a 2009 iMac processor (that's roughly how powerful the MPC processor is).
-
- Competition Winner
- Posts: 4072
- Joined: 16 Jan 2015
I think I need to really sit down and see what my CPU is capable of.
I've shied away from doing a lot of modular synthesis on the CPU because it always feels like I'm running on a cheap imitation, but feed-forward modular synthesis is much better done on CPUs than with analogue hardware IMO.
I'm definitely with Melda on that one.
-
- Competition Winner
- Posts: 109
- Joined: 06 Jul 2015
- Location: Milano
Audio Modeling partnership with GPU Audio
Music Will Save Us