Would you consider the player generators (QNG, BLG, PM, etc.) A.I.?

This forum is for discussing Rack Extensions. Devs are all welcome to show off their goods.
arnigretar
Posts: 453
Joined: 15 May 2020
Location: Iceland

13 Dec 2022

crimsonwarlock wrote:
13 Dec 2022
arnigretar wrote:
13 Dec 2022
So if players ain't A.I., does that mean they are not my friends anymore? :puf_unhappy:
:lol:
Some people are friends with a doorpost, so I see no problem here :lol: :thumbup:
Hehehe :lol:
https://futuregrapher.bandcamp.com/

Reason 12, Ableton Live 10 Suite, Roland Cloud, Arturia V9, Korg Legacy 3, Soundtoys 5, Waves Mercury, Sonic Charge Bundle, N.I.: Massive, Reaktor 6, FM8. + a lot of Hardware. Windows 7/10.

selig
RE Developer
Posts: 11739
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

13 Dec 2022

avasopht wrote:
13 Dec 2022
selig wrote:
13 Dec 2022


Noise reduction systems have been doing this for decades, reading a sample of the noise and constructing a ‘filter’ to remove it. Same for plugins like VocAlign, where they ‘learn’ the input audio and then do ‘something’ in response.
So no, not AI, at least not how we refer to it today (or at best, a highly simplified/primitive version of it?).
But machine learning does far more than create a filter ... it can separate dialogue from crowd noise, other people talking in the background, the train that passes by during filming.

I'm not sure if Sony has released the remasters of the movies they restored with machine learning, but no, nothing comes close to what ML can do.
We are talking about different things - I’m only speaking of the simple technologies where you analyze a track etc., like the OP’s example of EQs that “learn” a spectral response (a single data set), which is not AI, right? That was my point.

And you are talking about advanced AI machine learning where tons of training data is analyzed.

It’s very different to analyze one source and use THAT data to create a filter/IR, compared to using machine learning, where thousands (or more) of sources are analyzed and used to build a model (or models) that the AI engine uses to ‘recognize’ patterns (like a passing train, or a range of human voices) and then isolate those patterns from the rest of the data, so you can do something to only THAT data (like remove it, or just isolate it for further separate processing).
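
To make the contrast concrete, here's a minimal sketch of the 'single data set' approach - learn one noise profile, build one set of gains from it. The frame size, the naive DFT, and the lack of windowing/overlap are all simplifying assumptions for illustration, not how any particular product does it.

Code: Select all

#include <algorithm>
#include <complex>
#include <cstddef>
#include <vector>

static const int kFrameSize = 256;               // analysis frame size (assumption)
static const double kPi = 3.14159265358979323846;

// Naive DFT magnitude spectrum of one frame (a real implementation uses an FFT).
std::vector<double> magnitudeSpectrum(const std::vector<double>& frame) {
    std::vector<double> mag(kFrameSize / 2 + 1);
    for (int k = 0; k <= kFrameSize / 2; ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (int n = 0; n < kFrameSize; ++n)
            sum += frame[n] * std::polar(1.0, -2.0 * kPi * k * n / kFrameSize);
        mag[k] = std::abs(sum);
    }
    return mag;
}

// Step 1: "learn" the noise - average the spectrum over a noise-only clip.
std::vector<double> learnNoiseProfile(const std::vector<std::vector<double>>& noiseFrames) {
    std::vector<double> profile(kFrameSize / 2 + 1, 0.0);
    for (const auto& frame : noiseFrames) {
        const std::vector<double> mag = magnitudeSpectrum(frame);
        for (int k = 0; k <= kFrameSize / 2; ++k)
            profile[k] += mag[k] / noiseFrames.size();
    }
    return profile;
}

// Step 2: the "filter" - per-bin gains via simple spectral subtraction.
std::vector<double> binGains(const std::vector<double>& signalMag,
                             const std::vector<double>& noiseProfile) {
    std::vector<double> gains(signalMag.size());
    for (std::size_t k = 0; k < signalMag.size(); ++k) {
        const double clean = std::max(signalMag[k] - noiseProfile[k], 0.0);
        gains[k] = (signalMag[k] > 1e-12) ? clean / signalMag[k] : 0.0;
    }
    return gains; // scale each complex bin by its gain, then inverse-transform
}

One profile in, one set of gains out - no training corpus and no generalization, which is exactly the distinction being drawn here.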
Selig Audio, LLC

avasopht
Competition Winner
Posts: 3948
Joined: 16 Jan 2015

13 Dec 2022

selig wrote:
13 Dec 2022

We are talking about different things - I’m only speaking of the simple technologies where you analyze a track etc., like the OP’s example of EQs that “learn” a spectral response (a single data set), which is not AI, right? That was my point.
Oh right, yes, I completely misread all of that.

Faastwalker
Posts: 2282
Joined: 15 Jan 2015
Location: NSW, Australia

14 Dec 2022

Depends on the Player, I think. Some of them feel like they just do their own thing to me. QNG I struggle with in this sense. For me it's like one of those automatic pianos in old Western movies: hit play, and off it goes singing its own tune! You have to rein them in a bit, I think. Otherwise, it's just painting by numbers, isn't it? That said, I love them and use them as often as possible, especially with my modular set-up. Really cool having that kind of control of modular.

Enlightenspeed
RE Developer
Posts: 1105
Joined: 03 Jan 2019

18 Dec 2022

avasopht wrote:
12 Dec 2022
Enlightenspeed wrote:
12 Dec 2022
There are different definitions of AI, but for all Rack Extensions it's impossible to do AI as a rule.
There's nothing stopping the use of reinforcement learning and neural networks in Rack Extensions (other than there probably is no real need).

There may be issues with saving (I've not checked out the SDK since 2016 ... so maybe things have changed by now ;)).

You will have to roll out your own library and figure out how to make it work with the RE limitations (I wrote a few classes to make allocation in real-time code fairly painless).
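
(For anyone curious, such helper classes might wrap something like the following fixed-capacity pool - all storage is reserved up front so the audio thread never touches the heap. The names and the no-destructor simplification here are purely illustrative, not taken from anyone's actual library.)

Code: Select all

#include <cstddef>
#include <new>

// Fixed-capacity object pool: all storage lives inside the object itself,
// so acquire() is safe to call from real-time code. Illustrative only.
template <typename T, std::size_t Capacity>
class FixedPool {
public:
    // Returns nullptr when exhausted; never allocates at run time.
    T* acquire() {
        if (count_ == Capacity) return nullptr;
        return new (&storage_[count_++ * sizeof(T)]) T();
    }
    // Simplification: assumes T is trivially destructible; a real pool
    // would run destructors and track free slots individually.
    void releaseAll() { count_ = 0; }
private:
    alignas(T) unsigned char storage_[Capacity * sizeof(T)];
    std::size_t count_ = 0;
};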

The real problem is making it useful.

Now that being said ... QMUL audio professor Marcus Pearce wrote a paper (A Probabilistic Model of Meter Perception: Simulating Enculturation) that gives you one of the only practical theoretical models based on human perception of rhythm that could be used to generate rhythms on a per-genre basis.

He has written code to demonstrate the theory (not included in the paper), but the model is included.

Worth checking out some of the sources he cites, as there's an interesting model (strikingly similar to what is used in Propellerhead's Bassline generator) ... plus lots of findings on the perception of melody that could help you out with generation.
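
For a rough idea of what per-genre rhythm generation looks like mechanically, here is a toy sketch: per-position onset probabilities for one bar, sampled step by step. The probability table is invented for illustration - it is not Pearce's model (which builds much richer, corpus-derived expectations) nor anything from a Propellerhead device.

Code: Select all

#include <array>
#include <random>
#include <vector>

static const int kSteps = 16; // one bar of 16th notes (assumption)

// Hypothetical "genre profile": probability of an onset at each 16th position.
// A real model would learn these weights from a corpus rather than hard-code them.
using Profile = std::array<double, kSteps>;

const Profile kFourOnFloorish = {
    0.95, 0.05, 0.20, 0.05, 0.90, 0.05, 0.20, 0.05,
    0.95, 0.05, 0.20, 0.05, 0.90, 0.05, 0.30, 0.10};

std::vector<bool> generateBar(const Profile& profile, std::mt19937& rng) {
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    std::vector<bool> onsets(kSteps);
    for (int i = 0; i < kSteps; ++i)
        onsets[i] = coin(rng) < profile[i]; // heavier metrical weight, likelier onset
    return onsets;
}

Swap in a different profile and you get a different "genre"; the interesting part of the research is where those weights come from.
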
Hi Avasopht,

I had a look at this paper, or at least the abstract and intro, and I think I see what you're getting at now.

Yes, you can train an object and then have that be part of a Rack Extension; you just add the object as a BLOB.

However, you can't have an RE do the machine learning. This isn't an official constraint; it's more to do with memory allocation limits. There's a maximum of 1024 memory addresses that you are allowed, and once you factor in that the device still has to have lots of other things present in order to fulfil some function, it starts to get tight. Crucially, you can't write to a BLOB, and you are not allowed globals - so ultimately any data trained during the session stays in that session.
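
In other words, the practical pattern is: train offline, bake the result in read-only, and keep the run-time side allocation-free. A minimal sketch of that shape follows - the weights are placeholders standing in for data that would come out of offline training and be packaged with the device, and none of the names come from the RE SDK.

Code: Select all

#include <cmath>

static const int kIn = 4;
static const int kHidden = 3;

// Trained offline and shipped read-only (standing in for BLOB contents);
// const data, not mutable global state. Values are placeholders.
static const float kW1[kHidden][kIn] = {
    { 0.10f, -0.30f,  0.20f,  0.05f},
    {-0.20f,  0.40f,  0.10f, -0.10f},
    { 0.30f,  0.10f, -0.20f,  0.20f}};
static const float kB1[kHidden] = {0.0f, 0.1f, -0.1f};
static const float kW2[kHidden] = {0.5f, -0.4f, 0.3f};
static const float kB2 = 0.0f;

// One forward pass of a tiny fixed-size network: fixed loop bounds and
// stack storage only - nothing here allocates or writes shared state,
// which is the shape the constraints above push you toward.
float forward(const float in[kIn]) {
    float hidden[kHidden];
    for (int j = 0; j < kHidden; ++j) {
        float acc = kB1[j];
        for (int i = 0; i < kIn; ++i)
            acc += kW1[j][i] * in[i];
        hidden[j] = std::tanh(acc);
    }
    float out = kB2;
    for (int j = 0; j < kHidden; ++j)
        out += kW2[j] * hidden[j];
    return out;
}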

It's also worth noting that writing to document owner strings can't be done from the realtime code; you have to use gestures to do this, and thus the machine isn't the one doing the learning :)

So, to be clear, I'm not suggesting that you can't do it because it's barred; it's just not practical to do so. Apologies for my original turn of phrase - I can see that my wording absolutely suggested a hard constraint. My bad, sorry about that.

Cheers,
Brian
