I've no idea about RE development, but - based on what you know about the architecture - would it be possible for RS to add* MPE retroactively to existing RE devices? In my limited understanding, the practical result of MPE is running multiple copies of the same instrument, with per-note expression affecting the per-voice parameters: oscillator settings, pitch, panning, amp & filter envelopes, filters, FX, etc.
Do you guys think it's possible at all?
----------------
* I'm assuming that all necessary changes to the MIDI data stream & sequencer side will be made so that MPE (and MIDI 2.0) works the way it does in VSTs.
MPE in existing Rack Extensions
As far as I know, MPE carries 3 main data channels, like XYZ in 3D:
- Pitch bend,
- Key position (timbre)
- Aftertouch (amplitude)
RS may need to create a special standard for receiving this data from an MPE device into Reason devices. That would open the possibility of modulating 3 properties at once in a Reason device receiving a signal from an MPE controller.
That's my understanding; I may be wrong.
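On the wire, those three dimensions are ordinary MIDI messages sent on a per-note member channel: 14-bit pitch bend, CC74 for timbre, and channel pressure for aftertouch. A minimal sketch of decoding them in plain C++ (no RE SDK code; `MpeEvent` and the normalisation ranges are my own illustrative choices):

```cpp
#include <cstdint>

// The three per-note MPE dimensions carried as ordinary MIDI messages.
enum class MpeDim { PitchBend, Timbre, Pressure, None };

struct MpeEvent {
    MpeDim  dim;
    uint8_t channel;   // MPE member channel = one active note
    double  value;     // normalised: pitch bend -1..1, others 0..1
};

// Decode one raw MIDI message into an MPE dimension, if it is one.
// Timbre is CC74 per the MPE specification (the "slide" / Y axis).
MpeEvent decodeMpe(uint8_t status, uint8_t data1, uint8_t data2) {
    uint8_t channel = status & 0x0F;
    switch (status & 0xF0) {
        case 0xE0: {                       // pitch bend, 14-bit (X)
            int bend14 = (data2 << 7) | data1;
            return {MpeDim::PitchBend, channel, (bend14 - 8192) / 8192.0};
        }
        case 0xB0:                         // control change
            if (data1 == 74)               // CC74 = timbre (Y)
                return {MpeDim::Timbre, channel, data2 / 127.0};
            break;
        case 0xD0:                         // channel pressure (Z)
            return {MpeDim::Pressure, channel, data1 / 127.0};
    }
    return {MpeDim::None, channel, 0.0};
}
```

The key point is the channel byte: because each note gets its own channel, the same three message types become per-note rather than per-instrument.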
Last edited by turn2on on 30 Nov 2023, edited 1 time in total.
- Enlightenspeed
- RE Developer
- Posts: 1112
- Joined: 03 Jan 2019
It's possible for them to open up the standard, but existing devices would then need to be updated so that their native C++ objects recognise and react to the new incoming signal. It is not legal for RS to do this by themselves without the developer's permission, and it would be a hell of a lot of effort for them anyway, with exceptionally little reward.
turn2on wrote: ↑30 Nov 2023
As far as I know, MPE carries 3 main data channels, like XYZ in 3D:
- Pitch bend,
- Key position (timbre)
- Aftertouch (amplitude)
A standard for this data needs to be created so it can be received from an MPE device. That would open the possibility of modulating 3 properties at once in Reason devices receiving a signal from an MPE device.
That's my understanding; I may be wrong.

Difficult to know for sure, but Gorilla Engine stuff might be an exception to my statement above. From what I can tell it would only require inserting some extra scripting for predefined targets, so this is a bit more of a grey area, as it would be purely additive.
turn2on wrote: ↑30 Nov 2023
- Pitch bend,
- Key position (timbre)
- Aftertouch (amplitude)

That third one is typically referred to as "pressure".
GE is compatible with MPE.
Reason's core is not.
REs work with up to 1024 properties (maybe more) at once, and already read all incoming MIDI data (CC, NRPN).
What's needed is MPE support added to the SDK, including the 3-channel signal data.
I'm not deep into the MPE topic, but for GE, for example, there's nothing special about adding control of 3 CCs at once, for whatever modulation you need.
Is it really "nothing special"? MPE isn't primarily about note data; that part seems trivial, IMO. It's mostly about how that data then impacts the per-voice processing of audio.
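The per-voice point can be made concrete: since each sounding MPE note lives on its own member channel, a synth has to keep separate voice state per channel and apply the expression before rendering each voice. A rough sketch, with entirely hypothetical names (this is not RE SDK code, just the shape of the problem):

```cpp
#include <array>
#include <cmath>

// Hypothetical per-voice state; in a real RE this would live inside
// the device's native C++ object. One voice per MPE member channel.
struct Voice {
    bool   active      = false;
    double basePitchHz = 440.0;
    double bend        = 0.0;   // -1..1 from per-channel pitch bend
    double pressure    = 0.0;   // 0..1 from per-channel aftertouch
};

struct MpeSynth {
    std::array<Voice, 16> voices;     // indexed by MIDI channel
    double bendRangeSemis = 48.0;     // MPE default is +/-48 semitones

    // Per-channel expression lands only on that channel's voice.
    void setBend(int ch, double bend)  { voices[ch].bend = bend; }
    void setPressure(int ch, double p) { voices[ch].pressure = p; }

    // Effective per-voice parameters, computed at render time.
    double voicePitchHz(int ch) const {
        const Voice& v = voices[ch];
        return v.basePitchHz * std::pow(2.0, v.bend * bendRangeSemis / 12.0);
    }
    double voiceAmp(int ch) const {   // e.g. pressure drives amplitude
        return voices[ch].pressure;
    }
};
```

This is exactly what a mono-timbral, single-channel device architecture cannot do retroactively: there is nowhere to hang the per-channel state without restructuring the voice code.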
Yes, from one side this can be a modulation story for released REs: like a matrix for incoming MPE, where the user selects which knobs are changed by each MPE source, like a 3D XYZ. A simple model for already-published RE products.
But first, you must have support for receiving the combined MPE data signal (an updated SDK and Reason).
How an RE receives it and what it does with it is a second question, for released REs and for new ones in the future.
A developer can define, for each RE, its own way to modulate something.
Reason needs to translate the MPE device's source signal to the RE; that's a question for the Reason core and the SDK.
How can an RE developer use it?
1/ As a mod source: a set of MIDI CCs controlling a set of knobs, for example in released REs. Why not, as a lite variant.
2/ Voice processing by MPE is a device-architecture question, and can only be used in new RE products.
First, the standard needs to be supported in Reason and the SDK.
It may be possible to use the MPE signal as a control-source signal: a virtual property with a matrix where we select what to do with the device's control elements. As one variant.
But yes, for any new RE, devs could use it for deeper integration, deciding how it works with the voices in their device architecture, etc.
The way to integrate it into an RE is a two-part question for developers who want MPE possibilities in the SDK and Reason:
1/ Devs can use it for voice manipulation in future RE products, as an option.
2/ Use it in already-published REs, maybe with a default matrix slot to select what to modulate in an instrument or effect (like the Editor/Programmer in the Combinator works).
Maybe, as a first step, MPE support as a source in the Combinator matrix would be enough.
That would help realise various things with an MPE controller. Bigger things need bigger changes.
These are just my thoughts :)