My attempt at a transient/sustain splitter combinator (only stock devices) - any improvements?

Sengin
Posts: 6
Joined: 21 Dec 2021

Post 21 Dec 2021

Hello, I'm pretty new to Reason, and I noticed a couple of small missing things whose addition would be a huge improvement. Reason is so modular that as long as you have the "small" pieces (i.e. building blocks), you can put them together to do virtually anything you want. However, when I started working with it, I noticed these missing pieces that make some things more difficult than necessary, or impossible, for new users:

1) Frequency crossover/splitter

Stereo Imager can almost fulfill this role, but it's limited to splitting bands between 100 Hz and 6000 Hz (it works fantastically if you need to split in this range, though). There used to be a free Rack Extension for this (Elements, I forget the creator), but the link 404s on the shop. I think a dedicated device would be an amazing addition (e.g. a new Spider), even if it could only split in two. Needing to chain is fine - again, with small pieces you can do anything, even if it takes a bit more work.

While I'm talking about Spider, a switch on the back to invert the polarity of the audio splitter would be a nice addition. The invert button on the mixer channel is great, but you can't put one of those in a Combinator (and it can only be accessed from the mixer). You can use Thor as well, but that feels like a backdoor secret, and it's also like using a sledgehammer to put a tack into a cork board - way overkill. Of course, this is a minor issue since it *can* be done. It's just not very discoverable.

2) Mid/side splitter

There's a factory-provided Combinator for this, but unless you know it's there, you don't know about it. A dedicated Spider merger/splitter just for mid/side would be a benefit, but since the Combinator exists it's probably not the biggest deal (there are also at least two free Rack Extensions for this, so it's possible to make Combinator patches that do m/s processing).

3) A transient/sustain splitter

I couldn't find anything free in the shop or anything online on how to do this (though I admit I didn't look for ages). That made this the worst of the three - no partial solution, no hidden Combi, no nothing. So I went about thinking, and I came up with an idea that gets me most of the way there (perhaps a "good enough" solution given that it uses only stock devices, i.e. it's free).

First I needed an envelope follower and a way to get that envelope as a CV signal. Sweeper works for this: you can set an attack (how quickly it follows the actual envelope) between 0.8 ms and 81 ms, and a release between 8 ms and 808 ms. If your release is fast enough, this envelope follower lags behind the actual envelope by the attack time, so you kinda sorta get an envelope that describes the sustain part of the signal.

You can isolate that part of the signal by using this envelope to shape the audio - i.e. use the CV signal as a voltage-controlled amplifier. Synchronous works well here: it has a Master Level CV in, and its Master Level defaults to 0.0 dB. So reset the device, plug the follower CV from Sweeper into the Master Level CV in on Synchronous, and you have your "sustains."

How do you get the transients? Take the sustains out of the original signal by inverting the sustains and merging them with the original audio (using Thor). If you use a Spider to split the sustains before inverting, you now have two separate stereo signals representing an estimate of the transients and an estimate of the sustains. You can then add whatever effect you'd like to whichever you'd like (e.g. add a small amount of reverb to just the sustains to keep the transients a bit thumpier while adding "glue"), and merge them back together with a Spider or mixer to get your new single stereo signal.
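For anyone who wants to see the idea in the abstract, the chain above can be sketched as a rough numerical model (Python rather than Reason: a one-pole attack/release follower is a crude stand-in for Sweeper, a plain multiply stands in for Synchronous's Master Level CV, the subtraction stands in for Thor's inverted merge, and the time constants are just illustrative):

```python
import math

def envelope_follower(x, sr, attack_ms=5.0, release_ms=50.0):
    """One-pole attack/release follower on the rectified signal
    (a rough stand-in for Sweeper; the times are assumptions)."""
    a = math.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = math.exp(-1.0 / (sr * release_ms / 1000.0))
    env, level = [], 0.0
    for s in x:
        s = abs(s)
        coeff = a if s > level else r   # fast-ish attack, slower release
        level = coeff * level + (1.0 - coeff) * s
        env.append(level)
    return env

sr = 48000
n = sr // 10
# decaying 220 Hz "pluck": sharp attack, exponential sustain tail
signal = [math.exp(-40.0 * i / sr) * math.sin(2 * math.pi * 220 * i / sr)
          for i in range(n)]

env = envelope_follower(signal, sr)
sustain = [s * e for s, e in zip(signal, env)]        # follower used as a VCA
transient = [s - u for s, u in zip(signal, sustain)]  # original minus sustains

# the split nulls by construction: transient + sustain == original
assert all(abs(t + u - s) < 1e-12 for t, u, s in zip(transient, sustain, signal))
```

Note that the null test passes by construction here (transient is literally defined as signal minus sustain), which matches what I observed in the rack - nulling proves the subtraction is right, not that the sustain estimate itself is right.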

But I don't feel this is a perfect solution. If you add no effects and invert the joined signal, it nulls out, so at least I feel I did that part right. But adding any amount of gain to just the transients or sustains sounds really off, completely not what I expect. And the soloed sustains sound louder overall than the full signal, so I definitely feel like I messed up somewhere. It could just be a limitation of this approach to splitting the transients from the sustains, given that it's only an estimate - and that's what I'm hoping to talk about here. Does anyone know what might be going on? What about any improvements I can make to this design - is there a better envelope follower? Or perhaps a different way to make a transient splitter altogether with the devices available?

I attached my "beta" combinator patch to this post. None of the front panel buttons/etc do anything - I'm still trying to prove it's an idea worth pursuing first. I tried to label all the devices in a helpful way though, and there's a bypassed (by default) reverb that works on just the sustains if you want to see an example of what it would do and sound like.

Thanks!
You do not have the required permissions to view the files attached to this post.

Benedict
Posts: 2747
Joined: 16 Jan 2015
Location: Gold Coast, Australia

Post 22 Dec 2021

I get that audio atom-splitting is all the rage at the moment, but I question whether it is really a great plan: while it can seem to solve one problem, it creates a whole lot of others that are less natural than the solution already to be had - as you are discovering. This is why it is often better to go back to the real basics and find a workable solution that feels natural within the actual mix. Often that means looking in the opposite direction, or doing several small but cumulative things that deliver the story better than a head-on attack that breaks the sense of magic that mixed music needs.

:-)
Benedict Roff-Marsh
Completely burned and gone

Sengin
Posts: 6
Joined: 21 Dec 2021

Post 22 Dec 2021

That depends on what you want to do. If you are looking to create something natural, then sure. But if you are just looking to experiment or create something neat, why limit yourself? Sound design is essentially limitless, so refusing to play around with something only hurts you. What if I want just my sustains' side channel to be slightly emphasized, to evoke a feeling of unease or floating? Sure, there are other ways to do that, but maybe this is the one that suits my goal best. Or not - but the only way to know for sure is to try it and see what happens.

If something doesn't sound good, then obviously don't use it. But this would be just another tool - just because Scream exists doesn't mean you need to use it on every instrument.

Benedict
Posts: 2747
Joined: 16 Jan 2015
Location: Gold Coast, Australia

Post 23 Dec 2021

Oh I do use Scream4 on many, many, many things. A client who sits with me as I mix jokes about how I scream at everything he does.

In no way did I paint with such a black-and-white brush as to say it could/should never be done. But I did indicate that with all of these things there is often a worrying tradeoff. Straight compression has a tradeoff, but its process mirrors how the human experience of sound works, so as a solution it generally works very well.

However, the modern breed of slice-it-up, poke-it, glue-it-back-together tools raises a set of issues that are very different from the way sound works naturally for us. This can be great if used for an effect that otherwise is non-existent - Autotune for Cher on "Believe", like a Vocoder only not. Really very cool. It feels very natural in that situation.

But too often these things are used to solve the wrong problems: taking singers who are too lazy to learn their craft and pretending they are good, or worse, taking adequate singers and making them and their songs sound like shite with all humanity stripped out. Or the guy I encountered recently who was ever so proud telling us how he took old crowd recordings of Elton John concerts and was using piles of spectral processing to make them as pristine as studio recordings, when in reality they sounded terrible - like Elton was put through a cheese grater and reassembled with crazy glue. Fine if that were just some lone loony, but it is not. It is terribly common that people who could have made a record worth hearing destroyed their own story with processing that hid what they were supposed to be there to do. Sad.

Now I have not heard your thing. I only go on what you said - and that is that you have hit some issues where the solution creates more problems. Ok that may lead you to the next wave of interesting processors, or it may well lead you to a dead-end (my worry and warning). Only you can know that. Hating on someone who takes time to honor you and your process by giving their time and experience is not a cool use of anything methinks ;-)

All the best and I will listen if/when you show your work and what it can do with an open mind for what new "natural" results it can bring for what I do.

:-)
Benedict Roff-Marsh
Completely burned and gone

Sengin
Posts: 6
Joined: 21 Dec 2021

Post 23 Dec 2021

Benedict wrote:
23 Dec 2021
and that is that you have hit some issues where the solution creates more problems.
I think you are misunderstanding what I'm saying. I'm saying that I was attempting to do this, but am pretty obviously doing something wrong (and therefore that I do not actually have a solution). It is probably wrong because 1) even a very slight gain doesn't sound like gain, and 2) soloing the sustains is louder than combining. All I'm trying to do is come up with a way to split transients and sustains into separate signals so that anyone wanting to experiment with it can do so without needing to spend more money. I was hoping to use this community for help here, as a lot of you have been around the block for a while and might know what I'm doing wrong or have ideas for improving it. I am not trying to say I came up with a solution that works or some new processing technique, and that everybody should use it.

As for showing my work, I included the combinator with my first post (which includes a reverb device you can switch from bypass to on to just affect the "sustains"). I'm not really sure what you are meaning here.

Benedict
Posts: 2747
Joined: 16 Jan 2015
Location: Gold Coast, Australia

Post 23 Dec 2021

Thank you.

I am trying to engage with you. The path I am coming in on may be different from the one you expected people to be on, but such is the joy of living. I often try to get people to stop looking at their situation head-on - the obviously "sensible" way - and to come at things sideways, the way life often really works. Go listen to Garth Brooks' "Thank God For Unanswered Prayers" for a hint on that. Maybe coming at it sideways will help you find what you really need here.

Part of my point is that I think what you are trying to do may just be broken from the start: if it is supposed to be like those creepy new (yes, judgy words) splitter EQs, those are done via FFT and/or IIR, whereas what happens in the Rack is pure old-school, simple, real-world audio physics.

Rather than asking people to d'load and stumble about with a Combi that you admit doesn't really work, why not make a screencast presentation showing a) what you are trying to do, b) why, c) your method, and d) results so far. That puts you in control of the dialog you are hoping to have. It may also just help you to put your whole bag into a box - so to speak - which then helps you to decide what to do next.

:-)

p.s. OBS is your friend - once you swap Reason to use the Default System Audio Driver that is.
Benedict Roff-Marsh
Completely burned and gone

guitfnky
Posts: 4182
Joined: 19 Jan 2015

Post 23 Dec 2021

I'm not sure what a screen capture would add that hasn't already been clearly stated. the end goal is a combi that splits the transient and sustain portions of a signal so they can be processed separately. easy peasy from an explanation standpoint—difficult from the standpoint of getting it working.

I did something sort of similar in a recent combi, but instead split esses from vocal tracks so you could add high frequencies in an EQ only to the non-esses. also, I very much cheated using Selig's de esser to handle the identification of the esses.

anyway, I digress...I think this is a very worthwhile goal, and it sounds like you might be on the right track to a workable solution. I won't have time to check out the combi for a few days, but if you or someone else hasn't figured it out, I'm definitely curious to check it out.

one thing that might help is to consider also using free REs to help expand available options when hitting roadblocks with stock devices. one that springs to mind is the Morfin XF crossfader. it has polarity switching built in, and I could maybe see using the crossfade functionality itself to help with mixing the attack and sustain portions of the signal.
I write good music for good people

https://slow-robot.com/
https://slowrobot.bandcamp.com/

deeplink
Posts: 734
Joined: 08 Jul 2020
Location: Dubai / Cape Town

Post 24 Dec 2021

Sengin wrote:
21 Dec 2021

But I don't feel this is a perfect solution. If you add no effects and invert the joined signal, it nulls out so at least I feel I did that right. But adding any amount of gain to just the transients or sustains sounds really off, completely not what I expect. And listening to the solo sustains has a louder overall loudness than listening to the full signal so I definitely feel like I messed up somewhere. It could just be a limitation with this approach to split out the transients from a sustain and how it's just an estimate, and that's what I'm hoping to talk about here. Does anyone know what might be going on? What about any improvements I can make to this design - is there a better envelope follower? Or perhaps a different way to make a transient splitter altogether with the devices available?
Thanks!

You are on the right track. Using Sweeper/Synchronous is an interesting approach. Looking at your device, I think all you need is a few more mix channels and copies of the signal so that it stays at the right levels when you mute.

Another way? Well, I did something similar for the 1U-FX Series, where I included a solo transient function in the Transient Shaper unit. Instead of an envelope follower, I used Kong's transient shaper.

I've slightly modded the above-mentioned device to include independent mute/solo for the attack and decay (sustain), and included sends for each one.
You do not have the required permissions to view the files attached to this post.
Get Combinators, ReFills and RS Giveaways at the Shared GoogleDrive: [deeplink] Open RS-Project

selig
RE Developer
Posts: 9994
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 24 Dec 2021

I’ve done this since I first got Reaktor many years ago. I have a dirt-simple approach in which I use an envelope follower to pan the signal left with soft signals and right with loud. That’s it: now you have all the soft signals on one channel and all the loud on another, with perfect reconstruction when summed back together.
If there is a limitation to doing the same in Reason, it’s that CV is not at audio rate and there are delays when using CV between multiple devices.
It’s complicated, but you can use the pan in a Line Mixer to “split” the signal (complicated when routing stereo signals), since it has a 6 dB curve, which is what you want for this task. You’ll also need look ahead to compensate for the CV delays/response.
I do like using Sweeper for the control signal, even over my previous favorite MClass comp.
FWIW, one of my next device ideas is for a dynamic splitter which works like a frequency splitter/crossover but across the dynamic range. It will have two to four bands and two crossover modes as planned (if there’s interest).
Selig Audio, LLC

Sengin
Posts: 6
Joined: 21 Dec 2021

Post 25 Dec 2021

Thanks everyone for the discussion :)
guitfnky wrote:
23 Dec 2021
one thing that might help is to consider also using free REs
Agreed! I don't have problems with using free REs, but I do prefer using stock devices over them if possible (it is fewer steps for the end user). I'm sure there are other reasons though - like I'm sure Thor uses CPU even when using it solely as an audio inverter, and a dedicated free RE would be less intensive. But another downside is that what is free now may not be free in the future, or even exist (like Elements - the frequency crossover/splitter RE that you can't get to on the store anymore :().
guitfnky wrote:
23 Dec 2021
using the crossfade functionality itself to help with mixing the attack and sustain portions of the signal
What benefit does the Morfin XF Crossfader give over using e.g. a 6:2 mixer? I suppose it would have a better balance option, since e.g. a 6:2 mixer has unity gain at 100?
deeplink wrote:
24 Dec 2021
You are on the right track. Using sweeper/synchronous is an interesting approach. Looking at your device I think all you needed is few more mix channels and copies of the signal so that it stays at the right levels when you mute.
That's what I'm struggling with - how do I know what levels to set? Obviously I can use my ears, but there should also be a math/logic problem to solve to figure out how much level each signal needs. That's where my knowledge breaks down - I'm simply subtracting one waveform from another, so I don't understand what I would need to do to bring the level back to what's expected. Do you know any of the details here? Or is it something like the line mixer adding gain or normalizing volume?
deeplink wrote:
24 Dec 2021
I've slightly modded the above mentioned device to include independent mute/solo for the attack and decay(sustain), and included sends for each one.
Sweet - thanks! I'll definitely take a look at this.
selig wrote:
24 Dec 2021
If there is a limitation of doing the same in Reason it’s that CV is not at audio rate and there are delays when using CV between multiple devices.
As someone new to Reason (well, I used it briefly back in the Reason 3.0 days...), I didn't know this. That kinda sucks... I was wondering about this since I started with a different approach - using two different CV curves, one each for transient/sustain (instead of subtracting one audio signal from another). I couldn't get it to null, but it was close. This small discrepancy might be because of the delay and resolution difference.
selig wrote:
24 Dec 2021
Its complicated but you can use the pan in a Line Mixer to “split” the signal (complicated when routing stereo signals), since it has a 6 dB curve, which is what you want for this task.
Can you elaborate a bit on this? I think you think I know more than I actually know :). I know a bit about DSP, but not how that translates to Reason devices.
selig wrote:
24 Dec 2021
You’ll also need look ahead to compensate for the CV delays/response.
Are there hard numbers for this? E.g. is the CV delay per device always X milliseconds? How can I compensate for this delay? I don't know of a way to tell Reason that a signal path should be treated with any amount of latency. I would like to add this to try something else: working on a delayed envelope. The reason is that even though the envelope reacts more slowly than the audio signal, based on the amount of attack in Sweeper, the "origin" of the audio still starts at zero. So the 'slope' of the attack changes, but both originate at sample 0. That leaves part of the attack in the sustains in my current approach. What I'd like to do is something like using a DDL with a low delay (no feedback and pure wet) to capture this part of the attack as well (essentially padding the attack envelope with zero samples). But without being able to tell Reason I want this delay to be part of the latency, I would just end up delaying everything by a millisecond or two. Is there a way to add latency to a signal path without developing your own RE? And is there a better (higher-resolution) delay than the DDL (which has a resolution of 1 ms and a lowest delay of 1 ms)?
selig wrote:
24 Dec 2021
FWIW, one of my next device ideas is for a dynamic splitter which works like a frequency splitter/crossover but across the dynamic range. It will have two to four bands and two crossover modes as planned (if there’s interest).
That actually sounds quite awesome, especially from a sound design perspective! I enjoy finding all the ways to split a signal - operating on individual "channels" gives so many interesting possibilities.

selig
RE Developer
Posts: 9994
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 28 Dec 2021

WALL OF TEXT ALERT!!!
Sengin wrote:
25 Dec 2021
selig wrote:
24 Dec 2021
Its complicated but you can use the pan in a Line Mixer to “split” the signal (complicated when routing stereo signals), since it has a 6 dB curve, which is what you want for this task.
Can you elaborate a bit on this? I think you think I know more than I actually know :). I know a bit about dsp processing, but not how that translates to reason devices.
selig wrote:
24 Dec 2021
You’ll also need look ahead to compensate for the CV delays/response.
Are there hard numbers for this? E.g. the cv delay per device is X milliseconds always? How can I compensate for this delay? I don't know of a way to tell reason that a signal path should be treated with any amount of latency. I would like to add this to try something else: work on a delayed envelope. The reason being is that even though the envelope reacts slower that the audio signal based on the amount of attack in Sweeper, the "origin" of the audio still starts at zero. So the 'slope' of the attack changes, but they both originate at sample 0. That leaves part of the attack in the sustains on my current approach. What I'd like to do is something like use a DDL with a low delay (no feedback and pure wet) to capture this part of the attack as well (essentially padding the attack envelope with samples of 0). But without being able to tell reason I want this delay to be part of the latency, I would just end up delaying everything by a millisecond or two. Is there a way to add latency to a signal path without developing your own RE? And is there a better (higher resolution) delay than the DDL (which has a resolution of 1 ms and a lowest delay of 1 ms)?
OK, to clarify: what you are trying to do (I'm assuming) is to take a signal and split it into two paths that, when recombined, sound exactly like the original - UNTIL you change something.
Just like with frequency splitters/crossovers, ideally the bands/channels should recombine to the original signal with no changes whatsoever.
It's possible to use this approach for different effects. For example, I use the "panning" approach I previously mentioned in the Selig DeEsser to split the signal into two separate channels that recombine into an exact copy of the original. In the case of the DeEsser, sibilance detection causes the input to be switched to the opposite channel, and no detection leaves it alone in the original channel.
Imagine the conveyor belts that sort packages in large sorting facilities. To simplify, let's say all the boxes above a certain size are sent down one belt, and the smaller boxes down another. This would be a "one to many" splitter, but rather than each split being simultaneously available as in a "mult" type splitter, this design only allows one path or the other - not both (you can't send a package to TWO locations!).
Back to the audio example and building on that concept: using a panner instead of a hard switch DOES allow you to send one signal to the "in-between" parts of the two channels, just like panning a mono signal to any position between left and right. In this case our package analogy breaks down and we need something more like water flow. But let's stick to audio…
Using a panner to send one signal to two places works great for a dynamic splitter. All we need is to measure the original signal, and when it is at its lowest level we pan it to the first channel, and when it is at its highest level we send it to the second channel. Using a simplified example of a 100 dB dynamic range, everything at -100 dB is sent to the first channel, and everything at 0 dB (loudest) is sent to the other. A signal at -50 dB would therefore be sent equally to both channels. Making sense so far?
Here's where we get more specific. Because we do not want to alter the signal as we split it, the panner used MUST have a pan law of -6 dB, meaning in the center position it must lower the signal by 6 dB for both channels. Why 6 dB? Because this is the amount of additional gain you get when duplicating a signal. For example, if you add a parallel channel in Reason, you add 6 dB of overall gain to that signal. This should be compensated by pulling both faders down 6 dB, though most folks do not do this - and that makes things sound better because "better sounds louder" - but really, it's not better, it's just louder!
So now, using your envelope follower to measure the loudness, and using the envelope CV to "convert" loudness to panning, you have a simple dynamic splitter which is guaranteed NOT to change the level of the signal no matter the input level. You don't even need to null test it; you already know it CANNOT change things, by design. This is why the design is so simple and elegant - it does what it says (splits the signal into loud/soft channels) without needing any calibration or level adjustment whatsoever. I'm a huge fan of making things as simple as possible, but not simpler, to quote Einstein.
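The "guaranteed reconstruction" property is easy to verify in a toy model (a rough Python sketch, not Reason itself: a linear-taper pan whose two gains always sum to exactly 1 - i.e. each gain is 0.5, or -6 dB, at center - with a crude rectified envelope standing in for the follower):

```python
import math

def dynamic_split(x, env):
    """Pan-style dynamic splitter with a -6 dB (linear-taper) pan law:
    the two channel gains always sum to exactly 1."""
    soft, loud = [], []
    for s, e in zip(x, env):
        g = min(max(e, 0.0), 1.0)   # envelope mapped to a pan position 0..1
        loud.append(s * g)          # loud moments routed to the "loud" channel
        soft.append(s * (1.0 - g))  # quiet moments routed to the "soft" channel
    return soft, loud

# decaying 110 Hz tone as a stand-in input
sr = 48000
signal = [math.sin(2 * math.pi * 110 * i / sr) * math.exp(-20.0 * i / sr)
          for i in range(sr // 10)]
env = [abs(s) for s in signal]  # crude stand-in for an envelope follower

soft, loud = dynamic_split(signal, env)

# reconstruction is exact no matter what the envelope does:
# soft + loud == original, by construction of the pan law
assert all(abs(a + b - s) < 1e-12 for a, b, s in zip(soft, loud, signal))
```

The key design point is in `dynamic_split`: however wrong the envelope estimate is, `g + (1 - g)` is always 1, so the recombined output cannot differ from the input - no calibration needed.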
As for look ahead, you can use a technique similar to trimming samples before there were visual editors: find the start of the sample by moving the sample END point to the start, then moving it back while playing the sample until you hear the first "blip" of the attack - which becomes the new start position. In this case, we solo the soft channel, and if we hear any attack (peak/transient) we know we need more delay. What happens is that the CV is delayed, so it switches "late" compared to the audio path. This causes the peak to incorrectly be sent down the soft channel instead of the loud channel. So you increase the delay one ms at a time until the peak disappears from the soft channel.
Better still, use a single drum sample and record both the soft and loud audio channels to new audio tracks, which will allow you to measure the appropriate delay of the CV signal and apply a similar delay to the audio path to "align" both signals. In Reason this delay is often longer than 10 ms, which is pretty substantial - ideally this should be possible in a few ms or so in a commercially released product.
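The look-ahead trick can also be sketched in code (a toy Python model, assuming a 10 ms CV lag for illustration; the `delay` helper is a hypothetical stand-in for a 100%-wet, no-feedback delay line in the audio path):

```python
def delay(x, samples):
    """Pure delay line: pad with zeros up front, keep the length the same
    (a stand-in for a DDL set to 100% wet with no feedback)."""
    return [0.0] * samples + x[:len(x) - samples] if samples else list(x)

sr = 48000
cv_latency = int(0.010 * sr)  # assume a 10 ms CV lag, as described above

# an impulse "transient" at sample 1000
signal = [0.0] * 4800
signal[1000] = 1.0

# the CV arrives late: the signal's envelope, delayed by the CV latency
cv = delay([abs(s) for s in signal], cv_latency)

# naive split: the CV switches late, so the peak leaks into the soft channel
soft_naive = [s * (1.0 - g) for s, g in zip(signal, cv)]
assert soft_naive[1000] == 1.0  # transient wrongly sent down the soft path

# "look ahead" (really look-behind): delay the AUDIO by the same amount
# so the control signal and the audio line up again
aligned = delay(signal, cv_latency)
soft = [s * (1.0 - g) for s, g in zip(aligned, cv)]
loud = [s * g for s, g in zip(aligned, cv)]
assert soft[1000 + cv_latency] == 0.0  # the peak now lands in the loud channel
```

In the rack the calibration is done by ear (increase the audio delay until the transient disappears from the soloed soft channel), which is exactly what the two asserts are checking numerically here.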
Selig Audio, LLC

Sengin
Posts: 6
Joined: 21 Dec 2021

Post 31 Dec 2021

selig wrote:
28 Dec 2021
WALL OF TEXT ALERT!!!
Oh! I think the implication here is that you need to create your panner twice - split your stereo signal into two mono channels (L and R) and then run each of those through an envelope follower. Then use that envelope follower as CV to control pan (and not volume like I was doing) on a line mixer, as the line mixer follows the 6 dB pan law (do both the 6:2 and 14:2 mixers follow this? I assume so...). Hopefully the CV works as intended - e.g. Synchronous outputs unipolar CV for the envelope follower output, so we would hope that 0 = -127 pan on the mixer, 0.5 = 64 pan (centered), and 1.00 = 127 pan. You are then left with 4 signals - L transients, R transients, L sustains, and R sustains (ish - as accurate as the envelope follower approach gets). Is this what you were meaning?

As for the lookahead, I'm still not following (pun intended). When operating in the rack, you only have access to the current sample, right? How can you tell Reason "I need a latency of 10 ms / I need to buffer 10 ms of samples" through CV cables and devices? Could you do some crazy stuff like inserting a maximizer with 4 ms lookahead and no gain into your device chain somewhere to add this latency? Then stack X of these to serve your needs (e.g. 4 ms x 3 = 12 ms)? I definitely don't see how you can do the 'probes' for signal to determine delay without introducing your own Rack Extension/VST. And yeah, 10 ms+ seems pretty long for a DAW - I would have assumed it was 'realtime' (no delay) if you hadn't mentioned it.

Another thing I don't understand is how the panning approach would be more accurate than the volume approach. Shouldn't they both yield the same results as both use the envelope follower as the source of the 'split'? Or are you just saying that it's simpler this way as you only need a mixer and 2 envelope followers, rather than an audio inverter?

Thanks for walking me through this! :)

selig
RE Developer
Posts: 9994
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 31 Dec 2021

Sengin wrote:
31 Dec 2021
selig wrote:
28 Dec 2021
WALL OF TEXT ALERT!!!
Oh! I think the implication here is that you need to create your panner twice - split your stereo signal into two mono channels (L and R) and then run each of those through an envelope follower. Then use that envelope follower as CV to control pan (and not volume like I was doing, ) on a line mixer, as the line mixer follows the 6dB pan law (do both the 6:2 and 14:2 mixers follow this? I assume so...). Hopefully the CV works as intended - e.g. Synchronous outputs unipolar CV for the envelope follower output so we would hope that 0 = -127 pan on the mixer, 0.5 = 64 pan (centered), and 1.00 = 127 pan. You are then left with 4 signals - L transients, R transients, L sustains, and R sustains (ish, as accurate as the envelope follower approach works). Is this what you were meaning?

As for the lookahead, I'm still not following (punintended). When operating in the rack, you only have access to the current sample, right? How can you tell Reason "I need a latency of 10ms/I need to buffer 10ms of samples/I need 10ms of sample buffer to work" through cv cables and devices? Could do you some crazy stuff like insert a maximizer with 4ms lookahead and no gain into your device chain somewhere to add this latency? Then stack X of these to serve your needs (e.g. 4ms x 3 = 12ms)? I definitely don't see how you can do the 'probes' for signal to determine delay without introducing your own rack extension/vst. And yeah, 10ms+ seems pretty long for a DAW - I would have assumed it was in 'realtime' (no delay) if you hadn't mentioned it.

Another thing I don't understand is how the panning approach would be more accurate than the volume approach. Shouldn't they both yield the same results as both use the envelope follower as the source of the 'split'? Or are you just saying that it's simpler this way as you only need a mixer and 2 envelope followers, rather than an audio inverter?

Thanks for walking me through this! :)
When processing stereo signals it’s common to use a mono control signal. This is done because if you change the gain of one side of a stereo signal and not the other (or by different amounts), the center image will ‘wander’ (move around), which is not what you likely expect to happen. You definitely need two panners, because you need two channels for stereo signals. For unipolar sources such as envelope followers, you need to start with the pan knob all the way left, and test the system to make sure a full-code signal fully pans to the right. It’s handy to have a CV tool that shows the exact value, to fine-tune the settings.

Look ahead is actually ‘look behind’. By delaying the main audio path, you effectively place the control ‘ahead’ of the audio path. In Reason, you need a delay with millisecond accuracy AND stereo operation. The DDL-1 is not stereo (observe the routing icons on the back), so you need two for stereo.

Using level control requires precise calibration so that there is never a discrepancy between the values. With a panner this happens automatically, and in fact you cannot ever have more or less total signal at the output. It's 100% gonna work no matter what you do, and it's simpler, which is always desirable IMO. I didn't look at your patch, so I can't comment on the specifics other than that it sounded overly complicated to me.
As there is only one envelope to follow (the original signal) you only need one envelope follower.
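A tiny sketch of why the panner approach self-calibrates (my illustration, assuming a linear pan law; a real panner may use an equal-power law instead): the two outputs get complementary gains from one control, so they always sum back to the input exactly.

```python
# Sketch: splitting with complementary gains driven by ONE control signal.
# Assumes a linear pan law (real panners may differ): the two outputs
# always null back against the input, so no calibration is needed.

def pan_split(x, control):
    """control in [0, 1]: 0 = all to 'sustain', 1 = all to 'transient'."""
    g = min(max(control, 0.0), 1.0)
    transient = x * g
    sustain = x * (1.0 - g)
    return transient, sustain

x = 0.8
for c in (0.0, 0.3, 0.7, 1.0):
    t, s = pan_split(x, c)
    assert abs((t + s) - x) < 1e-12  # outputs always sum back to the input
```

With a volume-based split you'd need two gain stages calibrated to track each other perfectly to get the same guarantee.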
Selig Audio, LLC

FizbanLaPolio
Posts: 12
Joined: 28 May 2018

Post 02 Jan 2022

Maybe I misunderstood something, but is what you want to recreate something like what the Unfiltered Audio G8 Dynamic Gate can do?
https://www.reasonstudios.com/shop/rack ... amic-gate/

It has a threshold that can be played with, and on the back there are two outputs:
- one for the signal the gate lets through (the transients above the threshold)
- one for the rejected part of the signal (what was under the threshold, so your sustain)

jam-s
Posts: 2008
Joined: 17 Apr 2015
Location: Aachen, Germany

Post 02 Jan 2022

Alternatively it could also be a use case for this RE: https://www.reasonstudios.com/shop/rack ... -splitter/
If you're in Aachen, come and visit us at the Voidspace. ... Pool's closed due to corona.

selig
RE Developer
Posts: 9994
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 03 Jan 2022

Finally getting a chance to check out your Combinator. The first issue I see is that Synchronous adds gain to the signal depending on the input, which means that when you mute the 'transients' channel (line mixer channel 1), you hear an increase in level. Why does muting a channel increase the level? Because for the transient channel you invert the signal and add it to a non-inverted signal. So while the final result is unity, changing something (which is the whole reason for this device) causes unexpected results. This means turning down the transient channel even slightly causes the rest of the signal to INCREASE, as one example of an unexpected result.
Ideally, if you mute the sustain channel you should still see the same peak level as before since the peaks are in the channel that's not muted. Alternatively, if you mute the transient channel the overall peak level should go DOWN since you have removed the loudest parts of the original signal.
When you listen to just your transients channel you see a drop of 6 dB, and the sound is not at all transient - it sounds 'soft', and indeed there is a smooth rise over many milliseconds. So the transient channel doesn't have any transients, yet the sustain channel almost nulls against the original input.
Bottom line, it's time to re-think what you're trying to achieve and find a better (ideally more simple) approach.
And I would begin by asking what EXACTLY do you want the results to be?

Fun exploration that works using polarity regarding transients: start with two identical compressors (MCLass works great for this) each with different attack times and all other settings the same. Invert the polarity of the compressor with the slower attack and you get the transients isolated. Add this signal to the original and you increase the transients. Do the same with another pair of compressors, but this time have one release long and the other short. This gives you the sustain. Add this signal to the original to increase sustain. You've just made a transient shaper!
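Here's a numerical sketch of that compressor-pair trick (my own simplification, using one-pole gain smoothing rather than the actual MClass algorithm). Two identical compressors differ only in attack time; the difference of their outputs (i.e. inverting one and summing) is non-zero only where the slow-attack compressor let the burst through - the transient. All names here are illustrative.

```python
# Sketch of the compressor-pair polarity trick (simplified one-pole detector,
# not the MClass algorithm). Identical settings except attack time; the
# difference of the two outputs isolates the transient (polarity depends on
# which output you invert before summing).

def compress(samples, threshold, ratio, attack_coeff):
    env, out = 0.0, []
    for x in samples:
        level = abs(x)
        # one-pole attack smoothing, instant release (kept minimal on purpose)
        env = env + attack_coeff * (level - env) if level > env else level
        gain = 1.0
        if env > threshold:
            compressed = threshold + (env - threshold) / ratio
            gain = compressed / env
        out.append(x * gain)
    return out

signal = [0.1, 0.1, 1.0, 1.0, 0.2, 0.1]          # a burst (transient), then decay
fast = compress(signal, 0.2, 4.0, 0.9)            # fast attack: clamps the burst
slow = compress(signal, 0.2, 4.0, 0.1)            # slow attack: lets the burst through
transients = [s - f for s, f in zip(slow, fast)]  # one inverted + summed = transients
```

During the steady-state samples the two outputs are identical and cancel; only the burst survives, which is exactly the isolation selig describes.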
Cutting to the chase, here's one I built years ago:
https://www.dropbox.com/s/xflkvte1y2vra ... b.zip?dl=0
Selig Audio, LLC

selig
RE Developer
Posts: 9994
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 03 Jan 2022

jam-s wrote:
02 Jan 2022
Alternatively it could also be a use case for this RE: https://www.reasonstudios.com/shop/rack ... -splitter/
Both the gate and the splitter use a threshold-based approach, which means a fairly quick transition as levels increase. Ideally the transition should be adjustable, from a gradual transition across the entire dynamic range down to a transition across just a few dB. If, OTOH, you measure the transition in 'time' (ms, for example), it's not level dependent but instead involves some sort of smoothing over time. This means that with long transition settings, a very short/sharp transient may peak well above the threshold but never actually end up in the desired output channel, due to the slow transition response. Slowing the transition time can also cause transients to appear in the 'sustain' channel, so you'll need to keep the transition times (or attack times on the gate) as low as possible to track transients accurately, which may not always give the desired results.
For that reason I prefer an approach that derives the output based completely on level, not on time: if the signal is above the threshold for even a few ms, it should appear in the associated output channel in all cases, no matter the settings. Hopefully my continued ramblings are making some sense… ;)
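A small sketch of the failure mode described above (my illustration; the one-pole smoother stands in for a slow, time-based transition). A single-sample spike peaks well above the threshold, but the smoothed detector never crosses it, so a time-based splitter would route the transient into the 'sustain' channel.

```python
# Sketch: time-based (smoothed) detection can miss a short transient entirely,
# while pure level-based detection always catches it. Illustrative code only.

def smoothed_env(samples, coeff):
    env, out = 0.0, []
    for x in samples:
        env += coeff * (abs(x) - env)  # one-pole smoothing (a "slow transition")
        out.append(env)
    return out

signal = [0.0, 0.0, 1.0, 0.0, 0.0]  # single-sample spike, well above threshold
threshold = 0.5
env = smoothed_env(signal, 0.2)

level_hits = any(abs(x) > threshold for x in signal)  # level-based: spike seen
smooth_hits = any(e > threshold for e in env)         # smoothed: never crosses
```

Here `level_hits` is true but `smooth_hits` is false: the smoothed envelope only ever reaches 0.2, so the spike never registers.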
Selig Audio, LLC
