Parallel channels in the rack
Some of you guys are still misunderstanding what Selig and Normen are talking about. That first video shows a parallel channel WITHOUT compensation on the original sound. No one is arguing that they can't hear phasing issues in that scenario. What they are saying is that they can't hear the small delay when a kick drum is delayed by half a millisecond compared to the other drums. The guy who posted the video did something completely different: he delayed the parallel kick drum channel and mixed it with the original kick. Of course you can hear that. But that's not what anyone was arguing about until someone misread the posts and went off on a tangent.
Guess I wasn't being clear after all - we are talking about completely different things here. You keep talking about parallel channels, to which I've already agreed - but that's not what "I" was talking about. Sorry for any confusion on my part, but I still don't think you're hearing what I'm saying.
submonsterz wrote: It matters though - when you are paralleling something you need punch and clarity, and latent compression on a parallel dulls the sound. It makes a huge difference.
selig wrote: I said what I said, but you heard something different - you thought I said I can't hear the PHASE CANCELLATION between TWO otherwise identical tracks when one is delayed by 20 samples - I can, of course.
submonsterz wrote: Hmm, here's 2 and then 20. I can hear and see the difference as plain as daylight... at 44.1.
selig wrote: If anyone can hear/feel 20 samples of latency in this scenario, then they are a better listener than I am. That amount of delay is at MOST half a millisecond (at 44.1 kHz), and less than that if using higher sample rates.
Sent from my iPad using Tapatalk
If you can't, then wow.
What I actually said was that in the specific case that is being discussed in THIS thread, I can't. That case is where the bass track is 20 samples away from the kick track. It's a feel thing, and I simply can't tell when one track out of many is delayed by 20 samples (half a millisecond) compared to the rest. Once you get up to 5-10 ms or more, things can become easier to hear - but 1/2 ms? No way, I'm not afraid to admit it!
Hopefully I'm being more clear now!
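For reference, selig's numbers are easy to check - a quick Python sketch of the samples-to-milliseconds conversion, using only the sample counts and rates mentioned in this thread:

```python
def samples_to_ms(samples, sample_rate_hz):
    """Convert a delay expressed in samples to milliseconds."""
    return samples / sample_rate_hz * 1000.0

# 20 samples at 44.1 kHz - the delay being discussed here
print(samples_to_ms(20, 44100))  # ~0.4535 ms, i.e. under half a millisecond
print(samples_to_ms(20, 96000))  # ~0.2083 ms - even less at higher rates
```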
And it also makes a difference with REs that do beat repeats etc. - if you're running them from a latent device they don't catch the start transient.
If you didn't correct the bass and you were running the three through certain plugins like I mentioned - beat repeaters etc. - you'd get the bass on target and the rest not caught.
I could go on but I'll leave it there...
Selig Audio, LLC
^^THIS^^
kloeckno wrote: Some of you guys are still misunderstanding what Selig and Normen are talking about. That first video shows a parallel channel WITHOUT compensation on the original sound. No one is arguing that they can't hear phasing issues in that scenario. What they are saying is that they can't hear the small delay when a kick drum is delayed by half a millisecond compared to the other drums. The guy who posted the video did something completely different: he delayed the parallel kick drum channel and mixed it with the original kick. Of course you can hear that. But that's not what anyone was arguing about until someone misread the posts and went off on a tangent.
Selig Audio, LLC
Even when the signal is not identical, the delay is audible and spoils the transients. Just give a parallel channel a compressor, filter, EQ, or anything else that introduces delay, and you have to compensate to get a good sound. I know the VMG-01 can do it, but it's a damned nuisance. This extension is not easy to set up - instead of one knob we have arrows to set the delay separately for each digit. Setting it by ear is tiring. I think this is a great oversight with parallel channels and something that greatly discourages me from working in Reason.
normen wrote: Again, we're talking about different things here. Again again, if you have the SAME SIGNAL, i.e. a parallel channel with THE SAME SIGNAL, and introduce only a minute delay, you get phasing issues. This is what everybody here agrees is very audible and a problem. And it's exactly the problem the VMG-01 is made to solve.
If you have TWO SEPARATE SIGNALS, for example a snare drum and a bass drum, or a bass and a bass drum, shifting one or the other by HALF A MILLISECOND will not be audible (except to submonsterz) or even change the "feel" of the part; that only becomes a "problem" when you have 5-10 ms of difference.
The discussion came up because if you have a parallel channel which has e.g. 20 samples of delay (1/2 ms) and use the VMG-01 on the original channel, delaying it by 20 samples as well to avoid the phasing issues, the logical conclusion is that the whole part will now be 20 samples late with respect to the rest of the arrangement. Which, as I and others argue, is not really an issue given such short delays.
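The audible-phasing case described above - the same signal mixed with a 20-sample-delayed copy of itself - is classic comb filtering, and can be sketched numerically. The equal 50/50 mix ratio here is just an assumption for illustration:

```python
import numpy as np

def comb_gain(freq_hz, delay_samples, sample_rate_hz):
    """Gain of y[n] = 0.5 * (x[n] + x[n - d]): a signal mixed equally
    with a delayed copy of itself (an uncompensated parallel channel)."""
    w = 2.0 * np.pi * freq_hz / sample_rate_hz
    return abs(0.5 * (1.0 + np.exp(-1j * w * delay_samples)))

fs, d = 44100, 20
first_notch = fs / (2 * d)               # 1102.5 Hz
print(comb_gain(first_notch, d, fs))     # ~0.0 - complete cancellation
print(comb_gain(2 * first_notch, d, fs)) # 1.0 - full reinforcement
```

Notches land at every odd multiple of fs/(2d), which is why a half-millisecond offset between two copies of the same kick is obvious, while the same offset between a kick and a bass is not.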
If we go back to the golden days of synths, when everything was triggered by MIDI signals, often in MIDI THRU configurations - the original MIDI protocol itself causes almost 1 ms of delay PER NOTE. This can become a (well-known) issue if you have many synths all on one line, but for a "normal" 8-channel production nobody ever sweated over this.
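The MIDI arithmetic above checks out: MIDI 1.0 runs at 31250 baud with 10 bits on the wire per byte (8 data bits plus start and stop bits), and a Note On message is 3 bytes:

```python
BAUD = 31250         # MIDI 1.0 serial rate in bits per second
BITS_PER_BYTE = 10   # 8 data bits + 1 start bit + 1 stop bit
NOTE_ON_BYTES = 3    # status byte + note number + velocity

note_on_ms = NOTE_ON_BYTES * BITS_PER_BYTE / BAUD * 1000.0
print(note_on_ms)    # 0.96 ms - "almost 1 ms per note" on the wire
```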
In the case of completely different sounds the delay can add cool swing, and that's OK, but those are not parallel channels.
Ableton Live Suite 10 / Reason 10 / Windows 10 / Fingers - also 10
One parameter in Reason can't have values from 0-99999 with integer increments; that's why the digits are separate.
michal22 wrote: Even when the signal is not identical, the delay is audible and spoils the transients. Just give a parallel channel a compressor, filter, EQ, or anything else that introduces delay, and you have to compensate to get a good sound. I know the VMG-01 can do it, but it's a damned nuisance. This extension is not easy to set up - instead of one knob we have arrows to set the delay separately for each digit. Setting it by ear is tiring. I think this is a great oversight with parallel channels and something that greatly discourages me from working in Reason.
In the case of completely different sounds the delay can add cool swing, and that's OK, but those are not parallel channels.
Ok, I understand. It's a pity that the SDK doesn't allow it.
normen wrote: One parameter in Reason can't have values from 0-99999 with integer increments; that's why the digits are separate.
Ableton Live Suite 10 / Reason 10 / Windows 10 / Fingers - also 10
- Exowildebeest
- Posts: 1553
- Joined: 16 Jan 2015
It's not just that - ideally, it would have numeric keyboard input.
normen wrote: One parameter in Reason can't have values from 0-99999 with integer increments; that's why the digits are separate.
michal22 wrote: Even when the signal is not identical, the delay is audible and spoils the transients. Just give a parallel channel a compressor, filter, EQ, or anything else that introduces delay, and you have to compensate to get a good sound. I know the VMG-01 can do it, but it's a damned nuisance. This extension is not easy to set up - instead of one knob we have arrows to set the delay separately for each digit. Setting it by ear is tiring. I think this is a great oversight with parallel channels and something that greatly discourages me from working in Reason.
In the case of completely different sounds the delay can add cool swing, and that's OK, but those are not parallel channels.
But really, I'm enjoying the VMG as it is. If it's too much of a nuisance to fix it with the VMG, I'm probably doing something stupidly complex that I'm better off not doing.
I recommend playing with a sampler that can shift samples. I used a Korg Kaoss Pad KP3. For example, in bank A I have my sample. I set up compression and record the result to bank B. Now I have the sample without compression in bank A and with compression in bank B. The sample in bank B is slightly delayed. I turn on the sample-shift function, and with a single knob I move the sample by ear. When it sounds fat, I keep the result and rip that sample to bank C. Everything works live without downtime. That kind of workflow is very pleasant and fast, and I miss it in Reason.
If automatic delay compensation is not possible, I would at least like a solution like this: a (+/-) knob on each SSL mixer channel.
Manual settings also give you more room to create creative effects.
Ableton Live Suite 10 / Reason 10 / Windows 10 / Fingers - also 10
In a nutshell: hearing phase issues <> hearing timing issues
No, you silly! Just for the number of samples as a whole.
normen wrote: For each separate digit? Wouldn't make much of a difference, would it?
Exowildebeest wrote: It's not just that - ideally, it would have numeric keyboard input.
Except "Haas".guitfnky wrote:In a nutshell: hearing phase issues <> hearing timing issues
Haas stated that very short delays are imperceptible as a "delay". They instead contribute to the location of the source when heard in stereo. When heard along with the original they contribute to the timbre/tone of the sound, aka "phasing".
What you can certainly say on a purely technical level is that phase = delay. But just like with anything else, when you move beyond a certain range we humans start to "hear" it differently. Slow a square wave down to 10 Hz and we no longer hear a PITCH, we hear a RHYTHM. Shorten a delay enough and we no longer hear the rhythm created by the delay, we hear a phase issue. Distort a signal enough and we no longer hear that signal, we hear mostly noise. Etc.
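The "phase = delay" point can be made concrete: a fixed time delay is a different phase shift at every frequency (phi = 360 * f * delta_t degrees), which is why a single half-millisecond offset cancels some frequencies completely and barely touches others. A minimal sketch, with the example frequencies chosen here for illustration:

```python
def phase_shift_degrees(freq_hz, delay_s):
    """Phase offset that a fixed time delay produces at a given frequency."""
    return 360.0 * freq_hz * delay_s

half_ms = 0.5e-3  # the half-millisecond delay from this thread
for f in (100, 1000, 10000):
    print(f, phase_shift_degrees(f, half_ms))
# 100 Hz  ->   18 degrees (barely shifted)
# 1 kHz   ->  180 degrees (complete cancellation against the original)
# 10 kHz  -> 1800 degrees (wrapped around five times)
```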
Selig Audio, LLC
Does Haas go hand-in-hand with the phasing thing though? Meaning, doesn't that require use of the same signal occurring twice (two instances of an identical waveform), with just a really short delay for one of the signals, so that it's perceived differently than just one lone signal (single instance of the waveform)?
All I was really trying to get at was a shortened version of "hey folks, stop trying to compare apples and oranges! Selig and Normen are only talking about oranges, which are far more delicious and important!"
Last edited by guitfnky on 09 May 2016, edited 1 time in total.
Or to put it another way, phase issues affect how we perceive the sound quality; timing issues affect how we perceive the performance quality.
Haas really only comes into play if you have two separate sound emitters (in space), so hard stereo panning, multiple speakers, etc. It's very important in setting up speaker systems, for example: for the delay of the speakers in the back (the delay line) you want to hit the sweet spot where they are just about imperceptible, i.e. the sound still seems to come from the main PA but the delay line adds to the volume and impact. If the delay is too short the sound seems to come from the delay-line speakers; if it's too long the delay line causes an audible echo.
guitfnky wrote: Does Haas go hand-in-hand with the phasing thing though?
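A rough sketch of the delay-line math described above, using the speed of sound. The 30 m distance and 15 ms Haas offset are made-up example values, not figures from this thread:

```python
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def delay_line_ms(distance_behind_main_m, haas_offset_ms=15.0):
    """Alignment delay for fill speakers behind the main PA: the acoustic
    travel time from the mains, plus a small offset so localization
    stays on the main PA (the precedence/Haas effect)."""
    return distance_behind_main_m / SPEED_OF_SOUND_M_S * 1000.0 + haas_offset_ms

print(round(delay_line_ms(30.0), 1))  # ~102.5 ms for fills 30 m back
```

Too small an offset and the fills localize themselves; much more than ~30-40 ms and they read as a discrete echo, matching the fusion window described later in the thread.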
Gotcha, so it's different from phasing in that you're not only dealing with two identical signals, one very slightly delayed, but there must also be more than one speaker involved.
normen wrote: Haas really only comes into play if you have two separate sound emitters (in space), so hard stereo panning, multiple speakers, etc. It's very important in setting up speaker systems, for example: for the delay of the speakers in the back (the delay line) you want to hit the sweet spot where they are just about imperceptible, i.e. the sound still seems to come from the main PA but the delay line adds to the volume and impact. If the delay is too short the sound seems to come from the delay-line speakers; if it's too long the delay line causes an audible echo.
guitfnky wrote: Does Haas go hand-in-hand with the phasing thing though?
No, not strictly speaking. There are two aspects to the experiments Haas, Wallach, and others performed. Here's the specific bit I'm referring to:
guitfnky wrote: Gotcha, so it's different from phasing in that you're not only dealing with two identical signals, one very slightly delayed, but there must also be more than one speaker involved.
normen wrote: Haas really only comes into play if you have two separate sound emitters (in space), so hard stereo panning, multiple speakers, etc. It's very important in setting up speaker systems, for example: for the delay of the speakers in the back (the delay line) you want to hit the sweet spot where they are just about imperceptible, i.e. the sound still seems to come from the main PA but the delay line adds to the volume and impact. If the delay is too short the sound seems to come from the delay-line speakers; if it's too long the delay line causes an audible echo.
guitfnky wrote: Does Haas go hand-in-hand with the phasing thing though?
"The "precedence effect" was described and named in 1949 by Wallach et al.[3] They showed that when two identical sounds are presented in close succession they will be heard as a single fused sound. In their experiments, fusion occurred when the lag between the two sounds was in the range 1 to 5 ms for clicks, and up to 40 ms for more complex sounds such as speech or piano music. When the lag was longer, the second sound was heard as an echo."
It is this phenomenon that allows the Haas panning effect (which DOES involve multiple sources panned to different locations) to actually work without sounding like a "delay".
For location purposes, yes, there must be two speakers involved. But the underlying concept is that with either one or two sources the delay, if short enough, will not be perceived as a delay but rather fuse into a single sound source. The same way that a series of clicks, if close enough together, will appear to create a pitch - but if not, will simply be perceived as a series of clicks.
Selig Audio, LLC
I guess that's sort of what I mean when I say 'phasing'... when you're hearing phasing, your brain isn't perceiving two discrete sources of sound (even if, logically, you know that's what's going on); it's hearing the weird, phasey version of what sounds like a single source.selig wrote:
...the delay, if short enough, will not be perceived as a delay but rather appear fused as a single sound source. The same way that a series of clicks, if close enough together, will appear to create a pitch - but if not, will simply be perceived as a series of clicks.
So, back to my original summation; a delay that would otherwise only be enough to cause phasing issues isn't going to perceptibly alter the timing of the actual performance.
I'm always forgetting that. I think that's why I shifted so quickly to doing almost everything within the box (the idea of having to figure out how to sort out phasing issues on drum mics always scared the hell out of me). I could probably figure that out nowadays, with all the YouTube and Google help available, but when I was just starting out, the whole phase thing was one of the most daunting concepts I'd encountered (the ease of flipping top and bottom snare phase notwithstanding).Stranger. wrote:Phase is really interesting actually in more than a nerdy way.guitfnky wrote:So, back to my original summation; a delay that would otherwise only be enough to cause phasing issues isn't going to perceptibly alter the timing of the actual performance.
It occurs from interference of 2 or more mono analog or digital signals. They don't need to be replicas.