Question on Chorus/Flanger internals
I just noticed that the comb delay times stay the same when the sample rate changes.
A sample delay line can only provide delays that are exact multiples of the sample period.
How do they do this? What do you think?
Does anybody know if they use oversampling?
They probably change the delay buffer size so it stays the same multiple of the sample rate.
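That scaling idea can be sketched in a couple of lines. This is purely illustrative; `delay_samples` is a hypothetical helper, not anything from the actual device:

```python
def delay_samples(delay_ms, sample_rate):
    """Keep the delay constant in real time by scaling the buffer
    length with the sample rate (hypothetical helper, not the
    device's actual code)."""
    return delay_ms * 1e-3 * sample_rate

# The same 10 ms delay needs a different buffer length per rate:
print(delay_samples(10.0, 44100))  # 441.0 samples
print(delay_samples(10.0, 96000))  # 960.0 samples
```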
Thor doesn't oversample; I really doubt one of the oldest, most CPU-efficient devices would.
If you just moved the delay in whole-sample increments you'd get all kinds of noises, so I guess they have to treat it as a function anyway. A row of samples is just a description of the wave function (sampled audio isn't stair-steps), like f(x) = y where x is the sample number. When the sound is generated in a synth you're not bound to integer values of x at all, and if you actually have samples coming into a standalone device you can reconstruct a function from the samples (e.g. using bicubic interpolation).
normen wrote: "you can reconstruct a function from the samples (e.g. using bicubic interpolation)."

Yes, that's exactly what I meant by oversampling.
The delay can be controlled by the built-in LFO or a CV input, and I guess that curve is continuous, so there can be delay values that don't correspond to any discrete delay-knob setting.
orthodox wrote: "Yes, that's exactly what I meant by oversampling. The delay can be controlled by built-in LFO or CV input, and I guess that curve is continuous, so there can be delay values that don't correspond to any discrete delay knob setting."

Mhm, but for me oversampling is something where you get more samples out than you put in; then you have to downsample again, which is way more problematic. I guess that's just terminology, though. CV values actually only get updated in 64-sample batches, so those would have to be interpolated in some way too. For many things you can get away with simply crossfading between two sets of samples, though: one rendered with the initial values and one with the final values.
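The block-crossfade idea can be sketched like this. A minimal illustration, assuming parameters only change at block boundaries; nothing here is from the actual device:

```python
def crossfade_blocks(old_block, new_block):
    """Linearly crossfade from a block rendered with the old parameter
    values to one rendered with the new values, e.g. over a 64-sample
    CV batch. Illustrative sketch only."""
    n = len(old_block)
    return [a * (1.0 - i / n) + b * (i / n)
            for i, (a, b) in enumerate(zip(old_block, new_block))]

# Fading a block of ones into a block of zeros:
print(crossfade_blocks([1.0] * 4, [0.0] * 4))  # [1.0, 0.75, 0.5, 0.25]
```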
normen wrote: "CV values actually only get updated in 64 sample batches so those would have to be interpolated in some way too."

It's not important; nobody will notice the effect changing over 64 samples. Meanwhile, I can make the CV change really slowly and watch the comb notches on the analyzer graph move continuously along the frequency scale.
Crossfading would make one notch gradually disappear while the new notch shows up.
Even linear interpolation would work fine, and it's very CPU efficient. It allows you to shift the samples by fractional amounts while still having the same number of samples in and out. It's the same as crossfading from one sample to the next: at a half-sample shift, each new sample is the preceding and following samples added together and multiplied by 0.5.
But any curve could be used to estimate the values of the partial sample shifts.
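The scheme described above (keep the integer part of the delay, blend the two neighbouring samples by the fractional part) can be sketched as follows. The class and method names are my own invention, not any real product's code:

```python
class FracDelay:
    """Delay line with linear interpolation between neighbouring
    samples (illustrative sketch, not any real device's code)."""

    def __init__(self, max_samples):
        self.buf = [0.0] * max_samples
        self.pos = 0  # write index into the circular buffer

    def process(self, x, delay):
        """Write one input sample, read one output sample delayed by a
        possibly fractional number of samples."""
        n = len(self.buf)
        self.buf[self.pos] = x
        i = int(delay)       # whole-sample part of the delay
        frac = delay - i     # fractional part
        a = self.buf[(self.pos - i) % n]      # sample i steps back
        b = self.buf[(self.pos - i - 1) % n]  # sample i + 1 steps back
        self.pos = (self.pos + 1) % n
        # crossfade between the two neighbouring samples
        return a * (1.0 - frac) + b * frac

# An impulse delayed by 2.5 samples is smeared across samples 2 and 3:
d = FracDelay(8)
print([d.process(x, 2.5) for x in [1.0, 0.0, 0.0, 0.0, 0.0]])
# [0.0, 0.0, 0.5, 0.5, 0.0]
```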
ScuzzyEye wrote: "Even linear interpolation would work fine"

It would not. It would ring at other frequencies with a very large relative amplitude (1 - cos(π f/fs)).
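That figure is easy to sanity-check: at a half-sample offset, linear interpolation just averages two adjacent samples, so a sine at frequency f comes through with gain cos(π f/fs), and the deviation from unity gain is the 1 - cos(π f/fs) above. A quick numeric check (my own sketch, not from the thread):

```python
import math

def half_sample_gain(f, fs):
    """Gain of a sine at frequency f after averaging two adjacent
    samples, i.e. linear interpolation at a half-sample offset."""
    return math.cos(math.pi * f / fs)

fs = 44100.0
for f in (100.0, 1000.0, 10000.0):
    g = half_sample_gain(f, fs)
    print(f"{f:7.0f} Hz: gain {g:.4f}, error {1.0 - g:.4f}")
```

The error is negligible at low frequencies and grows toward Nyquist, where the two neighbouring samples cancel completely.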
There are several accepted methods of resampling, they probably picked one.
We are talking about the red, half-rack device? It probably uses linear interpolation, because it requires almost no CPU power.
Have a look at it in send mode with no feedback. It rings by about how much I would expect from linear interpolation.
Linear interpolation is fine for a device whose whole purpose is to create comb filtering, and which was designed to run on an average CPU from 15 years ago.
ScuzzyEye wrote: "We are talking about the red, half-rack device? It probably uses linear interpolation, because it requires almost no CPU power."

There are other ways of resampling, using LP filters, that also require almost no CPU power.
ScuzzyEye wrote: "Have a look at it in send mode with no feedback. It rings by about how much I would expect from linear interpolation."

I did. Send mode:
Insert mode:
It's pretty clean, not what I would expect from linear interpolation.
[Attachment: sweep-combed-96k.PNG]
orthodox wrote: "It's pretty clean, not what I would expect from linear interpolation."

You're expecting a lot worse from linear than it usually delivers. A -48 dB side lobe is pretty typical; it doesn't always present the worst-case scenario.
ScuzzyEye wrote: "You're expecting a lot worse from linear than it usually delivers."

You're right. I just implemented that neighbor-wise linear interpolation and got the same picture.
I had incorrectly imagined random phase shifts on the fractional delays, when in fact they are all equal.
[Attachment: sweep-comb-linint.PNG]
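For anyone repeating the test, the notch positions of the dry-plus-delayed mix are easy to predict. This small sketch (my own, not tied to the device) computes the magnitude of an equal mix of the dry signal and a copy delayed by d seconds:

```python
import math

def comb_gain(f, delay_s):
    """Magnitude response of 0.5 * (dry + delayed) at frequency f:
    |1 + exp(-j * 2 * pi * f * d)| / 2."""
    w = 2.0 * math.pi * f * delay_s
    return abs(complex(1.0 + math.cos(w), -math.sin(w))) / 2.0

# With a 1 ms delay the notches fall at 500 Hz, 1500 Hz, 2500 Hz, ...
print(round(comb_gain(500.0, 0.001), 6))   # 0.0  (notch)
print(round(comb_gain(1000.0, 0.001), 6))  # 1.0  (peak)
```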
My DSP background was originally in image processing. I used to know the characteristics of just about every interpolation system known to man.
I'll admit that my first comment about it being linear was just a guess. A guess, but based on how I would have done it 15 years ago if I wanted a real-time system for computing the values between samples. Though when I had a look at what it did to a 1 kHz sine wave, I was pretty sure my guess was right. I'm happy you went to the effort of actually testing it.
Now, if only there were a use for Akima's method in audio processing, my years of working with pictures could be put to use.
ScuzzyEye wrote: "My DSP background was originally in image processing. I used to know the characteristics of just about every interpolation system known to man."

When you work in the frequency domain (with an FFT-decoded wave, that is) the processing is basically the same as image processing. Sounds like good news for you!