Hi all, I thought I would kick-start this new forum by asking developers: how much do you rely on the built-in SDK functions for FFT and IFFT (JBox_FFTRealForward and JBox_FFTRealInverse), versus self-baked or third-party library implementations?
So far I have used the inbuilt ones to do some mucking around to develop a simple Spectrum Analyser tool.
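For anyone curious what the analyser side looks like, here is a minimal sketch of turning FFT output bins into dB magnitudes for a spectrum display. The interleaved (re, im) packing is an assumption, not something confirmed about the SDK's output format; check the JBox_FFTRealForward documentation for the actual layout:

```cpp
#include <cmath>
#include <vector>

// Converts complex FFT bins to dB magnitudes for a spectrum display.
// Assumes interleaved (re, im) pairs, a common real-FFT output layout;
// verify the packing actually used by JBox_FFTRealForward.
std::vector<float> binsToDb(const std::vector<float>& interleaved) {
    std::vector<float> db;
    db.reserve(interleaved.size() / 2);
    for (std::size_t i = 0; i + 1 < interleaved.size(); i += 2) {
        const float re = interleaved[i];
        const float im = interleaved[i + 1];
        const float mag = std::sqrt(re * re + im * im);
        db.push_back(20.0f * std::log10(mag + 1e-12f)); // epsilon avoids log(0)
    }
    return db;
}
```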
Murf.
DSP and the built in FFT functions of the Reason SDK
That's an interesting question. I haven't checked or compared the algorithms, but I know there are plenty of highly optimized implementations out there. Whether they still bring the same performance boost on modern CPUs that they did 20 years ago is something that would need to be verified today.
When I was experimenting with DSP code (a very long time ago) I read about skipping calculations, reducing the number of multiplications (e.g. several additions instead of a multiply by 4), using bit shifts (a shift instead of multiplying by 2^n), pre-calculated tables (for sin and cos), fixed-point numbers (reduced precision with integer arithmetic), weird and tricky bit manipulation of floating-point values (sorry, I quickly pushed the details out of my brain), special CPU instructions, asynchronous calculation or pre-computation/buffering, and many more tricks I've forgotten...
Today you need to be careful with any such optimization, because it may work against the CPU: you also have to consider pipelining, caching, lookahead, and plenty of other things I probably don't know about. So in the end you need to measure with special tools that count CPU cycles rather than wall-clock time to know for sure which approach is the fastest, possibly with a drawback in accuracy too.
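Two of those classic tricks, sketched in C++ for illustration (with the caveat from above that modern compilers already do strength reduction automatically, so always measure before hand-tuning):

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Strength reduction: multiplying by a power of two is a left shift.
// Modern compilers do this for you, so this is illustrative only.
inline std::int32_t times8(std::int32_t x) { return x << 3; } // x * 8

// Pre-calculated sine table: trades memory for per-sample trig calls.
constexpr int kTableSize = 1024; // power of two, so we can wrap with a mask
struct SineTable {
    std::array<float, kTableSize> v{};
    SineTable() {
        const double kTwoPi = 6.283185307179586;
        for (int i = 0; i < kTableSize; ++i)
            v[i] = static_cast<float>(std::sin(kTwoPi * i / kTableSize));
    }
    // phase01 is the oscillator phase normalized to [0, 1)
    float lookup(float phase01) const {
        return v[static_cast<int>(phase01 * kTableSize) & (kTableSize - 1)];
    }
};
```

Whether either of these wins anything on a modern out-of-order CPU is exactly the kind of question that only cycle-level measurement can answer.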
Reason12, Win10
I've used them a bunch and I think they are fine. I'm sure they are well-optimized and use lookup-tables. There could be optimized versions for special cases (if one needs the optimization). Like a split radix FFT for lengths 4n or for zero-padding / zero-insertion (there has been a post on reddit.com/r/DSP about potential optimizations for that). Recently I've also discovered papers about modified FFTs that can do FFT convolution without the need for zero-padding. I don't know the details but maybe that could be something worth looking into for repeated long convolution.
I think it would be nice to have some features built in, such as different windowing options, but the SDK is very bare-bones in terms of DSP functions anyway.
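As an example of the kind of helper the SDK doesn't ship, here is a plain Hann window sketch you could apply to a frame before the forward FFT to reduce spectral leakage:

```cpp
#include <cmath>
#include <vector>

// Hann window of length n (n >= 2), the standard 0.5*(1 - cos) shape.
// Multiply it element-wise into a frame before the forward FFT.
std::vector<float> hannWindow(std::size_t n) {
    std::vector<float> w(n);
    const double twoPi = 6.283185307179586;
    for (std::size_t i = 0; i < n; ++i)
        w[i] = 0.5f * (1.0f - static_cast<float>(std::cos(twoPi * i / (n - 1))));
    return w;
}
```

Other windows (Hamming, Blackman, etc.) are just different coefficients on the same cosine terms, so a small family of these functions covers most analyser needs.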
I've had a very positive experience with the built-in FFT, provided you align your memory correctly and take care of that sort of detail.
Initially I considered bringing in a third-party implementation when I made Optic, but I ultimately never saw the need: even when frequently performing multiple FFTs on multiple audio sources, it was never the slowest part of my code, according to extensive profiling.
I'd say don't reinvent the wheel until you find an explicit reason to do so. I imagine it's not only well optimized, but specifically well optimized for the Rack Extension/Reason environment. Other libraries might be faster or fancier, but that doesn't mean they'll work as well in Reason.
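On the alignment point: a simple way to guarantee an over-aligned buffer in standard C++ is `alignas`. The 32-byte figure below is an assumption (it covers AVX-width SIMD loads); whether the JBox FFT actually requires a particular alignment, and how much, should be checked against the SDK documentation:

```cpp
#include <cstdint>

// alignas fixes the buffer's starting address. 32 bytes covers AVX;
// the alignment the JBox FFT routines actually expect is an assumption
// here -- consult the SDK docs for the real requirement.
struct alignas(32) FFTBuffer {
    float data[1024];
};
```

This works for stack, static, and (since C++17) heap allocations via over-aligned `operator new`, so one buffer type can be reused everywhere the FFT is called.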
Static Cling - Rack Extension Developer of Tome, Index, Optic, Chord Detector, Delta, and AutoLatch.
www.StaticCling.io
info@StaticCling.io
Someone (I can't remember who) was using a public library (maybe FFTW).
Thing is, many FFT libraries (I'm guessing most) use assembly code and other features that aren't allowed in the RE SDK.
Might have changed a bit since then. This was several years ago I last looked at them.
- meowsqueak
- RE Developer
SDK compatibility aside, benchmark everything. If the inbuilt functions do the job and perform well enough (I'd expect them to), then you might as well use them. I did my own benchmarking of these years ago and didn't see any performance issues, but when I needed to modify the FFT to support other transforms I had to abandon them.
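A rough wall-clock micro-benchmark sketch in that spirit: run the workload many times and keep the minimum to reduce scheduler noise. The body being timed is a placeholder; substitute the FFT call you want to measure. Cycle-counting profilers give more precise answers, but this is often enough for a first comparison:

```cpp
#include <algorithm>
#include <chrono>

// Minimal wall-clock micro-benchmark: time the callable `runs` times
// and return the best (minimum) duration in seconds. The minimum is
// less sensitive to scheduler noise than the mean.
template <typename Fn>
double bestOfN(Fn fn, int runs = 20) {
    double best = 1e30;
    for (int r = 0; r < runs; ++r) {
        const auto t0 = std::chrono::steady_clock::now();
        fn();
        const auto t1 = std::chrono::steady_clock::now();
        best = std::min(best, std::chrono::duration<double>(t1 - t0).count());
    }
    return best;
}
```

In practice you would pass a lambda wrapping the SDK FFT call on your real buffers, and compare candidate implementations under identical conditions rather than trusting absolute numbers.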