Number of Multiplications in the FFT Algorithm

In summary, the Fast Fourier Transform (FFT) algorithm significantly reduces the number of multiplications required to compute the discrete Fourier transform (DFT) from O(N^2) to O(N log N). This efficiency is achieved by recursively breaking down the DFT into smaller DFTs, allowing for a more manageable computational load, especially for large datasets. The reduction in multiplications is a key factor that makes FFT a widely used algorithm in various applications, including signal processing and data analysis.
  • #1
Albert01
Hello everyone,

Maybe some of you know the formula for the number of multiplications in the FFT algorithm. It is usually given as ##N/2 \cdot \log(N)##. Why is that so? Can you really "prove" this?

I can only deduce this from what I know: we have ##\log(N)## levels and ##N/2## butterflies on each level. That makes ##N/2 \cdot \log(N)##. Can you also prove this formally, e.g. via induction?

Thank you for your interest!!!
 
  • #2
Basically, yes.
Each "butterfly" is a complex multiply - so four "real" multiplies.
And where you write "log", you mean log base 2.
Also, this works for FFTs where the number of elements is a power of 2. The Tukey-Cooley algorithm introduced in 1965 also specifies the handling of other N's.

The proof is fairly simple.
Multiplications occur in the butterfly modules and in the preparation of the ##q_k## factors - but those ##q_k## factors can be prepared at "compile time". Only the butterfly multiplies are executed at run time.
When N=1, the number of multiplies is zero.
Each time you double the size, going from ##N/2## to ##N##, you perform two FFTs of size ##N/2## and then ##N/2## butterflies.

So, as you noted, you are doing N/2 butterflies at each stage and there are ##log_2(N)## stages.
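As a sanity check, the recurrence sketched above (##M(1) = 0##, ##M(N) = 2\,M(N/2) + N/2##) can be unrolled numerically and compared with the closed form ##\frac{N}{2}\log_2 N##. A minimal sketch (my own illustration; the function name `multiplies` is made up):

```python
def multiplies(n):
    """Complex-multiply (butterfly) count for a radix-2 FFT of size n,
    following the recurrence M(1) = 0, M(N) = 2*M(N/2) + N/2."""
    if n == 1:
        return 0
    return 2 * multiplies(n // 2) + n // 2

# Compare against the closed form (N/2) * log2(N) for N = 2, 4, ..., 1024.
for k in range(1, 11):
    n = 2 ** k
    assert multiplies(n) == (n // 2) * k
print("M(N) == (N/2) * log2(N) for all N up to 1024")
```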
 
  • #3
Albert01 said:
Can you really "prove" this?
The traditional complex multiply employs four internal multiplications.
I seem to remember that, where multiplication is too slow or expensive, this can be traded down to three.
 
  • #4
On a historical aside, when the FFT was first published in 1965 by Cooley and Tukey, it was commonly referred to as the Tukey-Cooley DFT algorithm. Later it became known as the FFT or the Cooley-Tukey algorithm. In the 1965 paper, the authors' names appear in the order Cooley, Tukey.
However, according to this report (and others), the first person to devise the basic FFT strategy was Carl Friedrich Gauss, who recorded it in an unpublished manuscript in about 1805.

In contrast, Jean-Baptiste Joseph Fourier published his "Treatise on the propagation of heat in solid bodies", which introduced his assertion that any function could be decomposed into a frequency spectrum, in 1807.
So the FFT existed in an unpublished form 2 years before the "Fourier Transform".
 
  • #5
Baluncore said:
The traditional complex multiply employs four internal multiplications.
I seem to remember that, where multiplication is too slow or expensive, this can be traded down to three.
There are FFT optimizations that do not change the basic Tukey-Cooley strategy. But the FFT algorithm as published in 1965 specifies a full complex multiply in each butterfly. If you "break the butterflies" and expand out the algorithm for N=4, I believe you are right about reducing the multiplications.
 
  • #6
.Scott said:
If you "break the butterflies" and expand out the algorithm for N=4, I believe you are right about reducing the multiplications.
Not that way.
I refer to the four primitive multiplies within the one complex multiply.
"This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two."
https://en.wikipedia.org/wiki/Multiplication_algorithm#Complex_number_multiplication
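For reference, the three-multiplication complex product described at that link can be sketched like this (a minimal illustration of the technique; `cmul3` is a made-up name):

```python
def cmul3(a, b, c, d):
    """(a + b*i) * (c + d*i) using three real multiplications and
    five additions/subtractions, instead of four and two."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return (k1 - k3, k1 + k2)  # (real part, imaginary part)

# (1 + 2i) * (3 + 4i) = -5 + 10i
print(cmul3(1, 2, 3, 4))  # (-5, 10)
```

Expanding the terms confirms it: ##k_1 - k_3 = ac - bd## and ##k_1 + k_2 = ad + bc##, the standard real and imaginary parts.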
 
  • #7
Baluncore said:
I seem to remember that, where multiplication is too slow or expensive, this can be traded down to three.
Only by trading it for three more adds/subtracts. On what modern processor that you might use for computing FFTs is that going to be a good trade?
 
  • #8
pbuk said:
On what modern processor that you might use for computing FFTs is that going to be a good trade?
Obviously, postmodern processors, with parallel multipliers, that have been optimised for pipelined signal processing and FFTs, will not be advantaged, unless multiple precision is required.

But this thread is about the proof of the number of "multiplies" required for an FFT. That presumed there was no tactical alternative that uses fewer multiplies than the plain vanilla FFT.

There are very many ways to factor the FFT algorithm. If the OP had referred to the number of "arithmetic operations", rather than "multiplies", then my answer would be different.
 
  • #9
Baluncore said:
unless multiple precision is required.
Good point.

Baluncore said:
But this thread is about the proof of the number of "multiplies" required for an FFT.
Good point again: I should have read from the top!
 
  • #10
I initially determined the number of butterflies constructively, using the number of levels and the knowledge that ##N/2## butterflies are required at each level.

.Scott said:
The proof is fairly simple.
Multiplications occur in the butterfly modules and in the preparation of the ##q_k## factors - but those ##q_k## factors can be prepared at "compile time". Only the butterfly multiplies are executed at run time.
When N=1, the number of multiplies is zero.
Each time you double the size, going from ##N/2## to ##N##, you perform two FFTs of size ##N/2## and then ##N/2## butterflies.

Is that already the whole inductive proof? Nothing more?
 
  • #11
Here is my suggested proof by induction. Please correct me if you find any mistakes! A confirmation would also be nice!

A butterfly (BF) in an FFT-diagram looks like:
[Attachment: diagram of a single butterfly operation]

Notice that the number of multiplications equals the number of butterfly operations.

Main part:

Statement: Prove by induction that the number of butterfly operations in an FFT circuit is ##\frac{n}{2}\cdot \log_2(n)##, where ##n = 2^x## for ##x > 0##.

Proof:
Base case: For ##n = 2##, the formula gives ##(2/2)\cdot \log_2(2) = 1## butterfly operation, which matches the single butterfly shown in the picture above.

Inductive step: Assume the statement is true for ##n = 2^k##, i.e., the number of butterfly operations is ##\frac{2^k}{2} \log_2 2^k = 2^{k-1} \cdot k##. We need to show it holds for ##n = 2^{k+1}##.

For ##n = 2^{k+1}##:
- The FFT algorithm splits the problem into two FFTs of size ##2^k## and then combines the results with ##2^k## butterfly operations.
[Attachment: FFT circuit of size ##2^{k+1}## built from two FFTs of size ##2^k##]

- By the inductive hypothesis, each FFT of size ##2^k## requires ##2^{k-1} \cdot k## butterfly operations.
Thus, the total number of butterfly operations for ##n = 2^{k+1}## is:
$$2 \left(2^{k-1} \cdot k\right) + 2^k = 2^k \cdot k + 2^k = 2^k (k + 1)$$
Now, we need to verify that this matches ##\frac{2^{k+1}}{2} \log_2 2^{k+1}##:
$$\frac{2^{k+1}}{2} \log_2 2^{k+1} = 2^k \log_2 2^{k+1} = 2^k (k + 1)$$
Thus, the inductive step holds.

By mathematical induction, the number of butterfly operations in an FFT circuit is ##\frac{n}{2} \log_2 n## for ##n = 2^x## where ##x > 0##.
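The formula can also be cross-checked empirically with an instrumented radix-2 FFT that counts one complex multiply per butterfly. This is a sketch of my own (it counts even the trivial ##k = 0## twiddle as a multiply, just as the proof does; `fft_count` is a made-up name):

```python
import cmath

def fft_count(x):
    """Radix-2 decimation-in-time FFT.
    Returns (spectrum, number of butterfly multiplies performed)."""
    n = len(x)
    if n == 1:
        return list(x), 0
    even, ce = fft_count(x[0::2])
    odd, co = fft_count(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        # One complex (twiddle) multiply per butterfly.
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out, ce + co + n // 2

_, m = fft_count([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])
print(m)  # 12 == (8/2) * log2(8)
```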
 
