Fast Fourier Transform (FFT)

To multiply two polynomials in O(n log n)

In this article, I am going to describe how to multiply two polynomials in O(n log n)
time using the Fast Fourier Transform (FFT) algorithm.

The book "Introduction to Algorithms" by Cormen, Leiserson and Rivest describes the multiplication of two polynomials in O(n log n) time. Here, I shall describe a variant of the method they use.

Firstly, a brush-up on some basics:

1. A(x)= a_0 + a_1(x) + a_2(x^2)+.....+ a_(n-1) (x^(n-1)) is a polynomial of degree n-1.

2. Multiplication of polynomials A(x) and B(x), i.e. C(x) = A(x)B(x), takes O(n^2) time by the straightforward method.

3. Finding A(p) is done by substituting p in place of x in the polynomial:
        A(p) = a_0 + a_1(p) + a_2(p^2)+.....+a_(n-1)(p^(n-1)).
        By Horner's Rule this is:
        A(p) = a_0 + p(a_1 + p(a_2 +.....+ p(a_(n-1))...)). This takes O(n) time.

4. There are two types of representations for polynomials:

        Coefficient form: (a_0,a_1,a_2,.....,a_(n-1)).
        Point-Value form: {(x_0,y_0),(x_1,y_1),...,(x_(n-1),
        y_(n-1))}, where y_i = A(x_i).

5. Conversion from coefficient to point-value form takes O(n^2) time even using Horner's Rule, because the value of A(x) has to be found at n points.

6. A complex nth root of unity is a complex number w such that w^n = 1.

7. The n nth roots of unity are: w^0, w^1, w^2, ...., w^(n-1).
    w = e^(2*pi*i/n) is called the principal nth root of unity, and all n roots are powers of this w.

8. Periodicity: w^(k+n)=w^k.
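These facts can be checked numerically. Below is a small sketch in plain Python (the names `horner` and `naive_dft` are mine) that evaluates a polynomial at all n nth roots of unity by Horner's Rule, i.e. the naive O(n^2) conversion of point 5:

```python
import cmath

def horner(coeffs, p):
    """Evaluate a_0 + a_1*p + ... + a_(n-1)*p^(n-1) by Horner's Rule in O(n)."""
    acc = 0
    for a in reversed(coeffs):
        acc = acc * p + a
    return acc

def naive_dft(coeffs):
    """Evaluate the polynomial at all n nth roots of unity: n points * O(n) each = O(n^2)."""
    n = len(coeffs)
    w = cmath.exp(2j * cmath.pi / n)   # principal nth root of unity
    return [horner(coeffs, w**k) for k in range(n)]

n = 8
w = cmath.exp(2j * cmath.pi / n)
assert abs(w**n - 1) < 1e-9           # w^n = 1
assert abs(w**(3 + n) - w**3) < 1e-9  # periodicity: w^(k+n) = w^k
```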

With this background we can start off.

For multiplying two polynomials A(x) and B(x) given in coefficient form, we first convert them into point-value form. Multiplication in point-value form takes O(n).
Then we convert back to coefficient form.
However, converting from coefficient to point-value form takes O(n^2), as said in 5. But, by choosing the points x_0,x_1,.....,x_(n-1) carefully, we can achieve this in O(n log n).

The points we are going to choose are the n nth roots of unity. Evaluating the polynomial at these points is precisely the Discrete Fourier Transform (DFT).

Now, A(x)=a_0 + a_1(x) + a_2(x^2) + .....+a_(n-1)(x^(n-1)).
This can be split into two polynomials: the first one containing the first n/2 terms and the second one containing the last n/2 terms. The book "Introduction to Algorithms" describes the split using even and odd terms. Here, we arrive at the same result, but in a slightly different way.

So we take A_0(x) = a_0 + a_1(x) + a_2(x^2)+......+ a_(n/2-1)(x^(n/2-1))

and A_1(x) = a_(n/2) + a_(n/2+1)(x) + a_(n/2+2)(x^2)+......+a_(n-1)(x^(n/2-1)).

Hence, we can write A(x) as

           A(x)=A_0(x) + x^(n/2)A_1(x)-----------> (1).

Now, on substituting w^k in place of x, we see that x^(n/2) in equation (1) becomes
 (w^k)^(n/2) = (w^(n/2))^k = (-1)^k, i.e. +1 or -1, the two square roots of unity.

For the even-indexed outputs, y_(2k) = A_0(w^(2k)) + A_1(w^(2k)), which is the n/2-point DFT of the termwise sums a_j + a_(j+n/2). For the odd-indexed outputs, y_(2k+1) = A_0(w^(2k+1)) - A_1(w^(2k+1)), which works out to the n/2-point DFT of the twiddled differences (a_j - a_(j+n/2)) * w^j.
We have thus decimated the output samples into even-indexed and odd-indexed sets (this is known as decimation in frequency).
The whole n-point DFT has now become two n/2-point DFTs.

Each of these two n/2-point DFTs can further be divided into two n/4-point DFTs, and so on. Hence, the complexity of the resulting algorithm is O(n log n).
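This decimation can be verified numerically. The sketch below (the helper `dft` and the variable names are mine) checks that the even-indexed outputs of an n-point DFT are the n/2-point DFT of the sums a_k + a_(k+n/2), and the odd-indexed outputs are the n/2-point DFT of the differences (a_k - a_(k+n/2)) multiplied by w^k:

```python
import cmath

def dft(a):
    """Direct O(n^2) DFT: y_k = A(w^k) over the n nth roots of unity."""
    n = len(a)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(a[m] * w**(k * m) for m in range(n)) for k in range(n)]

a = [1, 2, 3, 4, 5, 6, 7, 8]          # coefficients, n = 8
n = len(a)
w = cmath.exp(2j * cmath.pi / n)
half = n // 2

# First-half / second-half split from equation (1)
a0, a1 = a[:half], a[half:]

# Even-indexed outputs: the n/2-point DFT of the termwise sums
evens = dft([a0[k] + a1[k] for k in range(half)])
# Odd-indexed outputs: the n/2-point DFT of the twiddled differences
odds = dft([(a0[k] - a1[k]) * w**k for k in range(half)])

y = dft(a)
for j in range(half):
    assert abs(y[2 * j] - evens[j]) < 1e-6
    assert abs(y[2 * j + 1] - odds[j]) < 1e-6
```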

In the approach described in "Introduction to Algorithms", the input samples are divided into even-indexed and odd-indexed halves, exactly the opposite of what we have obtained here.

The recursive algorithm can be given as:

              Recursive_FFT(a):
               n <- length(a)
               if (n = 1) return a

               w <- e^(2*pi*i/n)
               for k <- 0 to n/2 - 1
                         do begin
                             b_k <- a_k + a_(k+n/2)
                             c_k <- (a_k - a_(k+n/2)) * w^k
                         end

               y[0] <- Recursive_FFT(b)
               y[1] <- Recursive_FFT(c)

               for k <- 0 to n/2 - 1
                         do begin
                             y_(2k) <- y_k[0]
                             y_(2k+1) <- y_k[1]
                         end
               return y

The recurrence is: T(n) = 2T(n/2) + O(n), since each call does O(n) work besides the two recursive calls on half-size inputs.
                           Hence, T(n) = O(n log n).
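A direct Python transcription of the pseudocode above might look like this (a sketch; the function name and the power-of-two length requirement are my assumptions):

```python
import cmath

def recursive_fft(a):
    """Decimation-in-frequency FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    w = cmath.exp(2j * cmath.pi / n)   # principal nth root of unity
    half = n // 2
    b = [a[k] + a[k + half] for k in range(half)]            # -> even-indexed outputs
    c = [(a[k] - a[k + half]) * w**k for k in range(half)]   # -> odd-indexed outputs
    y0 = recursive_fft(b)
    y1 = recursive_fft(c)
    y = [0] * n
    for k in range(half):
        y[2 * k] = y0[k]
        y[2 * k + 1] = y1[k]
    return y

# Check against a direct O(n^2) DFT
a = [1, 2, 3, 4]
n = len(a)
w = cmath.exp(2j * cmath.pi / n)
direct = [sum(a[m] * w**(k * m) for m in range(n)) for k in range(n)]
assert all(abs(x - d) < 1e-9 for x, d in zip(recursive_fft(a), direct))
```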

Once we get the point-value form, we can perform multiplication in O(n) time.

The conversion from point-value back to coefficient form, called interpolation, can be done similarly in O(n log n): run the same FFT with w^(-1) in place of w and divide each result by n.

The whole process of multiplication thus takes O(n log n).
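Putting the pieces together, the whole pipeline (evaluate, multiply pointwise, interpolate) can be sketched as follows; the names are mine, and interpolation reuses the same FFT with w^(-1) plus a final division by n:

```python
import cmath

def fft(a, invert=False):
    """Recursive decimation-in-frequency (inverse) DFT; len(a) a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = -1 if invert else 1
    w = cmath.exp(sign * 2j * cmath.pi / n)   # w for DFT, w^(-1) for inverse
    half = n // 2
    b = [a[k] + a[k + half] for k in range(half)]
    c = [(a[k] - a[k + half]) * w**k for k in range(half)]
    y0, y1 = fft(b, invert), fft(c, invert)
    y = [0] * n
    for k in range(half):
        y[2 * k], y[2 * k + 1] = y0[k], y1[k]
    return y

def multiply(A, B):
    """Multiply two polynomials given as coefficient lists, in O(n log n)."""
    n = 1
    while n < len(A) + len(B) - 1:    # room for all product coefficients
        n *= 2
    fa = fft(A + [0] * (n - len(A)))  # coefficient -> point-value
    fb = fft(B + [0] * (n - len(B)))
    fc = [x * y for x, y in zip(fa, fb)]          # pointwise multiply: O(n)
    c = [v / n for v in fft(fc, invert=True)]     # interpolate back
    return [round(v.real) for v in c[:len(A) + len(B) - 1]]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
assert multiply([1, 2], [3, 4]) == [3, 10, 8]
```

Note that `multiply` pads both inputs with zeros up to a power of two at least as large as the product's degree plus one, since the product of two degree-bound-n polynomials has up to 2n - 1 coefficients.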

