 

Best algorithm for series expansion of Rational function

I need to code a function in C++ that efficiently finds the coefficients of the Taylor series of a given rational function P(x) / Q(x).

The function's parameters will be the degree of the polynomials (equal in numerator and denominator), two arrays with the coefficients of the polynomials, and the number of terms in the expansion.

My idea was the following. Consider the identity

P(x) / Q(x) = R(x) + ...

where R(x) is a polynomial whose number of terms equals the number of coefficients I need to find. Then I can multiply both sides by Q(x) and get

P(x) = R(x) * Q(x)

R(x) * Q(x) - P(x) = 0

Therefore, all the coefficients should be zero. This gives a system of equations that can be solved with an O(n^3) algorithm, but O(n^3) is not as fast as I would like.

Is there any faster algorithm?

I know that the coefficients of the series satisfy a linear recurrence relation. This makes me think an O(n) algorithm is possible.

Somnium asked Apr 15 '14


3 Answers

The algorithm that I'm about to describe is justified mathematically by formal power series. Every function with a Taylor series has a formal power series. The converse is not true, but if we do arithmetic on functions with Taylor series and get a function with a Taylor series, then we can do the same arithmetic with formal power series and get the same answer.

The long division algorithm for formal power series is like the long division algorithm that you may have learned in school. I'll demonstrate it on the example (1 + 2 x)/(1 - x - x^2), which has coefficients equal to the Lucas numbers.

The denominator must have a nonzero constant term. We start by writing the numerator, which is the first residual.

             --------
1 - x - x^2 ) 1 + 2 x

We divide the residual's lowest-order term (1) by the denominator's constant term (1) and put the quotient up top.

              1
             --------
1 - x - x^2 ) 1 + 2 x

Now we multiply 1 - x - x^2 by 1 and subtract it from the current residual.

              1
             --------
1 - x - x^2 ) 1 + 2 x
              1 -   x - x^2
              -------------
                  3 x + x^2

Do it again.

              1 + 3 x
             --------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3

And again.

              1 + 3 x + 4 x^2
             ----------------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3
                        4 x^2 - 4 x^3 - 4 x^4
                        ---------------------
                                7 x^3 + 4 x^4

And again.

              1 + 3 x + 4 x^2 + 7 x^3
             ------------------------
1 - x - x^2 ) 1 + 2 x
              1 -   x -   x^2
              ---------------
                  3 x +   x^2
                  3 x - 3 x^2 - 3 x^3
                  -------------------
                        4 x^2 + 3 x^3
                        4 x^2 - 4 x^3 - 4 x^4
                        ---------------------
                                7 x^3 + 4 x^4
                                7 x^3 - 7 x^4 - 7 x^5
                                ---------------------
                                       11 x^4 + 7 x^5

The individual divisions were kind of boring because I used a divisor with a leading 1, but if I had used, say, 2 - 2 x - 2 x^2, then all of the coefficients in the quotient would be divided by 2.

David Eisenstat answered Oct 09 '22


This can be done in O(n log n) time for arbitrary P and Q of degree n. More precisely, it can be done in O(M(n)), where M(n) is the complexity of polynomial multiplication, which itself can be done in O(n log n).

First off, the first n terms of a series expansion can be viewed simply as a polynomial of degree n-1.

Assume you are interested in the first n terms of the series expansion of P(x)/Q(x). There exists an algorithm that will compute the inverse of Q in M(n) time as defined above.

The inverse T(x) of Q(x) satisfies T(x) * Q(x) = 1 + O(x^n). That is, T(x) * Q(x) is precisely 1 plus some error term whose coefficients all come after the first n terms we are interested in, so we can just drop them.

Now P(x) / Q(x) is simply P(x) * T(x), which is just another polynomial multiplication.

You can find an implementation that computes the aforementioned inverse in my open source library Altruct. See the series.h file. Assuming you already have a method that computes the product of two polynomials, the code that calculates the inverse is about 10 lines long (a variant of divide-and-conquer).

The actual algorithm is as follows. Assume Q(x) = 1 + a1*x + a2*x^2 + .... If the constant term a0 is not 1, you can simply divide Q(x), and later its inverse T(x), by a0.

Assume that at each step you have L terms of the inverse, so that Q(x) * T_L(x) = 1 + x^L * E_L(x) for some error term E_L(x). Initially T_1(x) = 1. If you plug this in, you get Q(x) * T_1(x) = Q(x) = 1 + x^1 * E_1(x) for some E_1(x), so the invariant holds for L = 1.

Now double L at each step. You can get E_L(x) from the previous step as E_L(x) = (Q(x) * T_L(x) - 1) / x^L, or, implementation-wise, by just dropping the first L coefficients of the product. You can then compute T_2L(x) = T_L(x) - x^L * E_L(x) * T_L(x), and the new error will be E_2L(x) = -E_L(x)^2. Let's now check that the induction step holds.

Q(x) * T_2L(x)
= Q(x) * (T_L(x) - x^L * E_L(x) * T_L(x))
= Q(x) * T_L(x) * (1 - x^L * E_L(x))
= (1 + x^L * E_L(x)) * (1 - x^L * E_L(x))
= 1^2 - (x^L * E_L(x))^2
= 1 + x^2L * E_2L(x)

Q.E.D.

I am pretty sure polynomial division cannot be computed more efficiently than multiplication, and as you can see in the following table, this algorithm is only about 3 times slower than a single multiplication:

   n          mul         inv   factor
10^4        24 ms       80 ms    3.33x
10^5       318 ms      950 ms    2.99x
10^6     4,162 ms   12,258 ms    2.95x
10^7   101,119 ms  294,894 ms    2.92x
plamenko answered Oct 09 '22


If you look closely at the system you'd get with your plan, you can see that it is already triangular and doesn't require O(n^3) to solve. It simply degenerates into a linear recurrence (P[], Q[], and R[] being the coefficients of the corresponding polynomials):

R[0] = P[0]/Q[0]
R[n] = (P[n] - sum_{i=0..n-1}(R[i] * Q[n-i])) / Q[0]

Since Q is a polynomial, the sum has no more than deg(Q) nonzero terms (thus taking constant time to calculate), making the overall complexity linear in n. You may also look at the matrix form of the recurrence for a (possibly) better asymptotic.

user58697 answered Oct 09 '22