Optimize conversion between list of integer coefficients and its long integer representation

I'm trying to optimize a polynomial implementation of mine. In particular, I'm dealing with polynomials with coefficients modulo n (which might be > 2^64) and modulo a polynomial of the form x^r - 1 (with r < 2^64). At the moment I represent the coefficients as a list of integers (*) and I've implemented all the basic operations in the most straightforward way.

I'd like the exponentiation and multiplication to be as fast as possible, and I've already tried different approaches to achieve this. My current approach is to convert the lists of coefficients into huge integers, multiply the integers, and unpack the coefficients back.

The problem is that packing and unpacking takes a lot of time.

So, is there a way of improving my "pack/unpack" functions?

def _coefs_to_long(coefs, window):
    '''Given a sequence of coefficients *coefs* and the *window* size return a
    long-integer representation of these coefficients.
    '''

    res = 0
    adder = 0
    for k in coefs:
        res += k << adder
        adder += window
    return res
    #for k in reversed(coefs): res = (res << window) + k is slower


def _long_to_coefs(long_repr, window, n):
    '''Given a long-integer representing coefficients of size *window*, return
    the list of coefficients modulo *n*.
    '''

    mask = 2**window - 1
    coefs = [0] * (long_repr.bit_length() // window + 1)
    for i in xrange(len(coefs)):
        coefs[i] = (long_repr & mask) % n
        long_repr >>= window

    # assure that the returned list is never empty, and hasn't got an extra 0.
    if not coefs:
        coefs.append(0)
    elif not coefs[-1] and len(coefs) > 1:
        coefs.pop()

    return coefs
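To illustrate the idea behind these two functions (a minimal Python 3 sketch of the same pack/multiply/unpack trick, not the exact code above): if the window is wide enough that no column sum overflows it, multiplying the packed integers is exactly polynomial multiplication of the coefficient lists.

```python
def pack(coefs, window):
    # Pack coefficients (lowest degree first) into one big integer.
    res = 0
    for i, c in enumerate(coefs):
        res += c << (i * window)
    return res

def unpack(val, window, n):
    # Split the big integer back into coefficients, reduced modulo n.
    mask = (1 << window) - 1
    coefs = []
    while val:
        coefs.append((val & mask) % n)
        val >>= window
    return coefs or [0]

# (1 + 2x + 3x^2) * (2 + x) = 2 + 5x + 8x^2 + 3x^3
a, b = [1, 2, 3], [2, 1]
prod = unpack(pack(a, 32) * pack(b, 32), 32, 10**6)
# prod == [2, 5, 8, 3]
```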

Note that I do not choose n: it is an input from the user, and my program tries to prove its primality (using the AKS test), so I can't factorize it.


(*) I've tried several approaches:

  1. Using a numpy array instead of a list and multiplying with numpy.convolve. It's fast for n < 2^64 but terribly slow for n > 2^64 [also, I'd like to avoid using external libraries].
  2. Using scipy.fftconvolve. Doesn't work at all for n > 2^64.
  3. Representing the coefficients as integers from the start (without converting them every time). The problem is that I don't know of an easy way to do the mod x^r - 1 operation without converting the integer to a list of coefficients (which defeats the purpose of using this representation).
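For what it's worth, when the coefficients are packed with a window of w bits, the mod x^r - 1 reduction from point 3 can be done directly on the big integer by folding the bits above position r*w back onto the low part, one shift and one add per fold. This is only a hedged sketch, assuming the window has enough headroom that the folded coefficient sums never overflow w bits:

```python
def packed_mod_xr_minus_1(packed, window, r):
    # x^p == x^(p mod r)  (mod x^r - 1): fold the top coefficients down.
    block = r * window            # bits occupied by the first r coefficients
    mask = (1 << block) - 1
    while packed >> block:
        packed = (packed & mask) + (packed >> block)
    return packed

# 1 + 2x + 3x^2 + 4x^3 + 5x^4  mod  x^2 - 1  ->  (1+3+5) + (2+4)x
packed = sum(c << (8 * i) for i, c in enumerate([1, 2, 3, 4, 5]))
reduced = packed_mod_xr_minus_1(packed, 8, 2)
# reduced == 9 + (6 << 8)
```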
asked Sep 12 '12 by Bakuriu


4 Answers

Unless you're doing this to learn, why reinvent the wheel? A different approach would be to write a python wrapper to some other polynomial library or program, if such a wrapper doesn't exist already.

Try PARI/GP. It's surprisingly fast. I recently wrote some custom C code that took me two days to write and turned out to be only 3 times faster than a two-line PARI/GP script. I would bet that Python code calling PARI would turn out to be faster than whatever you implement in Python alone. There's even a module for calling PARI from Python: https://code.google.com/p/pari-python/

answered Nov 06 '22 by Douglas B. Staple


You could try using a residue number system (RNS) to represent the coefficients of your polynomial. You would still split your coefficients into smaller integers as you do now, but you wouldn't need to convert them back into a huge integer to do multiplications or other operations. This should not require much reprogramming effort.

The basic principle of residue number systems is the unique representation of numbers using modular arithmetic. The whole theory surrounding RNS allows you to do your operations on the small residues.

edit: a quick example:

Suppose you represent your large coefficients in an RNS with moduli 11 and 13. Your coefficients would all consist of 2 small integers (<11 and <13) that can be combined to the original (large) integer.

Suppose your polynomial is originally 33x² + 18x + 44. In RNS, the coefficients would respectively be (33 mod 11, 33 mod 13), (18 mod 11, 18 mod 13) and (44 mod 11, 44 mod 13) => (0,7), (7,5) and (0,5).

Multiplying your polynomial with a constant can then be done by multiplying each small coefficient with that constant and do modulo on it.

Say you multiply by 3: your coefficients become (0, 21 mod 13) = (0,8), (21 mod 11, 15 mod 13) = (10,2) and (0 mod 11, 15 mod 13) = (0,2). There has been no need to convert the coefficients back to their large integer form.

To check that our multiplication has worked, we can convert the new coefficients back to their large representation. This requires 'solving' each pair of residues as a modular system (i.e. the Chinese Remainder Theorem), which should not be too hard to implement. For the first coefficient (0,8) we would need to solve x mod 11 = 0 and x mod 13 = 8; you can check that x = 99 is a valid solution (modulo 11*13 = 143).

We then get 99x² + 54x + 132, the correctly multiplied polynomial. Multiplying by another polynomial is similar (but requires you to multiply the residues with each other pairwise). The same goes for addition.
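The example above can be sketched in a few lines of Python (my own illustration, not Origin's code; note that `pow(m, -1, m2)` needs Python 3.8+):

```python
M1, M2 = 11, 13                    # the two RNS moduli from the example

def to_rns(x):
    return (x % M1, x % M2)

def from_rns(pair):
    # Recombine the residues via the Chinese Remainder Theorem.
    r1, r2 = pair
    inv = pow(M1, -1, M2)          # inverse of 11 modulo 13
    return (r1 + M1 * ((r2 - r1) * inv % M2)) % (M1 * M2)

poly = [44, 18, 33]                # 33x^2 + 18x + 44, lowest degree first
# multiply by the constant 3 without ever touching the big integers
scaled = [(3 * r1 % M1, 3 * r2 % M2) for r1, r2 in map(to_rns, poly)]
recovered = [from_rns(p) for p in scaled]
# recovered == [132, 54, 99], i.e. 99x^2 + 54x + 132
```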

For your use case, you could choose your moduli based on the number of coefficients you want or on their size.

answered Nov 06 '22 by Origin


How about directly implementing arbitrary precision integer polynomials as a list of numpy arrays?

Let me explain: say your polynomial is Σ_p A_p X^p. If the large integer A_p can be represented as A_p = Σ_k A_{p,k} 2^(64k), then the k-th numpy array will contain the 64-bit int A_{p,k} at position p.

You could choose dense or sparse arrays according to the structure of your problem.

Implementing addition and scalar operations is just a matter of vectorizing the bignum implementation of the same operations.

Multiplication could be handled as follows: A·B = Σ_{p,k,p',k'} A_{p,k} B_{p',k'} 2^(64(k+k')) X^(p+p'). So a naive implementation with dense arrays would lead to log_64(n)^2 calls to numpy.convolve or scipy.fftconvolve.

The modulo operation should be easy to implement since it is a linear function of the left hand term and the right hand term has small coefficients.

EDIT: here are some more explanations.

Instead of representing the polynomial as a list of arbitrary precision numbers (themselves represented as lists of 64-bit "digits"), transpose the representation so that:

  • your polynomial is represented as a list of arrays
  • the kth array contains the kth "digit" of each coefficient

If only a few of your coefficients are very large then the arrays will have mostly 0s in them so it may be worthwhile using sparse arrays.

Call A_{p,k} the k-th digit of the p-th coefficient.

Note the analogy with large integer representations: where a large integer would be represented as

x = Σ_k x_k 2^(64k)

your polynomial A is represented in the same way as

A = Σ_k A_k 2^(64k), where A_k = Σ_p A_{p,k} X^p

To implement addition, you simply pretend your list of arrays is a list of simple digits and implement addition as usual for large integers (taking care to replace if/then conditionals with numpy.where).
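A hedged sketch of that addition (my interpretation of the answer, using a small 16-bit digit base instead of 2^64 so plain integer arrays cannot overflow):

```python
import numpy as np

BASE = 1 << 16   # digit base for the sketch (the answer suggests 2**64)

def poly_add(A, B):
    # A[k][p] is digit k of coefficient p; both lists have equal length.
    out, carry = [], np.zeros_like(A[0])
    for a, b in zip(A, B):
        s = a + b + carry
        carry = np.where(s >= BASE, 1, 0)   # vectorized "if" for the carry
        out.append(s % BASE)
    if carry.any():
        out.append(carry)
    return out

def coef(A, p):
    # Recover big-integer coefficient p from its digits (for checking).
    return sum(int(d[p]) << (16 * k) for k, d in enumerate(A))

# two coefficients, 40000 and 5; doubling 40000 carries into the next digit
A = [np.array([40000, 5]), np.array([0, 0])]
S = poly_add(A, A)
# coef(S, 0) == 80000 and coef(S, 1) == 10
```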

To implement multiplication, you will find you need to make log_64(n)^2 polynomial multiplications.

Implementing the modulo operation on the coefficients is again a simple matter of translating the modulo operation on a large integer.

To take the modulo by a polynomial with small coefficients, use the linearity of this operation:

A mod (X^r - 1) = (Σ_k A_k 2^(64k)) mod (X^r - 1)

= Σ_k 2^(64k) (A_k mod (X^r - 1))
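The inner reduction A_k mod (X^r - 1) amounts to adding coefficient p into slot p mod r; a minimal pure-Python sketch of this folding step:

```python
def mod_xr_minus_1(coefs, r):
    # x^p == x^(p mod r)  (mod x^r - 1), so fold index p onto p mod r.
    out = [0] * r
    for p, c in enumerate(coefs):
        out[p % r] += c
    return out

# 1 + x + x^2 + x^3  mod  x^3 - 1  ->  2 + x + x^2
folded = mod_xr_minus_1([1, 1, 1, 1], 3)
# folded == [2, 1, 1]
```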

answered Nov 06 '22 by spam_eggs


I found a way to optimize the conversions, though I still hope someone can help me improve them even more and perhaps find some other clever idea.

Basically, what's wrong with those functions is that they exhibit a kind of quadratic memory-allocation behaviour when packing the integer or when unpacking it. (See this post by Guido van Rossum for another example of this kind of behaviour.)

After I realized this, I decided to give divide and conquer a try, and I obtained some results. I simply divide the array in two parts, convert them separately, and eventually join the results (later I'll try an iterative version similar to f5 in van Rossum's post [edit: it doesn't seem to be much faster]).

The modified functions:

def _coefs_to_long(coefs, window):
    """Given a sequence of coefficients *coefs* and the *window* size return a
    long-integer representation of these coefficients.
    """

    length = len(coefs)
    if length < 100:
        res = 0
        adder = 0
        for k in coefs:
            res += k << adder
            adder += window
        return res
    else:
        half_index = length // 2
        big_window = window * half_index
        low = _coefs_to_long(coefs[:half_index], window)
        high = _coefs_to_long(coefs[half_index:], window)
        return low + (high << big_window)


def _long_to_coefs(long_repr, window, n):
    """Given a long-integer representing coefficients of size *window*, return
    the list of coefficients modulo *n*.
    """

    win_length = long_repr.bit_length() // window
    if win_length < 256:
        mask = 2**window - 1
        coefs = [0] * (long_repr.bit_length() // window + 1)
        for i in xrange(len(coefs)):
            coefs[i] = (long_repr & mask) % n
            long_repr >>= window

        # assure that the returned list is never empty, and hasn't got an extra 0.
        if not coefs:
            coefs.append(0)
        elif not coefs[-1] and len(coefs) > 1:
            coefs.pop()

        return coefs
    else:
        half_len = win_length // 2
        low = long_repr & (((2**window) ** half_len) - 1)
        high = long_repr >> (window * half_len)
        return _long_to_coefs(low, window, n) + _long_to_coefs(high, window, n) 

And the results:

>>> import timeit
>>> def coefs_to_long2(coefs, window):
...     if len(coefs) < 100:
...         return coefs_to_long(coefs, window)
...     else:
...         half_index = len(coefs) // 2
...         big_window = window * half_index
...         least = coefs_to_long2(coefs[:half_index], window) 
...         up = coefs_to_long2(coefs[half_index:], window)
...         return least + (up << big_window)
... 
>>> coefs = [1, 2, 3, 1024, 256] * 567
>>> # original function
>>> timeit.timeit('coefs_to_long(coefs, 11)', 'from __main__ import coefs_to_long, coefs',
...               number=1000)/1000
0.003283214092254639
>>> timeit.timeit('coefs_to_long2(coefs, 11)', 'from __main__ import coefs_to_long2, coefs',
...               number=1000)/1000
0.0007998988628387451
>>> 0.003283214092254639 / _
4.104536516782767
>>> coefs = [2**64, 2**31, 10, 107] * 567
>>> timeit.timeit('coefs_to_long(coefs, 66)', 'from __main__ import coefs_to_long, coefs',
...               number=1000)/1000
0.009775240898132325
>>> 
>>> timeit.timeit('coefs_to_long2(coefs, 66)', 'from __main__ import coefs_to_long2, coefs',
...               number=1000)/1000
0.0012255229949951173
>>> 
>>> 0.009775240898132325 / _
7.97638309362875

As you can see, this version gives quite a speed-up to the conversion, from 4 to 8 times faster (and the bigger the input, the bigger the speed-up). A similar result is obtained with the second function:

>>> import timeit
>>> def long_to_coefs2(long_repr, window, n):
...     win_length = long_repr.bit_length() // window
...     if win_length < 256:
...         return long_to_coefs(long_repr, window, n)
...     else:
...         half_len = win_length // 2
...         least = long_repr & (((2**window) ** half_len) - 1)
...         up = long_repr >> (window * half_len)
...         return long_to_coefs2(least, window, n) + long_to_coefs2(up, window, n)
... 
>>> long_repr = coefs_to_long([1,2,3,1024,512, 0, 3] * 456, 13)
>>> # original function
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.005114212036132813
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.001701267957687378
>>> 0.005114212036132813 / _
3.006117885794327
>>> long_repr = coefs_to_long([1,2**33,3**17,1024,512, 0, 3] * 456, 40)
>>> timeit.timeit('long_to_coefs(long_repr, 13, 1025)', 'from __main__ import long_to_coefs, long_repr', number=1000)/1000
0.04037192392349243
>>> timeit.timeit('long_to_coefs2(long_repr, 13, 1025)', 'from __main__ import long_to_coefs2, long_repr', number=1000)/1000
0.005722791910171509
>>> 0.04037192392349243 / _
7.0545853417694

I tried to avoid more memory reallocation in the first function by passing around start and end indexes and avoiding slicing, but it turns out that this slows the function down quite a lot for small inputs and is just a bit slower for real-case inputs. Maybe I could try to mix the two approaches, though I don't think I'd obtain much better results.


I've edited my question over time, so some people gave me advice aimed at something different from what I eventually asked. I think it's important to clarify the results pointed out by the various sources in the comments and the answers, so that they can be useful to other people looking to implement fast polynomials and/or the AKS test.

  • As J.F. Sebastian pointed out, the AKS algorithm has received many improvements, so trying to implement an old version of the algorithm will always result in a very slow program. This does not exclude the fact that, if you already have a good implementation of AKS, you can speed it up by improving the polynomials.
  • If you are interested in coefficients modulo a small n (read: a word-size number) and you don't mind external dependencies, then go for numpy and use numpy.convolve or scipy.fftconvolve for the multiplication. It will be much faster than anything you can write. Unfortunately, if n is not word-size you can't use scipy.fftconvolve at all, and numpy.convolve also becomes slow as hell.
  • If you don't have to do modulo operations (on the coefficients and on the polynomial), then using ZBDDs is probably a good idea (as pointed out by harold), even though I cannot promise spectacular results [I do think it's really interesting and you ought to read Minato's paper].
  • If you don't have to do modulo operations on the coefficients, then using an RNS representation, as stated by Origin, is probably a good idea. You can then combine multiple numpy arrays to operate efficiently.
  • If you want a pure-Python implementation of polynomials with coefficients modulo a big n, then my solution seems to be the fastest, though I did not try to implement FFT multiplication between arrays of coefficients in Python (which may be faster).
answered Nov 06 '22 by Bakuriu