Efficient way to compute the Vandermonde matrix

I'm computing the Vandermonde matrix of a fairly large 1D array. The natural, clean way to do this is np.vander(). However, I found that it is roughly 2.5x slower than a list-comprehension-based approach.

In [43]: x = np.arange(5000)
In [44]: N = 4

In [45]: %timeit np.vander(x, N, increasing=True)
155 µs ± 205 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

# one of the listed approaches from the documentation
In [46]: %timeit np.flip(np.column_stack([x**(N-1-i) for i in range(N)]), axis=1)
65.3 µs ± 235 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [47]: np.all(np.vander(x, N, increasing=True) == np.flip(np.column_stack([x**(N-1-i) for i in range(N)]), axis=1))
Out[47]: True

I'm trying to understand where the bottleneck is, and why the native np.vander() implementation is ~2.5x slower.

Efficiency matters for my implementation, so even faster alternatives are also welcome!

Asked by kmario23, Jan 14 '18




2 Answers

Here are some more methods, some of which are quite a bit faster (on my computer) than what has been posted so far.

The most important observation, I think, is that it really depends a lot on how many degrees you want. Exponentiation (which I believe is special-cased for small integer exponents) only makes sense for small exponent ranges; the more exponents there are, the better multiplication-based approaches fare.
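As a rough sketch of the two strategies (the names cols_pow and cols_mul are for illustration only): exponentiation computes every column independently, while a multiplication chain reuses the previous column.

import numpy as np

x = np.arange(5000)
N = 4

# exponentiation: one independent pow per column
cols_pow = [x**k for k in range(N)]

# repeated multiplication: each column is the previous column times x
cols_mul = [np.ones_like(x)]
for _ in range(N - 1):
    cols_mul.append(cols_mul[-1] * x)

assert all((p == m).all() for p, m in zip(cols_pow, cols_mul))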

I'd like to highlight a multiply.accumulate-based method (ma), which is similar to numpy's builtin approach but faster (and not because I skimped on checks: nnc, numpy-no-checks, demonstrates this). For all but the smallest exponent ranges it is actually the fastest for me.

For reasons I do not understand, the numpy implementation does three things that are, to the best of my knowledge, slow and unnecessary: (1) it makes quite a few copies of the base vector; (2) it makes those copies non-contiguous; (3) it does the accumulation in place, which I believe forces buffering.

Another thing worth mentioning: the fastest method for small ranges of ints (out_e_1, essentially a manual version of ma) is slowed down by a factor of more than two by the simple precaution of promoting to a larger dtype (safe_e_1, arguably a bit of a misnomer).
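To illustrate what that promotion buys in isolation (a minimal sketch, dtypes only, no timing):

import numpy as np

a = np.arange(5000, dtype=np.int32)
print((a * a).dtype)                    # int32: products may overflow silently
print((a.astype(np.int64) * a).dtype)   # int64: the safe_* variants pay for this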

The broadcasting-based methods are called bc_*, where * indicates the broadcast axis (b for base, e for exp); 'cheat' means the result is non-contiguous.
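A quick, self-contained way to check what 'cheat' flags (reusing the question's x and N):

import numpy as np

x = np.arange(5000)
N = 4

bc_e_cheat_result = (x**np.arange(N)[:, None]).T       # transposed view only
bc_e_result = np.ascontiguousarray(bc_e_cheat_result)  # forces a contiguous copy

print(bc_e_cheat_result.flags.c_contiguous)  # False -> marked 'cheat' in the timings
print(bc_e_result.flags.c_contiguous)        # True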

Timings (best of 3):

rep=100 n_b=5000 n_e=4 b_tp=<class 'numpy.int32'> e_tp=<class 'numpy.int32'>
vander                0.16699657 ms
bc_b                  0.09595204 ms
bc_e                  0.07959786 ms
ma                    0.10755240 ms
nnc                   0.16459018 ms
out_e_1               0.02037535 ms
out_e_2               0.02656622 ms
safe_e_1              0.04652272 ms
safe_e_2              0.04081079 ms
cheat bc_e_cheat            0.04668466 ms
rep=100 n_b=5000 n_e=8 b_tp=<class 'numpy.int32'> e_tp=<class 'numpy.int32'>
vander                0.25086462 ms
bc_b             apparently failed
bc_e             apparently failed
ma                    0.15843041 ms
nnc                   0.24713077 ms
out_e_1          apparently failed
out_e_2          apparently failed
safe_e_1              0.15970622 ms
safe_e_2              0.19672418 ms
bc_e_cheat       apparently failed
rep=100 n_b=5000 n_e=4 b_tp=<class 'float'> e_tp=<class 'numpy.int32'>
vander                0.16225773 ms
bc_b                  0.53315020 ms
bc_e                  0.56200830 ms
ma                    0.07626799 ms
nnc                   0.16059748 ms
out_e_1               0.03653416 ms
out_e_2               0.04043702 ms
safe_e_1              0.04060494 ms
safe_e_2              0.04104209 ms
cheat bc_e_cheat            0.52966076 ms
rep=100 n_b=5000 n_e=8 b_tp=<class 'float'> e_tp=<class 'numpy.int32'>
vander                0.24542852 ms
bc_b                  2.03353578 ms
bc_e                  2.04281270 ms
ma                    0.11075758 ms
nnc                   0.24212880 ms
out_e_1               0.14809043 ms
out_e_2               0.19261359 ms
safe_e_1              0.15206112 ms
safe_e_2              0.19308420 ms
cheat bc_e_cheat            1.99176601 ms

Code:

import numpy as np
import types
from timeit import repeat

# dtype promotion map used by the 'safe', 'ma' and 'nnc' methods
prom={np.dtype(np.int32): np.dtype(np.int64), np.dtype(float): np.dtype(float)}

def RI(k, N, dt, top=100):
    # k rows of N random integers in [0, top)
    return np.random.randint(0, top if top else N, (k, N)).astype(dt)

def RA(k, N, dt, top=None):
    # k identical rows of arange(N) % top (used for the exponents)
    return np.add.outer(np.zeros((k,), int), np.arange(N)%(top if top else N)).astype(dt)

def RU(k, N, dt, top=100):
    # k rows of N random uniforms scaled to [0, top)
    return (np.random.random((k, N))*(top if top else N)).astype(dt)

def data(k, N_b, N_e, dt_b, dt_e, b_fun=RI, e_fun=RA):
    # k independent (base, exponent) pairs, so each timing rep pops fresh data
    b = list(b_fun(k, N_b, dt_b))
    e = list(e_fun(k, N_e, dt_e))
    return b, e

def f_vander(b, e):
    return np.vander(b, len(e), increasing=True)

def f_bc_b(b, e):
    return b[:, None]**e

def f_bc_e(b, e):
    return np.ascontiguousarray((b**e[:, None]).T)

def f_ma(b, e):
    # accumulate a broadcast view of b straight into the transposed output
    out = np.empty((len(b), len(e)), prom[b.dtype])
    out[:, 0] = 1
    np.multiply.accumulate(np.broadcast_to(b, (len(e)-1, len(b))), axis=0, out=out[:, 1:].T)
    return out

def f_nnc(b, e):
    # numpy's own strategy minus the input checks: in-place accumulate on a slice
    out = np.empty((len(b), len(e)), prom[b.dtype])
    out[:, 0] = 1
    out[:, 1:] = b[:, None]
    np.multiply.accumulate(out[:, 1:], out=out[:, 1:], axis=1)
    return out

def f_out_e_1(b, e):
    out = np.empty((len(b), len(e)), b.dtype)
    out[:, 0] = 1
    out[:, 1] = b
    out[:, 2] = c = b*b
    for i in range(3, len(e)):
        c*=b
        out[:, i] = c
    return out

def f_out_e_2(b, e):
    out = np.empty((len(b), len(e)), b.dtype)
    out[:, 0] = 1
    out[:, 1] = b
    out[:, 2] = b*b
    for i in range(3, len(e)):
        out[:, i] = out[:, i-1] * b
    return out

def f_safe_e_1(b, e):
    out = np.empty((len(b), len(e)), prom[b.dtype])
    out[:, 0] = 1
    out[:, 1] = b
    out[:, 2] = c = (b*b).astype(prom[b.dtype])
    for i in range(3, len(e)):
        c*=b
        out[:, i] = c
    return out

def f_safe_e_2(b, e):
    out = np.empty((len(b), len(e)), prom[b.dtype])
    out[:, 0] = 1
    out[:, 1] = b
    out[:, 2] = b*b
    for i in range(3, len(e)):
        out[:, i] = out[:, i-1] * b
    return out

def f_bc_e_cheat(b, e):
    # like f_bc_e but skips the contiguous copy, returning a transposed view
    return (b**e[:, None]).T

for params in [(100, 5000, 4, np.int32, np.int32),
               (100, 5000, 8, np.int32, np.int32),
               (100, 5000, 4, float, np.int32),
               (100, 5000, 8, float, np.int32)]:
    k = params[0]
    dat = data(*params)
    ref = f_vander(dat[0][0], dat[1][0])
    print('rep={} n_b={} n_e={} b_tp={} e_tp={}'.format(*params))
    for name, func in list(globals().items()):
        if not name.startswith('f_') or not isinstance(func, types.FunctionType):
            continue
        try:
            assert np.allclose(ref, func(dat[0][0], dat[1][0]))
            if not func(dat[0][0], dat[1][0]).flags.c_contiguous:
                print('cheat', end=' ')
            print("{:16s}{:16.8f} ms".format(name[2:], np.min(repeat(
                'f(b.pop(), e.pop())', setup='b, e = data(*p)', globals={'f':func, 'data':data, 'p':params}, number=k)) * 1000 / k))
        except Exception:
            print("{:16s} apparently failed".format(name[2:]))
Answered by Paul Panzer


How about broadcasted exponentiation?

%timeit (x ** np.arange(N)[:, None]).T
43 µs ± 348 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

Sanity check -

np.all((x ** np.arange(N)[:, None]).T == np.vander(x, N, increasing=True))
True

The caveat here is that this speedup is possible only if your input array x has an integer dtype. As @Warren Weckesser pointed out in a comment, broadcasted exponentiation slows down for floating-point arrays.
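If the input dtype varies at run time, one option is a small wrapper that dispatches on it. This is a hypothetical helper (the name fast_vander and the plain np.vander fallback are assumptions, not part of either answer):

import numpy as np

def fast_vander(x, N):
    # hypothetical: broadcasted exponentiation for integer input,
    # plain np.vander for everything else
    if np.issubdtype(x.dtype, np.integer):
        return (x ** np.arange(N)[:, None]).T
    return np.vander(x, N, increasing=True)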


As for why np.vander is slow, take a look at the source code -

x = asarray(x)
if x.ndim != 1:
    raise ValueError("x must be a one-dimensional array or sequence.")
if N is None:
    N = len(x)

v = empty((len(x), N), dtype=promote_types(x.dtype, int))
tmp = v[:, ::-1] if not increasing else v

if N > 0:
    tmp[:, 0] = 1
if N > 1:
    tmp[:, 1:] = x[:, None]
    multiply.accumulate(tmp[:, 1:], out=tmp[:, 1:], axis=1)

return v

The function has to cater to a lot more use cases besides yours, so it uses a more generalized method of computation that is reliable but slower (I'm specifically pointing to multiply.accumulate).
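To see what that multiply.accumulate step computes, here is a tiny standalone example (values chosen arbitrarily): each row starts out filled with x, and the running product along the row turns column j into x**(j+1), exactly the slice np.vander writes into tmp[:, 1:].

import numpy as np

tmp = np.tile(np.array([2., 3.])[:, None], (1, 3))  # each row is [x, x, x]
np.multiply.accumulate(tmp, out=tmp, axis=1)        # running products along each row
print(tmp)
# [[ 2.  4.  8.]
#  [ 3.  9. 27.]]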


As a matter of interest, I found another way of computing the Vandermonde matrix, ending up with this:

%timeit x[:, None] ** np.arange(N)
150 µs ± 230 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

It computes the same thing, but is much slower. The reason is that the operations are broadcast, but inefficiently.

On the flip side, for float arrays, this actually ends up performing the best.
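One way to observe the difference is the orientation in which the power ufunc fills its result; this is only a look at the intermediate shapes and memory layout, not a full profile:

import numpy as np

x = np.arange(5000)
N = 4

fast = (x ** np.arange(N)[:, None]).T  # pow fills (N, len(x)): long rows, one exponent per row
slow = x[:, None] ** np.arange(N)      # pow fills (len(x), N): short rows, mixed exponents

print(fast.base is not None)       # True: just a transposed view of the (N, len(x)) result
print(slow.flags.c_contiguous)     # True: materialized directly in C order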

Answered by cs95