 

Converting bsxfun with @times to numpy

This is the code I have in Octave:

sum(bsxfun(@times, X*Y, X), 2)

The bsxfun part of the code produces element-wise multiplication so I thought that numpy.multiply(X*Y, X) would do the trick but I got an exception. When I did a bit of research I found that element-wise multiplication won't work on Python arrays (specifically if X and Y are of type "numpy.ndarray"). So I was wondering if anyone can explain this a bit more -- i.e. would type casting to a different type of object work? The Octave code works so I know I don't have a linear algebra mistake. I'm assuming that bsxfun and numpy.multiply are not actually equivalent but I'm not sure why so any explanations would be great.

I was able to find a website that gives Octave to Matlab function conversions, but it didn't seem to help in my case.

asked May 08 '14 by eTothEipiPlus1

People also ask

How do you multiply a 1D and 2D array in Python?

To find the matrix product of a 2-D array and a 1-D array, use the numpy.matmul() method. If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions; after the matrix multiplication the appended 1 is removed.
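For illustration, a minimal numpy sketch of that promotion (the array contents here are arbitrary):

import numpy as np

A = np.arange(6).reshape(2, 3)   # 2-D array, shape (2, 3)
v = np.array([1, 2, 3])          # 1-D array, shape (3,)

# matmul treats v as a column vector, multiplies, then drops the appended dimension
print(np.matmul(A, v))           # [ 8 26], shape (2,)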

Is Bsxfun faster than for loop?

A for loop is very slow. Vectorization is fastest for a small first dimension, and otherwise about as fast as bsxfun. bsxfun is fastest when one needs to subset a medium-sized array (n x m > 100 x 1000).

Is Bsxfun faster?

There are two important reasons bsxfun is faster: (1) the calculation happens in compiled code, which means that the actual replication of the array never happens, and (2) bsxfun is one of the multithreaded MATLAB functions.

What does bsxfun mean in MATLAB?

The bsxfun function expands the vectors into matrices of the same size, which is an efficient way to evaluate fun for many combinations of the inputs.


2 Answers

bsxfun in Matlab stands for binary singleton expansion; in numpy the equivalent mechanism is called broadcasting and it should happen automatically. The solution will depend on the dimensions of your X, i.e. whether it is a row or a column vector, but this answer shows one way to do it:

How to multiply numpy 2D array with numpy 1D array?

I think the issue here is that broadcasting requires one of the dimensions to be 1 and that, unlike Matlab, numpy distinguishes between a 1-dimensional 2-element array and a 2-dimensional 2-element array, i.e. between an array of shape (2,) and one of shape (2,1); you need the latter for broadcasting to happen.
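A small sketch of that shape distinction, with made-up values (the key step is the reshape, or equivalently v[:, np.newaxis]):

import numpy as np

A = np.ones((2, 3))
v = np.array([10.0, 20.0])       # shape (2,) -- A * v raises a broadcasting error

col = v.reshape(2, 1)            # shape (2, 1)
print(A * col)                   # broadcasts fine, result has shape (2, 3)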

answered Sep 29 '22 by Dan


For those who don't know Numpy, I think it's worth pointing out that the equivalent of Octave's (and Matlab's) * operator (matrix multiplication) is numpy.dot (and, debatably, numpy.outer). Numpy's * operator is similar to bsxfun(@times,...) in Octave, which is itself a generalization of .*.
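As a quick sketch of that correspondence (arrays chosen arbitrarily):

import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[5.0, 6.0], [7.0, 8.0]])

matrix_product = np.dot(X, Y)    # Octave/Matlab:  X * Y
elementwise    = X * Y           # Octave/Matlab:  X .* Y, i.e. bsxfun(@times, X, Y)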

In Octave, when applying bsxfun, there are implicit singleton dimensions to the right of the "true" size of the operands; that is, an n1 x n2 x n3 array can be considered as n1 x n2 x n3 x 1 x 1 x 1 x.... In Numpy, the implicit singleton dimensions are to the left; so an m1 x m2 x m3 array can be considered as ... x 1 x 1 x m1 x m2 x m3. This matters when considering operand sizes: in Octave, bsxfun(@times,a,b) will work if a is 2 x 3 x 4 and b is 2 x 3. In Numpy one could not multiply two such arrays, but one could multiply a 2 x 3 x 4 array and a 3 x 4 array.
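A short sketch of that left-versus-right padding rule, using throwaway arrays of ones:

import numpy as np

a = np.ones((2, 3, 4))

b = a * np.ones((3, 4))          # works: (3, 4) is implicitly treated as (1, 3, 4)
try:
    c = a * np.ones((2, 3))      # fails: trailing dimensions 4 and 3 do not match
except ValueError as e:
    print(e)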

Finally, bsxfun(@times, X*Y, X) in Octave will probably look something like numpy.dot(X,Y) * X. There are still some gotchas: for instance, if you're expecting an outer product (that is, in Octave X is a column vector, Y a row vector), you could look at using numpy.outer instead, or be careful about the shape of X and Y.
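Assuming X and Y have compatible shapes for the matrix product (the sizes below are made up just so the expression is well defined), the full original expression might translate roughly as follows; note that Octave's dimension 2 corresponds to numpy's axis 1:

import numpy as np

X = np.random.rand(5, 3)
Y = np.random.rand(3, 3)

# Octave: sum(bsxfun(@times, X*Y, X), 2)
result = (np.dot(X, Y) * X).sum(axis=1)
print(result.shape)              # (5,)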

answered Sep 29 '22 by Rory Yorke