I have a 3D image with dimensions rows x cols x deps. For every voxel in the image, I have computed a 3x3 real symmetric matrix. They are stored in the array D, which therefore has shape (rows, cols, deps, 6).
D stores the 6 unique elements of the 3x3 symmetric matrix for every voxel in my image. I need to find the Moore-Penrose pseudo-inverse of all rows*cols*deps matrices simultaneously, in vectorized code (looping through every image voxel and inverting one at a time is far too slow in Python).
Some of these 3x3 symmetric matrices are non-singular, and I can find their inverses, in vectorized code, using the analytical formula for the true inverse of a non-singular 3x3 symmetric matrix, and I've done that.
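(For concreteness, that vectorized analytical inverse might look something like the sketch below. It assumes the six elements are stored as the upper triangle in row-major order, i.e. [a, b, c, d, e, f] for the matrix [[a, b, c], [b, d, e], [c, e, f]]; sym3_inv is just an illustrative name, not anything from my actual code.)

import numpy as np

def sym3_inv(D):
    # D[..., :] = [a, b, c, d, e, f] for the symmetric matrix
    # [[a, b, c],
    #  [b, d, e],
    #  [c, e, f]]
    a, b, c, d, e, f = (D[..., i] for i in range(6))
    # The six unique cofactors and the determinant
    A = d * f - e * e
    B = c * e - b * f
    C = b * e - c * d
    E = a * f - c * c
    F = b * c - a * e
    G = a * d - b * b
    det = a * A + b * B + c * C
    # inverse = adjugate / det, returned in the same 6-element layout
    return np.stack([A, B, C, E, F, G], axis=-1) / det[..., None]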
However, for those matrices that ARE singular (and there are sure to be some) I need the Moore-Penrose pseudo-inverse. I could derive an analytical formula for the pseudo-inverse of a real, singular, symmetric 3x3 matrix, but it is a really nasty, lengthy formula, and would therefore involve a very large amount of (element-wise) arithmetic and quite a bit of confusing-looking code.
Hence, I would like to know if there is a way to simultaneously find the MP pseudo inverse for all these matrices at once numerically. Is there a way to do this?
Gratefully, GF
NumPy 1.8 included linear algebra gufuncs, which do exactly what you are after. While np.linalg.pinv is not gufunc-ed, np.linalg.svd is, and behind the scenes that is the function that gets called. So you can define your own gu_pinv function, based on the source code of the original function, as follows:
import numpy as np

def gu_pinv(a, rcond=1e-15):
    a = np.asarray(a)
    # Axes permutation that swaps the last two axes, i.e. transposes each matrix
    swap = np.arange(a.ndim)
    swap[[-2, -1]] = swap[[-1, -2]]
    # Stacked SVD: one decomposition per matrix in the stack
    u, s, v = np.linalg.svd(a)
    # Invert only singular values above the relative cutoff; zero out the rest
    cutoff = np.maximum.reduce(s, axis=-1, keepdims=True) * rcond
    mask = s > cutoff
    s[mask] = 1. / s[mask]
    s[~mask] = 0
    # pinv = v.T @ diag(1/s) @ u.T, computed for every matrix at once
    return np.einsum('...uv,...vw->...uw',
                     np.transpose(v, swap) * s[..., None, :],
                     np.transpose(u, swap))
And you can now do things like:
a = np.random.rand(50, 40, 30, 6)
b = np.empty(a.shape[:-1] + (3, 3), dtype=a.dtype)
# Expand the unique items into a full symmetrical matrix
b[..., 0, :] = a[..., :3]
b[..., 1:, 0] = a[..., 1:3]
b[..., 1, 1:] = a[..., 3:5]
b[..., 2, 1:] = a[..., 4:]
# make matrix at [1, 2, 3] singular
b[1, 2, 3, 2] = b[1, 2, 3, 0] + b[1, 2, 3, 1]
# Find all the pseudo-inverses
pi = gu_pinv(b)
And of course the results are correct, both for singular and non-singular matrices:
>>> np.allclose(pi[0, 0, 0], np.linalg.pinv(b[0, 0, 0]))
True
>>> np.allclose(pi[1, 2, 3], np.linalg.pinv(b[1, 2, 3]))
True
And for this example, with 50 * 40 * 30 = 60,000 pseudo-inverses calculated:
In [2]: %timeit pi = gu_pinv(b)
1 loops, best of 3: 422 ms per loop
Which is really not that bad, although it is noticeably (4x) slower than simply calling np.linalg.inv, but np.linalg.inv of course fails to properly handle the singular matrices:
In [8]: %timeit np.linalg.inv(b)
10 loops, best of 3: 98.8 ms per loop
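If only a small fraction of the matrices are singular, one possible compromise (a sketch, not something from the original answer) is to invert the well-conditioned matrices with np.linalg.inv and route only the (near-)singular ones through gu_pinv; np.linalg.det also operates on stacked arrays in NumPy 1.8, and the 1e-12 relative tolerance below is an arbitrary choice:

det = np.linalg.det(b)                                # stacked determinants, shape (50, 40, 30)
singular = np.abs(det) < 1e-12 * np.abs(det).max()    # arbitrary relative tolerance
pi = np.empty_like(b)
pi[~singular] = np.linalg.inv(b[~singular])           # fast path for the regular matrices
pi[singular] = gu_pinv(b[singular])                   # SVD-based pseudo-inverse only where needed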
EDIT: See @Jaime's answer. Only the discussion in the comments to this answer is useful now, and only for the specific problem at hand.
You can do this matrix by matrix, using scipy, which provides pinv to calculate the Moore-Penrose pseudo-inverse. An example follows:
from scipy.linalg import det, pinv
from numpy.random import randint

# generate a random singular matrix M first
while True:
    M = randint(0, 10, 9).reshape(3, 3)
    if det(M) == 0:
        break
M = M.astype(float)

# this is the method you need
MPpseudoinverse = pinv(M)
This does not exploit the fact that the matrix is symmetric, though. You may also want to try the version of pinv exposed by NumPy (np.linalg.pinv), which is supposedly faster, and different. See this post.
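If exploiting the symmetry matters, scipy.linalg also provides pinvh, which computes the pseudo-inverse of a symmetric (Hermitian) matrix from an eigendecomposition rather than an SVD; it still works one matrix at a time, though. A minimal sketch, reusing the M from above:

from scipy.linalg import pinvh

# pinvh assumes M is symmetric and works from an eigendecomposition,
# which is typically cheaper than the SVD used by pinv
MPpseudoinverse = pinvh(M)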