To find a matrix or vector norm, use the function numpy.linalg.norm() from the NumPy library. It can return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of its ord parameter.
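For illustration, a small sketch of how ord selects a norm (the arrays a and v here are arbitrary examples):

import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
np.linalg.norm(a)               # Frobenius norm, the default for matrices
np.linalg.norm(a, ord=1)        # max column sum -> 6.0
np.linalg.norm(a, ord=np.inf)   # max row sum -> 7.0

v = np.array([3.0, 4.0])
np.linalg.norm(v)               # vector L2-norm -> 5.0
np.linalg.norm(v, ord=1)        # vector L1-norm -> 7.0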
Given an M × N matrix, the task is to find its Frobenius norm. The Frobenius norm of a matrix is defined as the square root of the sum of the squares of its elements. Approach: compute the sum of the squares of the matrix elements, then take the square root of that value.
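A minimal sketch of that approach (the array a is just an example; NumPy's built-in 'fro' norm is used only as a cross-check):

import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# square root of the sum of squared elements
frobenius = np.sqrt(np.sum(a ** 2))
print(frobenius)                                        # 5.477225575051661
print(np.isclose(frobenius, np.linalg.norm(a, 'fro')))  # True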
Note that, as perimosocordiae shows, as of NumPy version 1.9, np.linalg.norm(x, axis=1) is the fastest way to compute the L2-norm.
If you are computing an L2-norm, you could compute it directly (using the axis=-1 argument to sum along rows):
np.sum(np.abs(x)**2,axis=-1)**(1./2)
Lp-norms can, of course, be computed similarly.
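For instance, a sketch of the same pattern for a general p (p = 3 is an arbitrary choice here, and x is the same 2D array as above):

p = 3
np.sum(np.abs(x)**p, axis=-1)**(1. / p)   # row-wise Lp-norm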
The direct sum is considerably faster than np.apply_along_axis, though perhaps not as convenient:
In [48]: %timeit np.apply_along_axis(np.linalg.norm, 1, x)
1000 loops, best of 3: 208 us per loop
In [49]: %timeit np.sum(np.abs(x)**2,axis=-1)**(1./2)
100000 loops, best of 3: 18.3 us per loop
Other ord forms of norm can be computed directly too (with similar speedups):
In [55]: %timeit np.apply_along_axis(lambda row:np.linalg.norm(row,ord=1), 1, x)
1000 loops, best of 3: 203 us per loop
In [54]: %timeit np.sum(abs(x), axis=-1)
100000 loops, best of 3: 10.9 us per loop
Resurrecting an old question due to a numpy update. As of the 1.9 release, numpy.linalg.norm now accepts an axis argument. [code, documentation]
This is the new fastest method in town:
In [10]: x = np.random.random((500,500))
In [11]: %timeit np.apply_along_axis(np.linalg.norm, 1, x)
10 loops, best of 3: 21 ms per loop
In [12]: %timeit np.sum(np.abs(x)**2,axis=-1)**(1./2)
100 loops, best of 3: 2.6 ms per loop
In [13]: %timeit np.linalg.norm(x, axis=1)
1000 loops, best of 3: 1.4 ms per loop
And to prove it's calculating the same thing:
In [14]: np.allclose(np.linalg.norm(x, axis=1), np.sum(np.abs(x)**2,axis=-1)**(1./2))
Out[14]: True
Much faster than the accepted answer is NumPy's einsum:
numpy.sqrt(numpy.einsum('ij,ij->i', a, a))
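The subscripts 'ij,ij->i' multiply a with itself element-wise and sum over j, i.e. a row-wise dot product, so no intermediate squared array is materialized (for real-valued a this is exactly the squared L2-norm of each row). A quick sanity check, assuming numpy is imported and a is a 2D float array:

a = numpy.random.random((500, 500))
numpy.allclose(numpy.sqrt(numpy.einsum('ij,ij->i', a, a)),
               numpy.linalg.norm(a, axis=1))   # True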
[Benchmark plot omitted: runtime vs. len(a) for each kernel; note the log scale.]
Code to reproduce the plot:
import numpy
import perfplot

def sum_sqrt(a):
    # row-wise L2-norm via an explicit sum of squares
    return numpy.sqrt(numpy.sum(numpy.abs(a) ** 2, axis=-1))

def apply_norm_along_axis(a):
    # row-wise L2-norm via apply_along_axis (slow)
    return numpy.apply_along_axis(numpy.linalg.norm, 1, a)

def norm_axis(a):
    # row-wise L2-norm via the axis argument (NumPy >= 1.9)
    return numpy.linalg.norm(a, axis=1)

def einsum_sqrt(a):
    # row-wise L2-norm via an einsum dot product
    return numpy.sqrt(numpy.einsum("ij,ij->i", a, a))

b = perfplot.bench(
    setup=lambda n: numpy.random.rand(n, 3),
    kernels=[sum_sqrt, apply_norm_along_axis, norm_axis, einsum_sqrt],
    n_range=[2 ** k for k in range(20)],
    xlabel="len(a)",
)
b.save("out.png")
Try the following:
In [16]: numpy.apply_along_axis(numpy.linalg.norm, 1, a)
Out[16]: array([ 5.38516481, 1.41421356, 5.38516481])
where a is your 2D array.
The above computes the L2 norm. For a different norm, you could use something like:
In [22]: numpy.apply_along_axis(lambda row:numpy.linalg.norm(row,ord=1), 1, a)
Out[22]: array([9, 2, 9])