The numpy.dot() function computes the dot product of two vectors, and the numpy.sqrt() function computes the square root of a number. To find the magnitude of a vector, we can take the dot product of the vector with itself and then take the square root of the result.
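A minimal sketch of that approach (the vector values here are just illustrative):
import numpy as np
v = np.array([3.0, 4.0])     # example vector
mag = np.sqrt(np.dot(v, v))  # square root of the vector's dot product with itself
print(mag)                   # 5.0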
Python's built-in abs() function calculates the magnitude of a complex number.
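For example:
print(abs(3 + 4j))  # 5.0, i.e. sqrt(3**2 + 4**2)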
The magnitude of a vector v, denoted |v|, is its length: the distance between the vector's initial point and its endpoint. For a vector with components (v1, ..., vn), |v| = sqrt(v1^2 + ... + vn^2); for example, |(3, 4)| = sqrt(9 + 16) = 5.
To find a matrix or vector norm, we use the numpy.linalg.norm() function from the NumPy library. Depending on the value of its parameters, this function returns one of several matrix norms or one of an infinite number of vector norms.
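As a quick sketch of the matrix case (the matrix values are made up for illustration):
import numpy as np
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.norm(m))         # Frobenius norm, the default for matrices: ~5.477
print(np.linalg.norm(m, ord=1))  # max absolute column sum: 6.0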
The function you're after is numpy.linalg.norm. (I reckon it should be in base numpy as a property of an array -- say x.norm() -- but oh well).
import numpy as np
x = np.array([1, 2, 3, 4, 5])
np.linalg.norm(x)  # sqrt(1 + 4 + 9 + 16 + 25) = sqrt(55) ≈ 7.416
You can also feed in an optional ord for the nth-order norm you want. Say you wanted the 1-norm:
np.linalg.norm(x, ord=1)  # 1 + 2 + 3 + 4 + 5 = 15.0
And so on.
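For example, ord=np.inf gives the infinity norm, and ord=2 is the Euclidean default:
np.linalg.norm(x, ord=np.inf)  # 5.0, the largest absolute entry
np.linalg.norm(x, ord=2)       # same as the default Euclidean norm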
If you are worried at all about speed, you should instead use:
mag = np.sqrt(x.dot(x))
Here are some benchmarks:
>>> import timeit
>>> timeit.timeit('np.linalg.norm(x)', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0450878
>>> timeit.timeit('np.sqrt(x.dot(x))', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0181372
EDIT: The real speed improvement comes when you have to take the norm of many vectors. Using pure numpy functions doesn't require any for loops. For example:
In [1]: import numpy as np
In [2]: a = np.arange(1200.0).reshape((-1,3))
In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 4.23 ms per loop
In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 18.9 us per loop
In [5]: np.allclose([np.linalg.norm(x) for x in a],np.sqrt((a*a).sum(axis=1)))
Out[5]: True
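Note that np.linalg.norm itself accepts an axis argument (NumPy 1.8+), which also avoids the Python loop; continuing the session above:
In [6]: np.allclose(np.linalg.norm(a, axis=1), np.sqrt((a*a).sum(axis=1)))
Out[6]: True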
Yet another alternative is to use the einsum function in numpy, for either arrays:
In [1]: import numpy as np
In [2]: a = np.arange(1200.0).reshape((-1,3))
In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 3.86 ms per loop
In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 15.6 µs per loop
In [5]: %timeit np.sqrt(np.einsum('ij,ij->i',a,a))
100000 loops, best of 3: 8.71 µs per loop
or vectors:
In [5]: a = np.arange(100000)
In [6]: %timeit np.sqrt(a.dot(a))
10000 loops, best of 3: 80.8 µs per loop
In [7]: %timeit np.sqrt(np.einsum('i,i', a, a))
10000 loops, best of 3: 60.6 µs per loop
There does, however, seem to be some overhead associated with calling it that may make it slower with small inputs:
In [2]: a = np.arange(100)
In [3]: %timeit np.sqrt(a.dot(a))
100000 loops, best of 3: 3.73 µs per loop
In [4]: %timeit np.sqrt(np.einsum('i,i', a, a))
100000 loops, best of 3: 4.68 µs per loop
Fastest way I found is via inner1d. Here's how it compares to other numpy methods:
import numpy as np
from numpy.core.umath_tests import inner1d
V = np.random.random_sample((10**6,3,)) # 1 million vectors
A = np.sqrt(np.einsum('...i,...i', V, V))
B = np.linalg.norm(V,axis=1)
C = np.sqrt((V ** 2).sum(-1))
D = np.sqrt((V*V).sum(axis=1))
E = np.sqrt(inner1d(V,V))
print([np.allclose(E, x) for x in [A, B, C, D]])  # [True, True, True, True]
import cProfile
cProfile.run("np.sqrt(np.einsum('...i,...i', V, V))") # 3 function calls in 0.013 seconds
cProfile.run('np.linalg.norm(V,axis=1)') # 9 function calls in 0.029 seconds
cProfile.run('np.sqrt((V ** 2).sum(-1))') # 5 function calls in 0.028 seconds
cProfile.run('np.sqrt((V*V).sum(axis=1))') # 5 function calls in 0.027 seconds
cProfile.run('np.sqrt(inner1d(V,V))') # 2 function calls in 0.009 seconds
inner1d is ~3x faster than linalg.norm and a hair faster than einsum
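One caveat: numpy.core.umath_tests is a private module, and importing inner1d from it has been deprecated since NumPy 1.16, so the import above may fail on recent versions. A guarded import with an einsum fallback (a sketch, assuming you want a drop-in replacement) keeps the snippet portable:
import numpy as np
try:
    from numpy.core.umath_tests import inner1d
except ImportError:
    # einsum computes the same row-wise inner products on any NumPy version
    def inner1d(u, v):
        return np.einsum('...i,...i->...', u, v)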