I am trying to estimate the entropy of random variables (RVs), which involves computing the term p_X * log(p_X).
For example,
import numpy as np
X = np.random.rand(100)
binX = np.histogram(X, 10)[0]  # histogram counts with 10 bins
p_X = binX / np.sum(binX)      # normalize counts to probabilities
ent_X = -1 * np.sum(p_X * np.log(p_X))  # Shannon entropy (in nats)
Sometimes an entry of p_X is zero, which mathematically should make the whole term zero. But NumPy evaluates p_X * np.log(p_X) as NaN in that case, and the NaN then makes the whole summation NaN. Is there any way (without explicitly checking for NaN) to make p_X * np.log(p_X) give zero whenever p_X is zero? Any insight or correction is appreciated. Thanks in advance :)
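The failure can be reproduced directly with a single zero probability (the error-state context manager only silences NumPy's runtime warnings; it does not change the result):

```python
import numpy as np

# Minimal reproduction: 0 * log(0) evaluates to NaN in NumPy,
# and the NaN then poisons the whole sum.
with np.errstate(divide='ignore', invalid='ignore'):
    term = 0.0 * np.log(0.0)   # NaN, not 0
total = np.sum([1.0, term])    # NaN as well
```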
If you have scipy, use scipy.special.xlogy(p_X, p_X). Not only does it solve your problem, but as an added benefit it is also a bit faster than p_X * np.log(p_X).
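For example, with a small hand-made distribution (rather than the random histogram in the question) that includes an empty bin:

```python
import numpy as np
from scipy.special import xlogy

# xlogy(x, y) computes x * log(y), defined to be 0 where x == 0,
# so zero-probability bins contribute nothing instead of NaN.
p_X = np.array([0.5, 0.25, 0.25, 0.0])
ent_X = -1 * np.sum(xlogy(p_X, p_X))  # finite entropy, no NaN from the zero bin
```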
In your case you can use nansum, since adding 0 to a sum is the same as ignoring a NaN:
ent_X = -1 * np.nansum(p_X * np.log(p_X))
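A small sketch of why this works, again using a hand-made distribution with one empty bin (np.errstate is only used here to suppress NumPy's divide/invalid warnings):

```python
import numpy as np

# A zero-probability bin makes its term NaN, but np.nansum treats NaN as 0.
p_X = np.array([0.5, 0.25, 0.25, 0.0])
with np.errstate(divide='ignore', invalid='ignore'):
    terms = p_X * np.log(p_X)   # last term is NaN
ent_X = -1 * np.nansum(terms)   # NaN ignored, entropy computed correctly
```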