Say I have two probability density functions (pdfs), e.g.:
from scipy import stats
pdf_y = stats.beta(5, 9).pdf
pdf_x = stats.beta(9, 5).pdf
I would like to compute their KL divergence. Before I reinvent the wheel, are there any built-ins in the PyData ecosystem for doing this?
KL divergence is available in scipy.stats.entropy. From the docstring:
stats.entropy(pk, qk=None, base=None)
Calculate the entropy of a distribution for given probability values.
If only probabilities `pk` are given, the entropy is calculated as
``S = -sum(pk * log(pk), axis=0)``.
If `qk` is not None, then compute a relative entropy (also known as
Kullback-Leibler divergence or Kullback-Leibler distance)
``S = sum(pk * log(pk / qk), axis=0)``.
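For your two continuous densities, one approach is to evaluate both pdfs on a common grid and pass the values to stats.entropy. A minimal sketch (the grid size and the endpoint clipping below are arbitrary choices, not requirements of the API):

import numpy as np
from scipy import stats

pdf_y = stats.beta(5, 9).pdf
pdf_x = stats.beta(9, 5).pdf

# Evaluate both densities on a grid over the support (0, 1), staying
# strictly inside the endpoints so both densities are positive.
x = np.linspace(1e-6, 1 - 1e-6, 10000)

# stats.entropy normalizes pk and qk to sum to 1, so the common grid
# spacing cancels and no explicit multiplication by dx is needed.
kl = stats.entropy(pdf_y(x), pdf_x(x))
print(kl)

Note that this is a discretized approximation of the continuous KL divergence; it should converge to the true value as the grid becomes finer.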