I have learnt that it measures the randomness of the pixels. But please help me understand how this randomness is calculated mathematically, and how different images end up with different entropy values.
You can calculate the Shannon entropy straight from your `img`. Just do:
import skimage.measure

# img is a 2-D grayscale image array; the result is in bits (log base 2)
entropy = skimage.measure.shannon_entropy(img)
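To see why different images give different entropy values, here is a quick sketch (the two arrays below are made up purely for illustration): a constant image puts all of its histogram mass into a single bin and has entropy close to zero, while uniform random noise spreads across all 256 values and approaches the 8-bit maximum.

import numpy as np
import skimage.measure

flat = np.full((256, 256), 128, dtype=np.uint8)                # every pixel identical
noise = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # uniform random noise

print(skimage.measure.shannon_entropy(flat))   # ~0.0 bits: a single histogram bin holds all the mass
print(skimage.measure.shannon_entropy(noise))  # ~8.0 bits: mass spread evenly over 256 bins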
If you want to see the maths behind it:
import numpy as np

# Marginal distribution: normalized 256-bin histogram of the pixel values
marg = np.histogramdd(np.ravel(img), bins=256)[0] / img.size
# Keep only the non-zero probabilities (log2(0) is undefined)
marg = list(filter(lambda p: p > 0, np.ravel(marg)))
# Shannon entropy: sum of -p * log2(p) over the remaining bins
entropy = -np.sum(np.multiply(marg, np.log2(marg)))
First, `marg` is the marginal distribution of the two-dimensional grayscale image `img`. `bins` is set to 256 for an 8-bit image. Then you need to filter out the probabilities that are equal to zero, and finally sum over the remaining elements of `np.multiply(marg, np.log2(marg))`, as defined by Shannon's entropy.
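For reference, that last line is just Shannon's entropy of the distribution of pixel intensities,

H = -\sum_{i} p_i \log_2(p_i)

where p_i is the fraction of pixels falling into the i-th histogram bin. Since `skimage.measure.shannon_entropy` also uses base 2 by default, both snippets report the entropy in bits.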