I built a wrapped bivariate Gaussian distribution in Python using the equation given here: http://www.aos.wisc.edu/~dvimont/aos575/Handouts/bivariate_notes.pdf

However, I don't understand why my distribution fails to sum to 1, despite having incorporated a normalization constant.
For a U x U lattice,
import numpy as np
from math import pi
U = 60
m = np.arange(U)
i = m.reshape(U,1)
j = m.reshape(1,U)
sigma = 0.1
ii = np.minimum(i, U-i)
jj = np.minimum(j, U-j)
norm_constant = 1/(2*pi*sigma**2)
xmu = (ii-0)/sigma; ymu = (jj-0)/sigma
rhs = np.exp(-.5 * (xmu**2 + ymu**2))
ker = norm_constant * rhs
>>> ker.sum()  # each grid cell has unit area
15.915494309189533
I'm certain there's something fundamentally missing in the way I'm thinking about this, and I suspect some sort of additional normalization is needed, although I can't reason my way to it.
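For what it's worth, a minimal sketch of what seems to be going wrong (assuming unit grid spacing): the continuous density integrates to 1, so the discrete sum only approximates 1 when it stands in for the integral, i.e. when it is multiplied by the cell area *and* sigma spans several cells. With sigma = 0.1 the whole distribution fits inside a single unit cell, so the sum collapses to the bare constant 1/(2*pi*sigma**2) ≈ 15.92. Widening sigma restores the approximation:

```python
import numpy as np
from math import pi

U = 60
sigma = 3.0          # several grid cells wide, unlike sigma = 0.1
m = np.arange(U)
ii = np.minimum(m, U - m).reshape(U, 1)   # wrapped distances to the origin
jj = np.minimum(m, U - m).reshape(1, U)

norm_constant = 1 / (2 * pi * sigma**2)
ker = norm_constant * np.exp(-0.5 * ((ii / sigma)**2 + (jj / sigma)**2))

cell_area = 1.0      # grid spacing is 1 in each direction
print(ker.sum() * cell_area)   # ≈ 1.0
```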
UPDATE:
Thanks to others' insightful suggestions, I rewrote my code to apply L1 normalization to the kernel. However, it appears that, in the context of 2D convolution via FFT, keeping the range as [0, U] still returns a convincing result:
import matplotlib.pyplot as plt

U = 100
Ukern = U
#Ukern = 15
m = np.arange(U)
i = m.reshape(U,1)
j = m.reshape(1,U)
sigma = 2.
ii = np.minimum(i, Ukern-i)
jj = np.minimum(j, Ukern-j)
xmu = (ii-0)/sigma; ymu = (jj-0)/sigma
ker = np.exp(-.5 * (xmu**2 + ymu**2))
ker /= np.abs(ker).sum()
''' Point Density '''
ido = np.random.randint(U, size=(10, 2))
og = np.zeros((U,U))
np.add.at(og, (ido[:,0], ido[:,1]), 1)
''' Convolution via FFT and inverse-FFT '''
v1 = np.fft.fft2(ker)
v2 = np.fft.fft2(og)
v0 = np.fft.ifft2(v2*v1)
dd = np.abs(v0)
plt.plot(ido[:,1], ido[:,0], 'ko', alpha=.3)
plt.imshow(dd, origin='lower')
plt.show()
On the other hand, sizing the kernel using the commented-out line gives this incorrect plot:
NOTE: As stated in the comments below, this solution is only valid if you are trying to build a gaussian convolution kernel (or gaussian filter) for image-processing purposes. It is not a properly normalized gaussian density function, but it is the form used to remove gaussian noise from images.
You are missing the L1 normalization:
ker /= np.abs(ker).sum()
Which will make your kernel behave like an actual density function. Since the values on your grid can vary by many orders of magnitude, this normalization step is needed.
In fact, the gaussian normalization constant you have could be omitted and the L1 norm above used on its own. If I'm not wrong, you are trying to create a gaussian convolution kernel, and the above is the usual normalization technique applied to it.
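As a minimal sketch on the same wrapped grid as in the question (np.abs is redundant here since the exponential is strictly positive, but it is harmless):

```python
import numpy as np

U = 60
sigma = 0.1
m = np.arange(U)
ii = np.minimum(m, U - m).reshape(U, 1)   # wrapped distances
jj = np.minimum(m, U - m).reshape(1, U)

ker = np.exp(-0.5 * ((ii / sigma)**2 + (jj / sigma)**2))
ker /= ker.sum()        # L1 normalization; no analytic constant needed
print(ker.sum())        # 1.0 up to floating-point error
```

With this normalization the sum is 1 for any sigma, which is exactly the property a convolution kernel needs so that filtering preserves the total mass of the input.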
Your second mistake, as @Praveen has stated, is that you need to sample the grid over [-U//2, U//2]. You can do that as:
i, j = np.mgrid[-U//2:U//2+1, -U//2:U//2+1]
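As a sketch, building a small centred kernel this way (using Ukern = 15 from the commented-out line; note the kernel ends up 16x16 because -15//2 floors to -8 and mgrid's stop of U//2 + 1 is exclusive):

```python
import numpy as np

U = 15                  # desired kernel size
sigma = 2.0
i, j = np.mgrid[-U//2:U//2+1, -U//2:U//2+1]   # coordinates centred on 0
ker = np.exp(-0.5 * ((i / sigma)**2 + (j / sigma)**2))
ker /= ker.sum()

print(ker.shape)                               # (16, 16)
print(np.unravel_index(ker.argmax(), ker.shape))   # peak sits at the centre, (8, 8)
```

Because the coordinates are centred on zero, the kernel's peak lands in the middle of the array instead of at the corner, which is what you want for a direct (non-FFT) convolution.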
Last, if what you are trying to do is to build a gaussian filter, the size of the kernel is usually estimated from sigma (to avoid zeros far from the centre) as U//2 <= t * sigma, where t is a truncation parameter, usually set to t=3 or t=4.
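A sketch of that sizing rule as a helper (the function name is mine; the idea mirrors the truncate parameter of scipy.ndimage.gaussian_filter, whose default is 4.0):

```python
import numpy as np

def gaussian_kernel(sigma, t=4):
    """Normalized 2-D gaussian kernel whose half-width is about t*sigma."""
    half = int(t * sigma + 0.5)     # truncate the tails at t standard deviations
    i, j = np.mgrid[-half:half + 1, -half:half + 1]
    ker = np.exp(-0.5 * ((i / sigma)**2 + (j / sigma)**2))
    return ker / ker.sum()

ker = gaussian_kernel(2.0)
print(ker.shape)        # (17, 17): half = 8, so 2*8 + 1 points per axis
```

Sizing the kernel from sigma keeps it as small as possible while still capturing essentially all of the gaussian's mass, which is what matters for filtering cost and accuracy.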