
2D Gaussian Fit for intensities at certain coordinates in Python

I have a set of coordinates (x, y, z(x, y)) which describe intensities (z) at coordinates x, y. For a set number of these intensities at different coordinates, I need to fit a 2D Gaussian that minimizes the mean squared error. The data is in numpy matrices and for each fitting session I will have either 4, 9, 16 or 25 coordinates. Ultimately I just need to get the central position of the gaussian (x_0, y_0) that has smallest MSE. All of the examples that I have found use scipy.optimize.curve_fit but the input data they have is over an entire mesh rather than a few coordinates. Any help would be appreciated.

asked Dec 18 '14 by blah1234


1 Answer

Introduction

There are multiple ways to approach this. You can use non-linear methods (e.g. scipy.optimize.curve_fit), but they'll be slow and aren't guaranteed to converge. You can linearize the problem (fast, unique solution), but any noise in the "tails" of the distribution will cause issues. There are actually a few tricks you can apply to this particular case to avoid the latter issue. I'll show some examples, but I don't have time to demonstrate all of the "tricks" right now.
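Just to make "linearize the problem" concrete before moving on: taking the log of the gaussian turns the fit into ordinary linear least squares on a quadratic polynomial in x and y. Here's a minimal sketch of that approach (fit_gauss2d_linear is my name for it; it assumes the general 6-parameter form written out below and strictly positive z-values):

import numpy as np

def fit_gauss2d_linear(x, y, z):
    # Solve log(z) = B0 + B1*x + B2*y + B3*x**2 + B4*x*y + B5*y**2
    # with ordinary linear least squares (z must be strictly positive).
    G = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    beta, _, _, _ = np.linalg.lstsq(G, np.log(z), rcond=None)
    a, b, c = -beta[3], -beta[4] / 2, -beta[5]
    # The linear terms encode the center: 2*a*x0 + 2*b*y0 = B1, etc.
    x0, y0 = np.linalg.solve([[2 * a, 2 * b], [2 * b, 2 * c]], beta[1:3])
    amp = np.exp(beta[0] + a * x0**2 + 2 * b * x0 * y0 + c * y0**2)
    return amp, x0, y0, a, b, c

This is the fast, unique-solution route, but as mentioned, it's the one that noise in the tails hurts most.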

Just as a side note, a general 2D gaussian has 6 parameters, so you won't be able to fully fit things with 4 points. However, it sounds like you might be assuming that there's no covariance between x and y and that the variances are the same in each direction (i.e. a perfectly "round" bell curve). If that's the case, then you only need four parameters. If you know the amplitude of the gaussian, you'll only need three. However, I'm going to start with the general solution, and you can simplify it later on, if you want to.
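For reference, the symmetric, equal-variance case could be written something like this (a sketch, not part of any library; gauss2d_round and sigma are names I've picked here):

import numpy as np

def gauss2d_round(x, y, amp, x0, y0, sigma):
    # Isotropic gaussian: equal variance in x and y, no covariance,
    # so only four parameters (three if the amplitude is known).
    r2 = (x - x0)**2 + (y - y0)**2
    return amp * np.exp(-r2 / (2 * sigma**2))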

For the moment, let's focus on solving this problem using non-linear methods (e.g. scipy.optimize.curve_fit).


The general equation for a 2D gaussian is (directly from wikipedia):

f(x, y) = A * exp(-(a*(x - x0)**2 + 2*b*(x - x0)*(y - y0) + c*(y - y0)**2))

where the matrix [[a, b], [b, c]] is essentially 0.5 over the covariance matrix, A is the amplitude, and (x0, y0) is the center.


Generate simplified sample data

Let's write the equation above out:

import numpy as np
import matplotlib.pyplot as plt

def gauss2d(x, y, amp, x0, y0, a, b, c):
    # Quadratic form centered on (x0, y0); the cross term is
    # 2*b*(x - x0)*(y - y0), matching the equation above.
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

And then let's generate some example data. To start with, we'll generate some data that will be easy to fit:

np.random.seed(1977) # For consistency
x, y = np.random.random((2, 10))
x0, y0 = 0.3, 0.7
amp, a, b, c = 1, 2, 3, 4

zobs = gauss2d(x, y, amp, x0, y0, a, b, c)

fig, ax = plt.subplots()
scat = ax.scatter(x, y, c=zobs, s=200)
fig.colorbar(scat)
plt.show()

[figure: scatter plot of the 10 random sample points, colored by their gauss2d intensity]

Note that we haven't added any noise, and the center of the distribution lies within the range covered by our data (i.e. a center at 0.3, 0.7 and a scatter of x, y observations between 0 and 1). For the moment, let's stick with this, and then we'll see what happens when we add noise and shift the center.


Non-linear fitting

To start with, let's use scipy.optimize.curve_fit to perform a non-linear least-squares fit to the gaussian function. (On a side note, you can play around with the exact minimization algorithm by using some of the other functions in scipy.optimize.)

The scipy.optimize functions expect a slightly different function signature than the one we originally wrote above. We could write a wrapper to "translate", but let's just re-write the gauss2d function instead:

def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

All we did was have the function expect the independent variables (x & y) as a single 2xN array.

Now we need to make an initial guess at what the gaussian curve's parameters actually are. This is optional (the default is all ones, if I recall correctly), but you're likely to have problems converging if 1, 1 is not particularly close to the "true" center of the gaussian curve. For that reason, we'll use the x and y values of our largest observed z-value as a starting point for the center. I'll leave the rest of the parameters as 1, but if you know that they're likely to consistently be significantly different, change them to something more reasonable.

Here's the full, stand-alone example:

import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt

def main():
    x0, y0 = 0.3, 0.7
    amp, a, b, c = 1, 2, 3, 4
    true_params = [amp, x0, y0, a, b, c]
    xy, zobs = generate_example_data(10, true_params)
    x, y = xy

    i = zobs.argmax()
    guess = [1, x[i], y[i], 1, 1, 1]
    pred_params, uncert_cov = opt.curve_fit(gauss2d, xy, zobs, p0=guess)

    zpred = gauss2d(xy, *pred_params)
    print('True parameters: ', true_params)
    print('Predicted params:', pred_params)
    print('Residual, RMS(obs - pred):', np.sqrt(np.mean((zobs - zpred)**2)))

    plot(xy, zobs, pred_params)
    plt.show()

def gauss2d(xy, amp, x0, y0, a, b, c):
    x, y = xy
    inner = a * (x - x0)**2
    inner += 2 * b * (x - x0) * (y - y0)
    inner += c * (y - y0)**2
    return amp * np.exp(-inner)

def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))

    zobs = gauss2d(xy, *params)
    return xy, zobs

def plot(xy, zobs, pred_params):
    x, y = xy
    yi, xi = np.mgrid[:1:30j, -.2:1.2:30j]
    xyi = np.vstack([xi.ravel(), yi.ravel()])

    zpred = gauss2d(xyi, *pred_params)
    zpred.shape = xi.shape

    fig, ax = plt.subplots()
    ax.scatter(x, y, c=zobs, s=200, vmin=zpred.min(), vmax=zpred.max())
    im = ax.imshow(zpred, extent=[xi.min(), xi.max(), yi.max(), yi.min()],
                   aspect='auto')
    fig.colorbar(im)
    ax.invert_yaxis()
    return fig

if __name__ == '__main__':
    main()

[figure: the observations scattered over the predicted gaussian surface]

In this case, we exactly(ish) recover our original "true" parameters.

True parameters:  [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [ 1.   0.3  0.7  2.   3.   4. ]
Residual, RMS(obs - pred): 1.01560615193e-16
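
As an aside, since the original question ultimately just needs the center of the gaussian, note that it can be unpacked straight out of pred_params (the order matches the gauss2d signature):

amp, x0, y0, a, b, c = pred_params
print('Fitted center:', x0, y0)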

As we'll see in a second, this won't always be the case...


Adding Noise

Let's add some noise to our observations. All I've done here is change the generate_example_data function:

def generate_example_data(num, params):
    np.random.seed(1977) # For consistency
    xy = np.random.random((2, num))

    noise = np.random.normal(0, 0.3, num)
    zobs = gauss2d(xy, *params) + noise
    return xy, zobs

However, the result looks quite different:

[figure: the fitted surface for the noisy observations]

And as far as the parameters go:

True parameters:  [1, 0.3, 0.7, 2, 3, 4]
Predicted params: [  1.129    0.263   0.750   1.280   32.333   10.103  ]
Residual, RMS(obs - pred): 0.152444640098

The predicted center hasn't changed much, but the b and c parameters have changed quite a bit.

If we change the center of the function to somewhere slightly outside of our scatter of points:

x0, y0 = -0.3, 1.1

We'll wind up with complete nonsense as a result in the presence of noise! (It still works correctly without noise.)

True parameters:  [1, -0.3, 1.1, 2, 3, 4]
Predicted params: [  0.546  -0.939   0.857  -0.488  44.069  -4.136]
Residual, RMS(obs - pred): 0.235664449826

[figure: the fitted surface when the true center lies outside the data and noise is present]

This is a common problem when fitting a function that decays to zero. Any noise in the "tails" can result in a very poor result. There are a number of strategies to deal with this. One of the easiest is to weight the inversion by the observed z-values. Here's an example for the 1D case (focusing on linearizing the problem): How can I perform a least-squares fitting over multiple data sets fast? A rough sketch of the same idea for the 2D case follows below.
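Here's what that weighting might look like for this 2D case (my adaptation, using the same linearized formulation sketched near the top; not a drop-in for the curve_fit example above):

import numpy as np

def fit_gauss2d_weighted(x, y, z):
    # Same linearized system as before, but each equation is weighted
    # by its observed z-value, so near-zero "tail" samples (where the
    # log transform amplifies noise) count for less.
    z = np.clip(z, 1e-6, None)          # log() needs positive values
    w = np.sqrt(z)                      # sqrt of the weights, for lstsq
    G = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    beta, _, _, _ = np.linalg.lstsq(G * w[:, None], np.log(z) * w, rcond=None)
    a, b, c = -beta[3], -beta[4] / 2, -beta[5]
    x0, y0 = np.linalg.solve([[2 * a, 2 * b], [2 * b, 2 * c]], beta[1:3])
    return x0, y0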

answered Oct 07 '22 by Joe Kington