Is there an easy way to calculate a running variance filter on an image using Python/NumPy/SciPy? By a running variance image I mean the result of calculating sum((I - mean(I))^2)/nPixels for each sub-window I of the image.
Since the images are quite large (12000x12000 pixels), I want to avoid the overhead of converting the arrays between formats just to be able to use a different library and then convert back.
I guess I could do this manually by finding the mean using something like
import numpy as np
import scipy.ndimage

kernel = np.ones((winSize, winSize)) / winSize**2
image_mean = scipy.ndimage.convolve(image, kernel)
diff = (image - image_mean)**2
# Calculate the sum over each winSize*winSize sub-image
# Subsample the result
but it would be much nicer to have something like MATLAB's stdfilt function.
Can anyone point me in the direction of a library that has this functionality AND supports numpy arrays, or hint at/provide a way to do this in NumPy/SciPy?
A simpler and also faster solution: use SciPy's ndimage.uniform_filter
import numpy as np
from scipy import ndimage
rows, cols = 500, 500
win_rows, win_cols = 5, 5
img = np.random.rand(rows, cols)
win_mean = ndimage.uniform_filter(img, (win_rows, win_cols))
win_sqr_mean = ndimage.uniform_filter(img**2, (win_rows, win_cols))
win_var = win_sqr_mean - win_mean**2
The "stride trick" is beautiful trick, but 4 slower and not that readable.
the generic_filter
is 20 times slower than the strides...
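If you want something closer to MATLAB's stdfilt, one extra step helps: the E[X^2] - E[X]^2 formula can produce tiny negative values due to floating-point cancellation, so it is worth clipping before taking the square root. A minimal sketch, building on the win_var array from the snippet above (note that stdfilt normalizes by n-1, whereas this uses n):

import numpy as np

# Clamp tiny negative values caused by floating-point cancellation,
# then take the square root to get a standard-deviation image.
win_var = np.clip(win_var, 0, None)
win_std = np.sqrt(win_var)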
You can use numpy.lib.stride_tricks.as_strided
to get a windowed view of your image:
import numpy as np
from numpy.lib.stride_tricks import as_strided
rows, cols = 500, 500
win_rows, win_cols = 5, 5
img = np.random.rand(rows, cols)
win_img = as_strided(img, shape=(rows-win_rows+1, cols-win_cols+1,
win_rows, win_cols),
strides=img.strides*2)
And now win_img[i, j] is the (win_rows, win_cols) array with its top-left corner at position [i, j]:
>>> img[100:105, 100:105]
array([[ 0.34150754, 0.17888323, 0.67222354, 0.9020784 , 0.48826682],
[ 0.68451774, 0.14887515, 0.44892615, 0.33352743, 0.22090103],
[ 0.41114758, 0.82608407, 0.77190533, 0.42830363, 0.57300759],
[ 0.68435626, 0.94874394, 0.55238567, 0.40367885, 0.42955156],
[ 0.59359203, 0.62237553, 0.58428725, 0.58608119, 0.29157555]])
>>> win_img[100,100]
array([[ 0.34150754, 0.17888323, 0.67222354, 0.9020784 , 0.48826682],
[ 0.68451774, 0.14887515, 0.44892615, 0.33352743, 0.22090103],
[ 0.41114758, 0.82608407, 0.77190533, 0.42830363, 0.57300759],
[ 0.68435626, 0.94874394, 0.55238567, 0.40367885, 0.42955156],
[ 0.59359203, 0.62237553, 0.58428725, 0.58608119, 0.29157555]])
You have to be careful, though, not to convert your windowed view of the image into a windowed copy of it: in my example that would require 25 times more storage. I believe NumPy 1.7 lets you select more than one axis, so you could then simply do:
>>> np.var(win_img, axis=(-1, -2))
I am stuck with NumPy 1.6.2, so I cannot test that. The other option, which may fail for all but modest window sizes, would be to do the following, if I remember my math correctly:
>>> win_mean = np.sum(np.sum(win_img, axis=-1), axis=-1)/win_rows/win_cols
>>> win_sqr_mean = np.sum(np.sum(win_img**2, axis=-1), axis=-1)/win_rows/win_cols
>>> win_var = win_sqr_mean - win_mean**2
And now win_var is an array of shape
>>> win_var.shape
(496, 496)
and win_var[i, j] holds the variance of the (5, 5) window with its top-left corner at [i, j].
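If you need the variance map aligned with the original image (similar to what the 3D function below does by restoring the border), one simple option is to embed it in a zero-filled array. A minimal sketch, reusing img, rows, cols, win_rows, win_cols and win_var from the snippet above and assuming zero borders are acceptable:

import numpy as np

# Embed the valid (496, 496) variance map back into a 500x500 array,
# leaving the half-window border at zero.
full_var = np.zeros_like(img)
full_var[win_rows // 2 : rows - (win_rows // 2),
         win_cols // 2 : cols - (win_cols // 2)] = win_var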
After a bit of optimization we came up with this function for a generic 3D image:
def variance_filter(img, VAR_FILTER_SIZE):
    import numpy
    from numpy.lib.stride_tricks import as_strided
    WIN_SIZE = (2 * VAR_FILTER_SIZE) + 1
    if VAR_FILTER_SIZE % 2 != 1:
        print('Warning, VAR_FILTER_SIZE must be an ODD integer')
    # hack -- this could probably be an input to the function but Alessandro is lazy
    WIN_DIMS = [WIN_SIZE, WIN_SIZE, WIN_SIZE]
    # Check that there is a 3D image input.
    if len(img.shape) != 3:
        print("\t variance_filter: Are you sure that you passed me a 3D image?")
        return -1
    DIMS = img.shape
    # Set up a windowed view on the data... this has a border removed compared to img
    img_strided = as_strided(img,
                             shape=(DIMS[0]-WIN_DIMS[0]+1, DIMS[1]-WIN_DIMS[1]+1, DIMS[2]-WIN_DIMS[2]+1,
                                    WIN_DIMS[0], WIN_DIMS[1], WIN_DIMS[2]),
                             strides=img.strides*2)
    # Calculate the windowed mean, vectorially
    win_mean = numpy.sum(numpy.sum(numpy.sum(img_strided, axis=-1), axis=-1), axis=-1) / (WIN_DIMS[0]*WIN_DIMS[1]*WIN_DIMS[2])
    # As per http://en.wikipedia.org/wiki/Variance, remove the mean from every window,
    # then square the result.
    # Cast to 64-bit float inside, because the numbers (at least for our images) get pretty big.
    win_var = numpy.sum(numpy.sum(numpy.sum(((img_strided.T.astype('<f8') - win_mean.T.astype('<f8'))**2).T, axis=-1), axis=-1), axis=-1) / (WIN_DIMS[0]*WIN_DIMS[1]*WIN_DIMS[2])
    # Prepare an output image of the right size, to restore the border removed by the windowed view
    out_img = numpy.zeros(DIMS, dtype='<f8')
    # Copy the valid region back into the centre, leaving the borders at zero
    out_img[WIN_DIMS[0]//2:DIMS[0]-WIN_DIMS[0]+1+WIN_DIMS[0]//2,
            WIN_DIMS[1]//2:DIMS[1]-WIN_DIMS[1]+1+WIN_DIMS[1]//2,
            WIN_DIMS[2]//2:DIMS[2]-WIN_DIMS[2]+1+WIN_DIMS[2]//2] = win_var
    return out_img.astype('>f4')
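A quick usage sketch (the volume size here is just an illustrative assumption; variance_filter is the function above):

import numpy

vol = numpy.random.rand(40, 40, 40)
# VAR_FILTER_SIZE = 1 gives 3x3x3 windows (WIN_SIZE = 2*1 + 1)
var_vol = variance_filter(vol, 1)
print(var_vol.shape)   # same shape as the input; the half-window border is zero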
You can use scipy.ndimage.generic_filter. I can't test it against MATLAB, but perhaps this gives you what you're looking for:
import numpy as np
import scipy.ndimage as ndimage
subs = 10 # this is the size of the (square) sub-windows
img = np.random.rand(500, 500)
img_std = ndimage.generic_filter(img, np.std, size=subs)
You can make the sub-windows of arbitrary sizes using the footprint
keyword. See this question for an example.
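For instance, a minimal sketch building on the snippet above (np, ndimage and img as defined there; the cross-shaped footprint is just an illustrative choice, not from the original answer):

# Hypothetical cross-shaped neighbourhood instead of a square window
footprint = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)
img_std_cross = ndimage.generic_filter(img, np.std, footprint=footprint)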