My aim is to create and visualize the partial derivatives of an image (2D). I'll do this with the first-order central finite difference equation (see Wikipedia).
The partial derivative of F with respect to x is
∂f(x, y)/∂x ≈ f(x + 1, y) - f(x - 1, y)
We can write this as a convolution kernel H = [-1, 0, 1] and should get the same result by convolving the image with this kernel:
dfx = F * H
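As a quick sanity check: for the ramp image built below, where f(x, y) = x + y, this difference is (x + 1 + y) - (x - 1 + y) = 2 at every interior pixel, so the x-derivative should come out as a constant positive 2 away from the borders.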
Here is my source code:
from numpy import *
from pylab import *
from PIL import *
from scipy import signal as sg
# create an artificial image with a constant positive slope
myImage = zeros((5, 5), dtype=float)
for y in range(5):
    for x in range(5):
        myImage[y, x] = y + x
First I create the central finite difference for x using the convolve2d function from the scipy.signal module (the short way):
kernel = array([[-1, 0, 1]])  # first central finite difference kernel
pdx = sg.convolve2d(myImage, kernel, "same")  # dI(x,y)/dx
And now I compute the same thing using loops:
H, W = myImage.shape[0:2]
pdx = zeros((H, W), dtype=float)
for y in range(1, H - 1):
    for x in range(1, W - 1):
        pdx.itemset(y, x, myImage.item(y, x + 1) - myImage.item(y, x - 1))
Let's see the results:
pdx - kernel method by convolve2d
array([[-1., -2., -2., -2., 3.],
[-2., -2., -2., -2., 4.],
[-3., -2., -2., -2., 5.],
[-4., -2., -2., -2., 6.],
[-5., -2., -2., -2., 7.]])
pdx - finite difference by loops
array([[ 0., 0., 0., 0., 0.],
[ 0., 2., 2., 2., 0.],
[ 0., 2., 2., 2., 0.],
[ 0., 2., 2., 2., 0.],
[ 0., 0., 0., 0., 0.]])
I'm confused about the results. Why does the kernel method produce a negative slope? To get the same result I have to flip the kernel to H = [1, 0, -1], but that is mathematically not correct. Can someone help me?
The reason for the difference is two-fold: scipy.signal flips the kernel (that is what a convolution does), and it handles the boundaries differently than your loop does.
Also, for what you're doing, you almost definitely want scipy.ndimage.convolve instead of scipy.signal.convolve2d. The defaults for ndimage are set up to work with images, and it's more efficient for limited-precision integer data, which is the norm for images.
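For example, something along these lines (the flipped kernel here anticipates the flipping issue explained below, and myImage is the array from your question):

import numpy as np
from scipy import ndimage

# ndimage.convolve is a true convolution, i.e. it flips the kernel,
# and its default boundary mode is 'reflect' instead of zero padding
pdx_nd = ndimage.convolve(myImage, np.array([[1.0, 0.0, -1.0]]))
# interior pixels come out as +2 for the ramp image above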
First, to reproduce your loop version with scipy.signal, you'd need to reverse the kernel, as you've noticed. This is the mathematical definition of a convolution: the kernel is flipped before being swept across the input. For example: http://en.wikipedia.org/wiki/File:Convolution3.PNG
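For instance, something like this (reusing myImage and kernel from your code) reproduces the loop's positive values in the interior; only the zero-padded borders still differ:

pdx_conv = sg.convolve2d(myImage, kernel[:, ::-1], "same")  # kernel reversed to [[1, 0, -1]]
# interior values are now +2, like your loop version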
Second, scipy.signal.convolve2d pads the boundaries with zeros by default, while you're simply not operating on the boundaries. To reproduce your boundary conditions with scipy.signal.convolve2d, use boundary='symm'. (For this specific kernel, anyway... In general, you're just ignoring the boundaries, which convolve2d doesn't have an option for.)
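A sketch of where that keyword goes (again with the reversed kernel from the snippet above; pdx_symm is just an illustrative name):

pdx_symm = sg.convolve2d(myImage, kernel[:, ::-1], mode="same", boundary="symm")
# boundary='symm' reflects the image at the edges instead of padding with zeros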
Finally, for your purposes (image processing), it's far more efficient to use scipy.ndimage.convolve. In this case (a 1D kernel), it's most efficient to use scipy.ndimage.convolve1d, e.g. scipy.ndimage.convolve1d(data, [1, 0, -1], axis=0).
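A minimal runnable version of that call, applied to your myImage (note that with an array indexed as [y, x], axis=1 is the x direction, while axis=0 in the example above would differentiate along y):

from scipy import ndimage

dfx = ndimage.convolve1d(myImage, [1, 0, -1], axis=1)  # x-derivative along the columns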
A convolution of f and g is the integral of f(x') g(x - x') over x'. The effect of this is that the kernel is flipped. Either use the flipped kernel, or use scipy.ndimage.correlate, which computes f(x') g(x + x'), so the kernel keeps the same orientation as the input.
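For example, something like this keeps your original kernel orientation (pdx_corr is just an illustrative name, and myImage is the array from your question):

import numpy as np
from scipy import ndimage

# correlate does not flip the weights, so [-1, 0, 1] directly gives f(x+1, y) - f(x-1, y)
pdx_corr = ndimage.correlate(myImage, np.array([[-1.0, 0.0, 1.0]]))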
See Convolution for more information, and for some info on finite differences, see the question "Python finite difference functions?".