I don't really follow how they came up with the derivative equation. Could somebody please explain it in some detail, or link to somewhere with a sufficient math explanation?
The Laplacian filter looks like

∇²f = ∂²f/∂x² + ∂²f/∂y²

and its discrete approximation looks like

∇²f(x,y) ≈ f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4*f(x,y)
Laplacian filters are derivative filters used to find areas of rapid change (edges) in images. Since derivative filters are very sensitive to noise, it is common to smooth the image (e.g., using a Gaussian filter) before applying the Laplacian. This two-step process is called the Laplacian of Gaussian (LoG) operation.
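For example, a minimal sketch of the two-step process with SciPy (the toy image is my own; gaussian_laplace is SciPy's fused equivalent):

import numpy as np
from scipy import ndimage

# Toy grayscale image: a bright square on a dark background.
image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0

# Step 1: smooth to suppress noise. Step 2: apply the Laplacian.
smoothed = ndimage.gaussian_filter(image, sigma=2.0)
log_two_step = ndimage.laplace(smoothed)

# SciPy also offers a fused version; it uses Gaussian derivative
# kernels internally, so it agrees closely but not bit-for-bit.
log_fused = ndimage.gaussian_laplace(image, sigma=2.0)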
Laplacian filters are derivative filters used to extract vertical as well as horizontal edges from an image. This is what separates them from the usual Sobel filters: Sobel filters are first-derivative filters, meaning they can only find edges along a single direction at a time.
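A small sketch of that difference with SciPy (the toy image and variable names are mine):

import numpy as np
from scipy import ndimage

image = np.zeros((64, 64))
image[20:44, 20:44] = 1.0  # a square has both horizontal and vertical edges

# Sobel: first derivative, one axis at a time.
edges_h = ndimage.sobel(image, axis=0)  # responds to horizontal edges
edges_v = ndimage.sobel(image, axis=1)  # responds to vertical edges

# Laplacian: second derivative, responds to both in a single pass.
edges_all = ndimage.laplace(image)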
The Laplacian filter (often applied after Gaussian smoothing, a combination known as the Laplacian of Gaussian, LoG) is, in machine learning, a convolution filter used in a convolution layer to detect edges in the input. Ever wondered how a computer extracts a particular object from a scene?
The Laplacian operator is a derivative operator used to find edges in an image. The major difference between the Laplacian and operators like Prewitt, Sobel, Robinson, and Kirsch is that those are all first-order derivative masks, while the Laplacian is a second-order derivative mask.
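To make that concrete, here are a few of the masks written out (a sketch, not an exhaustive list; the first-order kernels shown are the standard horizontal-gradient variants):

import numpy as np

# First-order derivative masks (approximate the gradient along x):
prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Second-order derivative mask (the discrete Laplacian):
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])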
Monsieur Laplace came up with this equation. This is simply the definition of the Laplace operator: the sum of second order derivatives (you can also see it as the trace of the Hessian matrix).
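Spelled out for an n-dimensional function f (the trace form is the parenthetical remark written out):

\nabla^2 f = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} = \operatorname{tr}\big(H(f)\big),
\qquad
H(f)_{ij} = \frac{\partial^2 f}{\partial x_i \, \partial x_j}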
The second equation you show is the finite difference approximation to a second derivative. It is the simplest approximation you can make for discrete (sampled) data. The derivative is defined as the slope (equation from Wikipedia):

f'(x) = lim_{h→0} ( f(x+h) - f(x) ) / h

In a discrete grid, the smallest h is 1. Thus the derivative is f(x+1) - f(x). This derivative, because it uses the pixel at x and the one to the right, introduces a half-pixel shift (i.e. you compute the slope in between these two pixels). To get to the 2nd order derivative, simply compute the derivative on the result of the derivative:
f'(x) = f(x+1) - f(x)
f'(x+1) = f(x+2) - f(x+1)
f"(x) = f'(x+1) - f'(x)
= f(x+2) - f(x+1) - f(x+1) + f(x)
= f(x+2) - 2*f(x+1) + f(x)
Because each derivative introduces a half-pixel shift, the 2nd order derivative ends up with a 1-pixel shift. So we can shift the output left by one pixel, leading to no bias. This leads to the sequence f(x+1) - 2*f(x) + f(x-1).
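A quick sanity check of the formula (a sketch; the sample function is my own choice): for f(x) = x² the exact second derivative is 2 everywhere, and the second difference reproduces it:

import numpy as np

x = np.arange(10, dtype=float)
f = x**2  # f''(x) = 2 exactly

# f(x+1) - 2*f(x) + f(x-1), evaluated at the interior samples.
second_diff = f[2:] - 2 * f[1:-1] + f[:-2]
print(second_diff)  # [2. 2. 2. 2. 2. 2. 2. 2.]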
Computing this 2nd order derivative is the same as convolving with the filter [1,-2,1].
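The same numbers fall out of a convolution (the kernel is symmetric, so convolution and correlation coincide; the test values reuse the snippet above):

import numpy as np

x = np.arange(10, dtype=float)
f = x**2

second_diff = f[2:] - 2 * f[1:-1] + f[:-2]
by_convolution = np.convolve(f, [1, -2, 1], mode='valid')
print(np.allclose(second_diff, by_convolution))  # True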
Applying this filter, and also its transpose, and adding the results, is equivalent to convolving with the kernel
[ 0, 1, 0     [ 0, 0, 0     [ 0, 1, 0
  1,-4, 1  =    1,-2, 1  +    0,-2, 0
  0, 1, 0 ]     0, 0, 0 ]     0, 1, 0 ]
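A numerical check of that identity (a sketch; the random test image is mine):

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

kernel_1d = np.array([1.0, -2.0, 1.0])
kernel_2d = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

# [1,-2,1] down the columns plus [1,-2,1] along the rows...
rows_plus_cols = (ndimage.convolve1d(image, kernel_1d, axis=0)
                  + ndimage.convolve1d(image, kernel_1d, axis=1))

# ...equals one pass with the 2D Laplacian kernel.
full_2d = ndimage.convolve(image, kernel_2d)
print(np.allclose(rows_plus_cols, full_2d))  # True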