Sobel filter in C/C++ using OpenGL ES

I'd rather not reinvent the wheel if I don't have to, and this must have been done before. Are there any existing implementations of a Sobel filter using OpenGL ES?

asked Dec 04 '22 by giroy

1 Answer

If Objective-C is acceptable, you could look at my GPUImage framework and its GPUImageSobelEdgeDetectionFilter. This applies Sobel edge detection using OpenGL ES 2.0 fragment shaders. You can see the output from this in the "sketch" example in this answer.

If you don't want to dig into the Objective-C code, the critical work here is performed by two sets of shaders. In a first pass, I reduce the image to its luminance and store that value in the red, green, and blue channels. I do this using the following vertex shader:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 varying vec2 textureCoordinate;

 void main()
 {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
 }

and fragment shader:

 precision highp float;

 varying vec2 textureCoordinate;

 uniform sampler2D inputImageTexture;

 // Luminance weights, close to the Rec. 709 luma coefficients
 const highp vec3 W = vec3(0.2125, 0.7154, 0.0721);

 void main()
 {
     float luminance = dot(texture2D(inputImageTexture, textureCoordinate).rgb, W);

     // Store the luminance in all three channels so the second pass
     // only needs to read the red channel
     gl_FragColor = vec4(vec3(luminance), 1.0);
 }
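Since the original question asks about C or C++, here is a minimal, hypothetical sketch of how shader strings like these could be compiled and linked with the plain OpenGL ES 2.0 C API. The helper names are my own and are not part of GPUImage:

 #include <GLES2/gl2.h>
 #include <stdio.h>

 /* Compile one shader stage and print any compiler errors. */
 static GLuint compileShader(GLenum type, const char *source)
 {
     GLuint shader = glCreateShader(type);
     glShaderSource(shader, 1, &source, NULL);
     glCompileShader(shader);

     GLint status = 0;
     glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
     if (status != GL_TRUE)
     {
         char log[512];
         glGetShaderInfoLog(shader, sizeof(log), NULL, log);
         fprintf(stderr, "Shader compile error: %s\n", log);
     }
     return shader;
 }

 /* Link a vertex/fragment pair into a program, binding the two
    attributes used by both passes to fixed locations. */
 static GLuint linkProgram(const char *vertexSource, const char *fragmentSource)
 {
     GLuint program = glCreateProgram();
     glAttachShader(program, compileShader(GL_VERTEX_SHADER, vertexSource));
     glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, fragmentSource));
     glBindAttribLocation(program, 0, "position");
     glBindAttribLocation(program, 1, "inputTextureCoordinate");
     glLinkProgram(program);
     return program;
 }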

With the luminance pass done, I then perform the actual Sobel edge detection (with lighter pixels marking edges in this case) using this vertex shader:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 uniform highp float imageWidthFactor; 
 uniform highp float imageHeightFactor; 

 varying vec2 textureCoordinate;
 varying vec2 leftTextureCoordinate;
 varying vec2 rightTextureCoordinate;

 varying vec2 topTextureCoordinate;
 varying vec2 topLeftTextureCoordinate;
 varying vec2 topRightTextureCoordinate;

 varying vec2 bottomTextureCoordinate;
 varying vec2 bottomLeftTextureCoordinate;
 varying vec2 bottomRightTextureCoordinate;

 void main()
 {
     gl_Position = position;

     // One-texel steps in each direction (imageWidthFactor and
     // imageHeightFactor are the reciprocals of the image dimensions)
     vec2 widthStep = vec2(imageWidthFactor, 0.0);
     vec2 heightStep = vec2(0.0, imageHeightFactor);
     vec2 widthHeightStep = vec2(imageWidthFactor, imageHeightFactor);
     vec2 widthNegativeHeightStep = vec2(imageWidthFactor, -imageHeightFactor);

     // Calculate all nine sampling coordinates in the vertex shader so
     // the fragment shader performs no dependent texture reads
     textureCoordinate = inputTextureCoordinate.xy;
     leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
     rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;

     topTextureCoordinate = inputTextureCoordinate.xy + heightStep;
     topLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
     topRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;

     bottomTextureCoordinate = inputTextureCoordinate.xy - heightStep;
     bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
     bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
 }

and this fragment shader:

 precision highp float;

 varying vec2 textureCoordinate;
 varying vec2 leftTextureCoordinate;
 varying vec2 rightTextureCoordinate;

 varying vec2 topTextureCoordinate;
 varying vec2 topLeftTextureCoordinate;
 varying vec2 topRightTextureCoordinate;

 varying vec2 bottomTextureCoordinate;
 varying vec2 bottomLeftTextureCoordinate;
 varying vec2 bottomRightTextureCoordinate;

 uniform sampler2D inputImageTexture;

 void main()
 {
    // Sample the 3x3 neighborhood; the first pass stored luminance in
    // all channels, so reading the red channel alone is sufficient
    float i00   = texture2D(inputImageTexture, textureCoordinate).r;
    float im1m1 = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
    float ip1p1 = texture2D(inputImageTexture, topRightTextureCoordinate).r;
    float im1p1 = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
    float ip1m1 = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
    float im10 = texture2D(inputImageTexture, leftTextureCoordinate).r;
    float ip10 = texture2D(inputImageTexture, rightTextureCoordinate).r;
    float i0m1 = texture2D(inputImageTexture, bottomTextureCoordinate).r;
    float i0p1 = texture2D(inputImageTexture, topTextureCoordinate).r;

    // Convolve with the two 3x3 Sobel kernels: h responds to horizontal
    // edges, v to vertical edges
    float h = -im1p1 - 2.0 * i0p1 - ip1p1 + im1m1 + 2.0 * i0m1 + ip1m1;
    float v = -im1m1 - 2.0 * im10 - im1p1 + ip1m1 + 2.0 * ip10 + ip1p1;

    // The edge strength is the gradient magnitude
    float mag = length(vec2(h, v));

    gl_FragColor = vec4(vec3(mag), 1.0);
 }

The imageWidthFactor and imageHeightFactor uniforms are simply the reciprocals of the input image's width and height in pixels, so each step above moves the sampling coordinate by exactly one texel.
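For example, the host-side setup might look like this sketch in C (assuming the second-pass program has been linked as sobelProgram and the input texture is textureWidth by textureHeight pixels; both names are hypothetical):

 /* Pass one-texel step sizes to the Sobel shader so it can locate
    the eight neighboring pixels around each fragment. */
 glUseProgram(sobelProgram);
 glUniform1f(glGetUniformLocation(sobelProgram, "imageWidthFactor"),
             1.0f / (GLfloat)textureWidth);
 glUniform1f(glGetUniformLocation(sobelProgram, "imageHeightFactor"),
             1.0f / (GLfloat)textureHeight);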

You might notice that this two-pass approach is more complex than the single-pass one in the above-linked answer. That's because the original implementation wasn't particularly efficient on mobile GPUs (at least the PowerVR variety in iOS devices). By removing all dependent texture reads and precalculating the luminance so that I only have to sample the red channel in the final shader, this tuned edge detection routine is 20X faster in my benchmarks than the naive one that does everything in a single pass.
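For completeness, the two-pass flow on the C side might be wired up roughly like this sketch (luminanceProgram, sobelProgram, sourceTexture, width, and height are hypothetical names; a full-screen quad is assumed to already be bound to attributes 0 and 1):

 /* Pass 1: render the luminance version of the source image into an
    intermediate texture attached to an offscreen framebuffer. */
 GLuint luminanceTexture, framebuffer;
 glGenTextures(1, &luminanceTexture);
 glBindTexture(GL_TEXTURE_2D, luminanceTexture);
 glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, NULL);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
 glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

 glGenFramebuffers(1, &framebuffer);
 glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
 glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                        GL_TEXTURE_2D, luminanceTexture, 0);

 glViewport(0, 0, width, height);
 glUseProgram(luminanceProgram);
 glBindTexture(GL_TEXTURE_2D, sourceTexture);
 glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

 /* Pass 2: run the Sobel shader over the luminance texture and draw
    the result to the default framebuffer. */
 glBindFramebuffer(GL_FRAMEBUFFER, 0);
 glUseProgram(sobelProgram);
 glBindTexture(GL_TEXTURE_2D, luminanceTexture);
 glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);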

answered Dec 14 '22 by Brad Larson