Efficient neighbourhood texture access for convolution in GLSL ES 1.1

I'm performing a convolution with a 3x3 kernel in an iPhone shader, GLSL ES 1.1. Currently I am just doing 9 texture lookups. Is there a faster way? Some ideas:

  • Passing the input image as a buffer rather than a texture, to avoid invoking texture interpolation.

  • Passing nine varying vec2 coordinates from the vertex shader (rather than just one, as I am currently doing) to encourage the processor to prefetch the texture efficiently.

  • Looking into various Apple extensions that might be appropriate for this.

  • (Added) Investigating ES equivalents of the desktop GLSL textureOffset call (which is not available under ES, but perhaps there is an equivalent).

In terms of hardware, I'm focussed in particular on the iPhone 4S.

Alex Flint asked Jun 21 '12

1 Answer

Are you sure you don't mean OpenGL ES 2.0? You can't do shaders of any kind using OpenGL ES 1.1. I'll assume the former.

In my experience, the fastest way I've found to do this is your second listed item. I do several types of 3x3 convolutions in my GPUImage framework (which you could just use instead of trying to roll your own) and for those I feed in the texture offset for the horizontal and vertical directions and calculate the nine texture coordinates needed within the vertex shader. From there, I pass those as varyings to the fragment shader.

This (for the most part) avoids dependent texture reads in the fragment shader, which are terribly expensive on the iOS PowerVR GPUs. I say "for the most part" because on older devices like the iPhone 4, only eight of those varyings can be used without penalty. As I learned just last week, reading from the ninth triggers a dependent texture read on those older devices, which slows things down a bit. The iPhone 4S doesn't have this issue, because it supports a greater number of varyings being used in this fashion.

I use the following for my vertex shader:

 attribute vec4 position;
 attribute vec4 inputTextureCoordinate;

 uniform highp float texelWidth; 
 uniform highp float texelHeight; 

 varying vec2 textureCoordinate;
 varying vec2 leftTextureCoordinate;
 varying vec2 rightTextureCoordinate;

 varying vec2 topTextureCoordinate;
 varying vec2 topLeftTextureCoordinate;
 varying vec2 topRightTextureCoordinate;

 varying vec2 bottomTextureCoordinate;
 varying vec2 bottomLeftTextureCoordinate;
 varying vec2 bottomRightTextureCoordinate;

 void main()
 {
     gl_Position = position;

     vec2 widthStep = vec2(texelWidth, 0.0);
     vec2 heightStep = vec2(0.0, texelHeight);
     vec2 widthHeightStep = vec2(texelWidth, texelHeight);
     vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);

     textureCoordinate = inputTextureCoordinate.xy;
     leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
     rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;

     topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
     topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
     topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;

     bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
     bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
     bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
 }

and fragment shader:

 precision highp float;

 uniform sampler2D inputImageTexture;

 uniform mediump mat3 convolutionMatrix;

 varying vec2 textureCoordinate;
 varying vec2 leftTextureCoordinate;
 varying vec2 rightTextureCoordinate;

 varying vec2 topTextureCoordinate;
 varying vec2 topLeftTextureCoordinate;
 varying vec2 topRightTextureCoordinate;

 varying vec2 bottomTextureCoordinate;
 varying vec2 bottomLeftTextureCoordinate;
 varying vec2 bottomRightTextureCoordinate;

 void main()
 {
     mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
     mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
     mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
     mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
     mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
     mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
     mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
     mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
     mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);

     mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
     resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
     resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];

     gl_FragColor = resultColor;
 }

Even with the above caveats, this shader runs in ~2 ms for a 640x480 frame of video on an iPhone 4, and a 4S can handle 1080p video at 30 FPS easily with a shader like this.
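For completeness, a sketch of the host-side values these shaders expect (this is not from the original answer; the function names are hypothetical). The texelWidth and texelHeight uniforms are simply one texel expressed in normalized [0,1] texture coordinates, uploaded with glUniform1f before drawing; and the fragment shader's 3x3 weighted sum can be reproduced on the CPU to sanity-check a kernel before uploading it as convolutionMatrix:

```c
#include <assert.h>
#include <math.h>

/* One texel in normalized [0,1] texture coordinates; these are the values
   to upload to the texelWidth/texelHeight uniforms via glUniform1f. */
static void texel_steps(int texWidth, int texHeight,
                        float *texelWidth, float *texelHeight)
{
    *texelWidth  = 1.0f / (float)texWidth;
    *texelHeight = 1.0f / (float)texHeight;
}

/* CPU reference of the fragment shader's weighted sum, for checking a
   kernel on a single-channel w*h image.  k is row-major: k[0..2] is the
   top row, k[3..5] the middle row, k[6..8] the bottom row.  Out-of-range
   samples are clamped to the edge, like GL_CLAMP_TO_EDGE. */
static float convolve3x3(const float *img, int w, int h,
                         int x, int y, const float k[9])
{
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = x + dx, sy = y + dy;
            if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
            if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
            sum += img[sy * w + sx] * k[(dy + 1) * 3 + (dx + 1)];
        }
    }
    return sum;
}
```

For example, an identity kernel (all zeros except a 1 in the center) should return the center pixel unchanged, which makes a quick sanity check that the kernel's row/column ordering matches what the shader indexes.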

Brad Larson answered Oct 15 '22