
OpenGL ES Shader to outline 2D images

I'm working on a 2D iOS game using OpenGL ES 2.0 and I'm wondering if it's possible to write a shader that will outline images with a glow. All the images are 2D sprites. The shader examples I've seen for outlining are for 3D objects, so I'm not sure if it's possible for 2D images.

Roger Gilbrat asked Jan 25 '12 07:01





1 Answer

Would you accept an edge detection filter (such as Sobel), producing an image like that shown in the Wikipedia article, followed by a Gaussian blur on the results of that to soften the edges and give it more of a glow, then composite that image onto your scene?
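The final composite step could be as simple as an additive blend of the blurred edge image over the scene. As a sketch (the texture and varying names here are assumptions, not from the original answer):

```glsl
// Sketch only: additively composite the blurred edge-detect result
// over the rendered scene to produce the glow. Uniform and varying
// names (sceneTex, glowTex, texCoordVarying) are assumed.
uniform sampler2D sceneTex;   // the rendered scene
uniform sampler2D glowTex;    // blurred edge-detect output
varying mediump vec2 texCoordVarying;

void main()
{
    mediump vec4 scene = texture2D(sceneTex, texCoordVarying);
    mediump vec4 glow  = texture2D(glowTex, texCoordVarying);

    // Additive blend: bright edges lift the scene colour, dark
    // (non-edge) areas leave it untouched.
    gl_FragColor = scene + glow;
}
```

The same effect could equally be achieved with fixed-function additive blending (glBlendFunc(GL_ONE, GL_ONE)) while drawing the glow texture as a full-screen quad.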

In practice you could probably just reuse the 3d outlining shaders you've seen — although you could in theory inspect depth quantities (with some extended effort in ES), every one I've ever seen was just a 2d effect on the rendered image.

EDIT: on further consideration, the Laplacian may be a little easier to apply than the Sobel because it can be done as a single simple convolution shader (as described in places like this). Though to be safe on mobile you probably want to stick to 3x3 kernels at most and write a different shader for each effect rather than driving one shader with kernel data. So e.g. a rough Gaussian blur, written out at length:

uniform sampler2D tex2D;               // source texture
uniform mediump float width, height;   // texture size in pixels
varying mediump vec2 texCoordVarying;

void main()
{
    mediump vec4 total = vec4(0.0);
    mediump vec4 grabPixel;

    // corner samples, weight 1
    total += texture2D(tex2D, texCoordVarying + vec2(-1.0 / width, -1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2( 1.0 / width, -1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2( 1.0 / width,  1.0 / height));
    total += texture2D(tex2D, texCoordVarying + vec2(-1.0 / width,  1.0 / height));

    // edge samples, weight 2
    grabPixel = texture2D(tex2D, texCoordVarying + vec2(0.0, -1.0 / height));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(0.0, 1.0 / height));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(-1.0 / width, 0.0));
    total += grabPixel * 2.0;

    grabPixel = texture2D(tex2D, texCoordVarying + vec2(1.0 / width, 0.0));
    total += grabPixel * 2.0;

    // centre sample, weight 4; weights sum to 16
    grabPixel = texture2D(tex2D, texCoordVarying);
    total += grabPixel * 4.0;

    total *= 1.0 / 16.0;

    gl_FragColor = total;
}

And a Laplacian edge detect ends up looking similar but with different constants.
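As a sketch of what that might look like (reusing the same assumed uniform and varying names as the blur above, with a standard 4-connected Laplacian kernel):

```glsl
// Sketch only: a 3x3 Laplacian edge detect in the same style as the
// blur shader. Kernel is 0 -1 0 / -1 4 -1 / 0 -1 0; it sums to zero,
// so flat regions come out black and edges come out bright.
uniform sampler2D tex2D;
uniform mediump float width, height;
varying mediump vec2 texCoordVarying;

void main()
{
    mediump vec4 total = vec4(0.0);

    // 4-connected neighbours, weight -1
    total -= texture2D(tex2D, texCoordVarying + vec2( 0.0, -1.0 / height));
    total -= texture2D(tex2D, texCoordVarying + vec2( 0.0,  1.0 / height));
    total -= texture2D(tex2D, texCoordVarying + vec2(-1.0 / width, 0.0));
    total -= texture2D(tex2D, texCoordVarying + vec2( 1.0 / width, 0.0));

    // centre pixel, weight +4
    total += texture2D(tex2D, texCoordVarying) * 4.0;

    // force full alpha so the edge image composites predictably
    gl_FragColor = vec4(total.rgb, 1.0);
}
```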

As an optimisation, you should work out your relative sampling points in the vertex shader rather than in the fragment shader as far as possible given the limit on varyings, as doing so will avoid dependent texture reads.
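A vertex shader along those lines might look like this sketch (attribute and varying names are assumptions; ES 2.0 only guarantees 8 varying vectors, so a full 3x3 set of sample points may not fit on every GPU):

```glsl
// Sketch only: compute texture sample coordinates in the vertex
// shader so the fragment shader reads them straight from varyings,
// avoiding dependent texture reads. Shown for the 4-connected
// Laplacian samples; names (position, texCoord) are assumed.
attribute vec4 position;
attribute vec2 texCoord;
uniform mediump float width, height;

varying mediump vec2 centreCoord;
varying mediump vec2 leftCoord, rightCoord, topCoord, bottomCoord;

void main()
{
    mediump vec2 pixel = vec2(1.0 / width, 1.0 / height);

    centreCoord = texCoord;
    leftCoord   = texCoord + vec2(-pixel.x, 0.0);
    rightCoord  = texCoord + vec2( pixel.x, 0.0);
    topCoord    = texCoord + vec2(0.0, -pixel.y);
    bottomCoord = texCoord + vec2(0.0,  pixel.y);

    gl_Position = position;
}
```

The matching fragment shader then samples texture2D(tex2D, leftCoord) and so on directly, which lets the hardware prefetch the texels.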

Tommy answered Jan 31 '23 13:01
