
OpenGL GLSL SSAO Implementation

Tags:

opengl

glsl

ssao

I am trying to implement Screen Space Ambient Occlusion (SSAO) based on the R5 demo found here: http://blog.nextrevision.com/?p=76

In fact, I am trying to adapt their SSAO-Linear shader to fit into my own little engine.

1) I calculate view-space surface normals and linear depth values and store them in an RGBA texture using the following shader:

Vertex:

varNormalVS = normalize(vec3(vmtInvTranspMatrix * vertexNormal));
depth = (modelViewMatrix * vertexPosition).z;
depth = (-depth-nearPlane)/(farPlane-nearPlane);
gl_Position = pvmtMatrix * vertexPosition;

Fragment:

gl_FragColor = vec4(varNormalVS.x, varNormalVS.y, varNormalVS.z, depth);
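For context, a fuller sketch of this first pass with the declarations filled in might look like the following (the uniform, attribute, and varying declarations are my assumptions about the engine, not its actual code):

// --- Pass 1 vertex shader (sketch) ---
uniform mat4 modelViewMatrix;     // model-view matrix
uniform mat4 pvmtMatrix;          // projection * model-view matrix
uniform mat4 vmtInvTranspMatrix;  // inverse transpose of the model-view matrix
uniform float nearPlane;
uniform float farPlane;

attribute vec4 vertexPosition;
attribute vec3 vertexNormal;

varying vec3 varNormalVS;
varying float depth;

void main(void)
{
    varNormalVS = normalize(vec3(vmtInvTranspMatrix * vec4(vertexNormal, 0.0)));
    depth = (modelViewMatrix * vertexPosition).z;           // view-space z (negative in front of the camera)
    depth = (-depth - nearPlane) / (farPlane - nearPlane);  // remap to [0, 1] between near and far plane
    gl_Position = pvmtMatrix * vertexPosition;
}

// --- Pass 1 fragment shader (sketch) ---
varying vec3 varNormalVS;
varying float depth;

void main(void)
{
    gl_FragColor = vec4(varNormalVS, depth);  // normal in RGB, linear depth in alpha
}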

For my linear depth calculation I referred to: http://www.gamerendering.com/2008/09/28/linear-depth-texture/

Is this correct? The texture seems to be correct, but maybe it is not?

[screenshot of the resulting normal/depth texture]

2) The actual SSAO Implementation: As mentioned above the original can be found here: http://blog.nextrevision.com/?p=76

or faster: on pastebin http://pastebin.com/KaGEYexK

In contrast to the original, I only use two input textures, since one of my textures stores both: normals as RGB and linear depth as alpha.
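A minimal sketch of how that combined texture gets read back in the SSAO pass might look like this (the sampler and texture-coordinate names are taken from the snippets below; the rest is my assumption, not the actual shader):

uniform sampler2D texSampler0;   // combined normal (RGB) + linear depth (A) render target

varying vec2 texCoord;

void main(void)
{
    vec4 data = texture2D(texSampler0, texCoord);
    vec3 normalVS     = data.rgb;  // view-space normal
    float linearDepth = data.a;    // linear depth in [0, 1]
    gl_FragColor = vec4(normalVS * 0.5 + 0.5, linearDepth);  // debug visualization only
}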

My second Texture, the random normal texture, looks like this: http://www.gamerendering.com/wp-content/uploads/noise.png

I use almost exactly the same implementation but my results are wrong.

Before going into detail, I want to clear up some questions first:

1) The SSAO shader uses projectionMatrix and its inverse matrix.

Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or Wrong?

2) Having a combined normal and depth texture instead of two separate ones.

In my opinion this is the biggest difference between the R5 implementation and my implementation attempt. I think this should not be a big problem; however, due to the different depth textures, this is the most likely cause of problems.

Please note that R5_clipRange looks like this:

vec4 R5_clipRange = vec4(nearPlane, farPlane, nearPlane * farPlane, farPlane - nearPlane);

Original:

float GetDistance (in vec2 texCoord)
{
//return texture2D(R5_texture0, texCoord).r * R5_clipRange.w;
const vec4 bitSh = vec4(1.0 / 16777216.0, 1.0 / 65535.0, 1.0 / 256.0, 1.0);
return dot(texture2D(R5_texture0, texCoord), bitSh) * R5_clipRange.w;
}

I have to admit I do not understand the code snippet. My depth is stored in the alpha channel of my texture, and I thought it should be enough to just do this:

return texture2D(texSampler0, texCoord).a  * R5_clipRange.w;

Correct or Wrong?
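Spelled out, the adapted function would simply be the following (a sketch that assumes my alpha channel really holds the linear depth written in the first pass; the uniform declarations are added here only for completeness):

uniform sampler2D texSampler0;   // combined normal (RGB) + linear depth (A) texture
uniform vec4 R5_clipRange;       // (near, far, near * far, far - near)

// The depth is already a single linear value in the alpha channel, so no
// bit-unpacking is needed; just rescale it by far - near.
float GetDistance (in vec2 texCoord)
{
    return texture2D(texSampler0, texCoord).a * R5_clipRange.w;
}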

asked Jan 30 '12 by mp87



1 Answer

Your normal texture seems wrong. My guess is that your vmtInvTranspMatrix is a model-view matrix. However, it should be the model-view-projection matrix (note that you need screen-space normals, not view-space normals). The depth calculation is correct.
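If that is the problem, a minimal sketch of the suggested change in the first pass would be (the mvp* uniform names are my assumptions, not taken from the question):

uniform mat4 mvpMatrix;            // projection * modelView (assumed name)
uniform mat4 mvpInvTranspMatrix;   // transpose(inverse(projection * modelView)) (assumed name)

attribute vec4 vertexPosition;
attribute vec3 vertexNormal;

varying vec3 varNormal;

void main(void)
{
    // The stored normal ends up in screen/clip space instead of view space.
    varNormal   = normalize(vec3(mvpInvTranspMatrix * vec4(vertexNormal, 0.0)));
    gl_Position = mvpMatrix * vertexPosition;
}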

I've implemented SSAO once and the normal texture looks like this (note there is no blue here):

[image: screen-space normal texture]

1) The SSAO shader uses projectionMatrix and its inverse matrix. Since it is a post-processing effect rendered onto a screen-aligned quad via orthographic projection, the projectionMatrix is the orthographic matrix. Correct or Wrong?

If you mean the second pass where you are rendering a quad to compute the actual SSAO, yes. You can avoid the multiplication by the orthographic projection matrix altogether. If you render a screen quad with [x,y] dimensions ranging from -1 to 1, you can use a really simple vertex shader:

attribute vec2 in_Position;   // quad corners in [-1, 1]
varying vec2 texcoord;

const vec2 madd = vec2(0.5, 0.5);

void main(void)
{
    gl_Position = vec4(in_Position, -1.0, 1.0);
    texcoord = in_Position.xy * madd + madd;   // remap [-1, 1] to [0, 1]
}

2) Having a combined normal and depth texture instead of two separate ones.

Nah, that won't cause problems. It's a common practice to do so.

answered Sep 23 '22 by stativ