OpenGL: Projecting view-space co-ordinates to NDCs, results seemingly outside of [-1,1] range

I've been attempting to implement screen-space ambient occlusion as instructed by this tutorial. I've been tackling problems with my implementation as I've come across them, but this one has me stumped at the moment.

My understanding of the method is as follows. The ambient occlusion factor is determined by samples taken from within a hemisphere aligned to the normal of a given fragment. To determine whether a sample contributes towards the ambient occlusion factor, I must check the sample's depth in view space against a view-space depth texture (included in the bottom-left corner of this post's images). So that I know which co-ordinates to fetch from the depth texture, I must convert the sample's co-ordinates from view space to normalized device co-ordinates (in the range [-1,1]) and then to the range [0,1], so that the depth texture effectively 'maps' to the viewport.
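In code terms, the conversion I have in mind looks roughly like this (a simplified sketch of my intent rather than my actual shader, where sample_view stands for a sample's view-space position and u_projection_matrix is my projection matrix):

vec4 clip = u_projection_matrix * vec4(sample_view, 1.0);  // view space -> clip space
vec2 ndc = clip.xy / clip.w;                               // perspective divide -> NDC, expected in [-1,1]
vec2 tex_coord = ndc * 0.5 + 0.5;                          // remap to [0,1] for the depth texture fetch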

The following image is of my ambient occlusion laid over my scene. I'm aware I've got a fairly obvious issue with the ambient occlusion itself (I assume the hemispheres are orientated incorrectly) which I'll deal with in time, but what's piquing my curiosity currently is the appearance of the occlusion being 'downscaled', suggesting that my operation to move from view-space sample co-ordinates to texture co-ordinates is incorrect.

[image: ambient occlusion overlaid on the scene]

As I'm lacking a stable shader debugger, the debugging I can do is limited to what I can render to the screen. The next image is created with the following code, with ndcs being the normalized device co-ordinates for a given sample:

if (ndcs.x > 1.0f || ndcs.y > 1.0f || ndcs.x < -1.0f || ndcs.y < -1.0f)
{
  gl_FragColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
}
else
{
  gl_FragColor = vec4(vec3(1.0f), 1.0f);
}

[image: debug output, with red marking fragments whose NDCs fall outside [-1,1]]

I would expect the image to be entirely white (or rather, the bits I'm using this shader for), however it seems to suggest that the NDCs I'm creating are outside of the [-1,1] range, which I believe must be incorrect. It's not a consistent area of the screen either, as you can see in the following image where the camera is very near a surface:

[image: debug output with the camera very near a surface]

I've never used this procedure to get NDCs before, so I'm sure my logic must be wrong somewhere. I've downloaded the demo code supplied with the tutorial and I can't see where my code differs. I've also searched online (including on this very site) and I don't seem to be able to find anyone with quite the same symptoms as myself.

Here's the relevant code from my shaders:

Vert Shader:

v_eye_space_position = u_mvpMatrix * a_position;
v_world_space_normal = normalize(u_rotationMatrix * a_normal);
v_eye_space_normal = normalize(u_mvpMatrix * a_normal);
gl_Position = v_eye_space_position;

Frag Shader:

// --- SSAO Testing ---
// Convert from the noise texture back to [-1,1] range
// We want the noise texture to tile across the screen.
vec3 kernel_rotation = (texture2D(s_noise, gl_FragCoord.xy * u_noise_scale) * 2.0f - 1.0f).xyz;
vec3 eye_space_tangent = normalize(kernel_rotation - v_eye_space_normal.xyz * dot(kernel_rotation, v_eye_space_normal.xyz));
vec3 eye_space_bitangent = cross(v_eye_space_normal.xyz, eye_space_tangent);
mat3 tbn = mat3(eye_space_tangent, eye_space_bitangent, v_eye_space_normal);

float ambient_occlusion = 0.0f;
const float hemisphere_radius = 0.05f;

for (int i=0; i<16; i++)
{
  vec3 kernel_sample = tbn * u_ssao_kernel[i];
  kernel_sample = kernel_sample * hemisphere_radius + v_eye_space_position.xyz;

  // Project the sample position into screen space.
  vec4 offset = vec4(kernel_sample, 1.0f);
  offset = u_projection_matrix * offset;
  offset.xy /= offset.w;
  vec4 ndcs = offset;
  offset.xy = 1.0f - (offset.xy * 0.5f + 0.5f);

  // Find the depth at this sample point.
  float sample_depth = texture2D(s_camera_depth, offset.xy).z;

  // Check if the sample point is occluded.
  float range_check = 0.0f;

  float linear_eye_space_position = (v_eye_space_position.z - u_near_plane)/(u_far_plane - u_near_plane);

  // Range check.
  if (abs(linear_eye_space_position - sample_depth) < hemisphere_radius)
  {
    range_check = 1.0f;
  }

  float linear_kernel_sample_depth = (kernel_sample.z - u_near_plane)/(u_far_plane - u_near_plane);
  if (sample_depth <= linear_kernel_sample_depth)
  {
    ambient_occlusion += 1.0f * range_check;
  }
}

// Average and invert the ambient occlusion.
ambient_occlusion = 1.0f - (ambient_occlusion/16.0f);

I've looked at each element in isolation and I can't see the problem with them.

  • I've substituted the fragment's own view-space position into the projection (instead of the sample's view-space position) and I get the same results.
  • The projection matrix is the perspective matrix I use to transform the model's vertices (I've constructed the MVP matrix in my vertex shader to ensure the projection matrix is reaching the shader program intact, and it is).
  • The projection operation itself - it's not something I've done before, but I've read articles online and questions from people with projection problems and I can't see what I'm doing wrong.

As such, I can only conclude that there must be something fundamental about my understanding of perspective projection which is flawed, but I just can't figure out what. If any of you guys could shed some light on the problem or further avenues for me to check, I'd be most grateful. If there's any useful information I've omitted or anything I can clarify, please just let me know.

asked Oct 31 '22 by Richard Williams


1 Answer

From your vertex shader:

v_eye_space_position = u_mvpMatrix * a_position;
[...]
gl_Position = v_eye_space_position;

From this, we can see that v_eye_space_position is not the eye space position of the vertex but the clip space position, which is also what gets assigned to gl_Position. The name of your matrix uniform also suggests that it is the ModelViewProjection matrix.

In your fragment shader, you basically multiply that result by the projection matrix again (since you seem to assume that it is in eye space).

So the correct code would be:

v_eye_space_position = u_mvMatrix * a_position;
[...]
gl_Position = u_projection_matrix * v_eye_space_position;

Now, you can apply the projection to v_eye_space_position again in the fragment shader. But my question is: why do this again? If you want to work in screen space, gl_FragCoord is already in window space. You only need a multiply-add to get from window space to NDC, simply by inverting the viewport (and depth range) transform.
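For example (a rough sketch, assuming a uniform u_viewport_size holding the viewport dimensions in pixels):

// window space -> NDC by inverting the viewport transform
vec2 ndc = (gl_FragCoord.xy / u_viewport_size) * 2.0 - 1.0;

// or, if you only need [0,1] texture co-ordinates for the depth lookup:
vec2 tex_coord = gl_FragCoord.xy / u_viewport_size;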

answered Nov 17 '22 by derhass