I'm trying to sample a depth texture in a compute shader and copy it into another texture.
The problem is that I don't get correct values when I read from the depth texture:
I've checked (with gDEBugger) that the initial values of the depth texture are correct, so it's the imageLoad GLSL function that retrieves the wrong values.
This is my GLSL Compute shader:
#version 430

layout (binding=0, r32f) readonly uniform image2D depthBuffer;
layout (binding=1, rgba8) writeonly uniform image2D colorBuffer;
// we use 16 * 16 threads groups
layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
ivec2 position = ivec2(gl_GlobalInvocationID.xy);
// Sampling from the depth texture
vec4 depthSample = imageLoad(depthBuffer, position);
// We linearize the depth value
float f = 1000.0;
float n = 0.1;
float z = (2.0 * n) / (f + n - depthSample.r * (f - n));
// even if I call memoryBarrier(), barrier() or memoryBarrierShared() here, I still get the same bug
// and finally, we try to create a grayscale image of the depth values
imageStore(colorBuffer, position, vec4(z, z, z, 1));
}
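For reference, that linearization is the standard inversion of the perspective projection's depth mapping. For an NDC depth $z_{ndc} \in [-1, 1]$ and near/far planes $n$ and $f$:

$$z_{eye} = \frac{2fn}{f + n - z_{ndc}\,(f - n)}, \qquad z = \frac{z_{eye}}{f} = \frac{2n}{f + n - z_{ndc}\,(f - n)}$$

Note that the shader feeds the window-space depth $d \in [0, 1]$ straight into this formula; strictly speaking it should first be remapped with $z_{ndc} = 2d - 1$, but for a grayscale visualization the difference is only cosmetic.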
and this is how I'm creating the depth texture and the color texture:
// generate the depth texture
glGenTextures(1, &_depthTexture);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, wDimensions.x, wDimensions.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
// generate the color texture
glGenTextures(1, &_colorTexture);
glBindTexture(GL_TEXTURE_2D, _colorTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, wDimensions.x, wDimensions.y, 0, GL_RGBA, GL_FLOAT, NULL);
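(The depth pass itself isn't shown in the question; for reference, a minimal depth-only framebuffer setup matching the description just below, where depthFbo is an assumed name:)

// hypothetical depth-only FBO that the depth pass renders into
GLuint depthFbo;
glGenFramebuffers(1, &depthFbo);
glBindFramebuffer(GL_FRAMEBUFFER, depthFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _depthTexture, 0);
glDrawBuffer(GL_NONE); // no color attachment: depth-only pass
glReadBuffer(GL_NONE);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    fprintf(stderr, "depth FBO is incomplete\n");
glBindFramebuffer(GL_FRAMEBUFFER, 0);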
I fill the depth texture with depth values (I bind it to a framebuffer and render the scene), and then I call my compute shader like this:
_computeShader.use();
// try to synchronize with the previous pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
// even if I call glFinish() here, the result is the same
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
glBindImageTexture(1, _colorTexture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
glDispatchCompute((wDimensions.x + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE,
(wDimensions.y + WORK_GROUP_SIZE - 1) / WORK_GROUP_SIZE, 1); // one work group per 16x16 tile of pixels
// try to synchronize with the next pass
glMemoryBarrier(GL_ALL_BARRIER_BITS);
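As an aside on the two barriers: glMemoryBarrier() only orders incoherent writes (imageStore(), SSBO writes, atomics) relative to later commands. The depth pass writes through the framebuffer, which the GL already synchronizes with subsequent texture and image reads, so the barrier before the dispatch isn't needed; the one that matters comes after the dispatch and should match how _colorTexture is consumed next. A sketch (what the consuming pass does is an assumption):

// after the dispatch, before the pass that consumes _colorTexture:
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT); // if _colorTexture is sampled as a texture
// or GL_SHADER_IMAGE_ACCESS_BARRIER_BIT if it is read with imageLoad(),
// or GL_FRAMEBUFFER_BARRIER_BIT if it is attached to a framebuffer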
with WORK_GROUP_SIZE = 16 (matching the shader's local_size) and wDimensions being the size of the textures.
Do you have any idea of why I don't get valid depth values?
EDIT:
This is what the color texture looks like when I render a sphere (screenshot omitted).
Also, it seems that glClear(GL_DEPTH_BUFFER_BIT) doesn't do anything: even if I call it just before glDispatchCompute(), I still get the same image. How can that be possible?
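(Not part of the original post: one way to surface this class of failure is OpenGL's debug output on a 4.3 debug context; many drivers report invalid image bindings through it.)

// debug-output sketch: register once after context creation
void GLAPIENTRY onGlDebug(GLenum source, GLenum type, GLuint id, GLenum severity,
                          GLsizei length, const GLchar* message, const void* user)
{
    fprintf(stderr, "GL debug: %s\n", message);
}
...
glEnable(GL_DEBUG_OUTPUT);
glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);
glDebugMessageCallback(onGlDebug, NULL);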
Actually, I discovered that you cannot bind a depth texture as an image in a compute shader, even with the readonly qualifier: depth formats such as GL_DEPTH_COMPONENT32F are not among the internal formats supported by image load/store, so imageLoad() on such a binding returns undefined values (which is also why the glClear() above seemed to have no effect).
So I replaced:
glBindImageTexture(0, _depthTexture, 0, GL_FALSE, 0, GL_READ_ONLY, GL_R32F);
by:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _depthTexture);
and in my compute shader:
layout (binding=0, r32f) readonly uniform image2D depthBuffer;
by:
layout (binding = 0) uniform sampler2D depthBuffer;
and to sample it I just write:
ivec2 position = ivec2(gl_GlobalInvocationID.xy);
vec2 screenNormalized = vec2(position) / vec2(ctxSize); // ctxSize is the size of the depth and color textures
vec4 depthSample = texture(depthBuffer, screenNormalized); // texture() rather than the deprecated texture2D()
and it works very well this way.
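For completeness, the whole shader with that change applied might look like this (a sketch; texelFetch() reads a single texel at integer coordinates, which makes the normalized-coordinate computation unnecessary and avoids any filtering):

#version 430

layout (binding = 0) uniform sampler2D depthBuffer;
layout (binding = 1, rgba8) writeonly uniform image2D colorBuffer;

layout (local_size_x = 16, local_size_y = 16) in;

void main()
{
    ivec2 position = ivec2(gl_GlobalInvocationID.xy);
    // texelFetch() takes integer texel coordinates and a mip level
    float d = texelFetch(depthBuffer, position, 0).r;
    // linearize as before
    float f = 1000.0;
    float n = 0.1;
    float z = (2.0 * n) / (f + n - d * (f - n));
    imageStore(colorBuffer, position, vec4(z, z, z, 1.0));
}

(With layout(binding = 0) on the sampler, texture unit 0 is assigned automatically in GLSL 4.2+, so the glActiveTexture()/glBindTexture() pair above is all the host code needs; on older GLSL you would set the sampler uniform with glUniform1i() instead.)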