I have a question about the new compute shaders. I am currently working on a particle system. I store all my particles in a shader storage buffer so I can access them in the compute shader, and then I dispatch a one-dimensional work group:
#define WORK_GROUP_SIZE 128
_shaderManager->useProgram("computeProg");
glDispatchCompute((_numParticles/WORK_GROUP_SIZE), 1, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
My compute shader:
#version 430
struct particle{
vec4 currentPos;
vec4 oldPos;
};
layout(std430, binding=0) buffer particles{
particle p[];
};
layout (local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main(){
uint gid = gl_GlobalInvocationID.x;
p[gid].currentPos.x += 100;
}
But somehow not all particles are affected. I am doing it the same way it is done in this example, but it doesn't work: http://education.siggraph.org/media/conference/S2012_Materials/ComputeShader_6pp.pdf
Edit:
After calling glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT), I go on like this:
_shaderManager->useProgram("shaderProg");
glBindBuffer(GL_ARRAY_BUFFER, shaderStorageBufferID);
glVertexPointer(4,GL_FLOAT,sizeof(glm::vec4), (void*)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, _numParticles);
glDisableClientState(GL_VERTEX_ARRAY);
So which bit would be appropriate to use in this case?
You have your barriers on backwards. It's a common problem.
The bits you give to the barrier describe how you intend to *use* the data that was written, not how the data was written. GL_SHADER_STORAGE_BARRIER_BIT would only be appropriate if some process wrote to a buffer object via image load/store (or a storage buffer/atomic counters), and you then used a storage buffer to read that buffer object's data.

Since you're reading the buffer as a vertex attribute array, you should use the cleverly titled GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT.
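Applied to the code in the question, the only change needed is the barrier bit between the dispatch and the draw. A sketch, assuming a valid GL context and reusing the question's own names (_shaderManager, shaderStorageBufferID, _numParticles) — not verifiable standalone since it needs that surrounding setup:

```cpp
_shaderManager->useProgram("computeProg");
glDispatchCompute(_numParticles / WORK_GROUP_SIZE, 1, 1);

// We are about to READ the buffer as a vertex attribute array,
// so the barrier bit names that subsequent use:
glMemoryBarrier(GL_VERTEX_ATTRIB_ARRAY_BARRIER_BIT);

_shaderManager->useProgram("shaderProg");
glBindBuffer(GL_ARRAY_BUFFER, shaderStorageBufferID);
glVertexPointer(4, GL_FLOAT, sizeof(glm::vec4), (void*)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, _numParticles);
glDisableClientState(GL_VERTEX_ARRAY);
```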