I'm attempting to use shaders to modify a texture that is bound to a framebuffer, but I'm confused as to how the shaders get the "original" input values.
I'm doing the following:
GLuint textureId = 0;
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, ...);
GLuint framebufferId = 0;
glGenFramebuffers(1, &framebufferId);
glBindFramebuffer(GL_FRAMEBUFFER, framebufferId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
glBindTexture(GL_TEXTURE_2D, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) { ... }
glUseProgram(programId);
const GLenum buffer = GL_COLOR_ATTACHMENT0;
glDrawBuffers(1, &buffer);
What would empty vertex and fragment shaders look like? As I'm not drawing primitives, how do I set gl_Position
in the vertex shader? What about passing through the input colour as the output colour of the fragment shader?
Empty vertex shader:
#version 330
void main()
{
gl_Position = ??;
}
Empty fragment shader:
#version 330
layout(location = 0) out vec4 out_colour;
void main()
{
out_colour = ???;
}
All shader stages can pass data between them using input and output variables. If we create an output variable in the vertex shader, we can read it in the fragment shader as an input variable.
The most common way to feed bulk data into a shader is through textures. Although textures are most commonly used to store colour information (albedo, roughness maps, specular maps, tint maps, etc.), they can store any kind of data (e.g. normal maps).
Vertex and fragment shaders differ in which stage of the rendering pipeline they occupy. A vertex shader transforms the geometry of the scene and performs the 3D projection; a fragment shader determines the colour of each pixel in the render target.
Data is passed from shader to shader using the in and out keywords. You create an output variable with the out keyword; the out variable in one shader provides the input data to the next shader, where it is declared as an in variable. The only condition is that both variables have the same name and type.
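For example, a minimal sketch of such a matched pair (the variable name vertex_colour is illustrative):
Vertex shader:
#version 330
layout(location = 0) in vec3 position;
out vec4 vertex_colour; //output, picked up by the fragment shader
void main()
{
    vertex_colour = vec4(1.0, 0.0, 0.0, 1.0);
    gl_Position = vec4(position, 1.0);
}
Fragment shader:
#version 330
in vec4 vertex_colour; //same name and type as the vertex shader output
layout(location = 0) out vec4 out_colour;
void main()
{
    out_colour = vertex_colour;
}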
I was under the impression that you could render to an offscreen framebuffer, with an attached texture, then use shaders to modify the texture, then use glReadPixels to get the modified data back. This is what I'm trying to do.
Ah ok, so you want to feed a texture through a fragment shader to produce a new texture. First of all, keep in mind that you cannot just modify a texture in-place, since you cannot read from the texture you are currently rendering to. You have to feed the texture to be modified into the fragment shader as an ordinary texture and write the result into the framebuffer as usual, which could be an FBO with a different texture attached, a renderbuffer (if you want to read it back to the CPU anyway), or the default framebuffer. You don't need an FBO if you just want to transform one image into another; you only need one if you want the result written into an offscreen buffer or a texture.
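A rough sketch of that setup, assuming a second texture targetTextureId (a hypothetical name) with the same size and format as the source:
//render into targetTextureId while reading from the original texture
GLuint targetFramebufferId = 0;
glGenFramebuffers(1, &targetFramebufferId);
glBindFramebuffer(GL_FRAMEBUFFER, targetFramebufferId);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTextureId, 0);
//the texture to be modified is bound as an ordinary input texture
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);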
Furthermore you still have to draw something in order for the rasterizer to generate actual fragments to invoke the fragment shader for. The usual way to do this is to just draw a screen-sized quad parallel to the viewing plane, in order to fill the complete viewport with fragments:
//initialization code
glGenVertexArrays(1, &quad_vao);
glBindVertexArray(quad_vao);
const GLfloat vertices[] = {
    -1.0f,  1.0f,
    -1.0f, -1.0f,
     1.0f,  1.0f,
     1.0f, -1.0f };
glGenBuffers(1, &quad_vbo);
glBindBuffer(GL_ARRAY_BUFFER, quad_vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
glDeleteBuffers(1, &quad_vbo); //the VAO still references the buffer, so it stays alive
...
//render code
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
As the vertex shader, a simple pass-through shader is enough, since the vertex positions are already in clip space:
#version 330
layout(location = 0) in vec4 in_position;
void main()
{
gl_Position = in_position;
}
In the fragment shader we take the texture as input. The texture coordinate is already given by the fragment's position on the screen; we just need to normalize it by dividing by the texture size (or use a GL_TEXTURE_RECTANGLE with a corresponding sampler2DRect, which lets us use the fragment coordinate directly):
#version 330
uniform sampler2D tex;
uniform vec2 tex_size;
layout(location = 0) out vec4 out_color;
void main()
{
vec4 in_color = texture(tex, gl_FragCoord.xy / tex_size);
out_color = in_color; //or do whatever you want with in_color
}
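Putting the pieces together, a minimal sketch of the render pass, assuming the target FBO from above, the quad VAO, a linked program programId containing the two shaders, and illustrative texture_width/texture_height variables (std::vector comes from <vector>):
glBindFramebuffer(GL_FRAMEBUFFER, targetFramebufferId);
glViewport(0, 0, texture_width, texture_height);
glUseProgram(programId);
glUniform1i(glGetUniformLocation(programId, "tex"), 0); //texture unit 0
glUniform2f(glGetUniformLocation(programId, "tex_size"), (GLfloat)texture_width, (GLfloat)texture_height);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId); //the texture to be modified
glBindVertexArray(quad_vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//read the modified data back to the CPU, as mentioned in the question
std::vector<GLubyte> pixels(texture_width * texture_height * 4);
glReadPixels(0, 0, texture_width, texture_height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());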
That's all; the modified image is written to whatever framebuffer is currently bound, regardless of where that redirects or what you do with the framebuffer data afterwards.
EDIT: With OpenGL 4.3 and its compute shaders there is now a more direct way for rasterization-free, pure GPGPU tasks like image processing. You can just invoke a compute shader (which is more similar to other GPU computing frameworks, like CUDA or OpenCL, than to the other OpenGL shader stages) over a regular 2D domain and process a texture directly in-place, using OpenGL 4.2's image load/store functionality. In this case all you need is the corresponding compute shader:
#version 430
layout(local_size_x=32,local_size_y=8) in; //or whatever fits hardware and shader
layout(binding = 0, rgba8) uniform image2D img; //adjust format to the actual data
void main()
{
const ivec2 idx = ivec2(gl_GlobalInvocationID.xy);
vec4 color = imageLoad(img, idx);
//do whatever you want with color
imageStore(img, idx, color);
}
Then all you need to do is bind the texture to the corresponding image unit (0, as set in the shader) and invoke a compute shader over the 2-dimensional image domain:
//again use the format that fits the texture data
glBindImageTexture(0, textureId, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
glUseProgram(compute_program); //a program with a single GL_COMPUTE_SHADER
glDispatchCompute(texture_width / 32, texture_height / 8, 1); //counts are in work groups, so divide by the local size (assuming the dimensions are divisible)
And that's all: you don't need an FBO, you don't need any other shaders, and you don't need to draw anything; it's just raw computation. But whether this more direct approach also gives better performance has to be evaluated. Likewise, you may need to pay attention to proper memory synchronization of the modified texture, especially when reading from it afterwards; consult deeper material on image load/store for further information.
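For example, a sketch of such a synchronization point after the dispatch (pick the barrier bits matching whatever you do with the texture next):
glDispatchCompute(texture_width / 32, texture_height / 8, 1);
//make the image writes visible to subsequent texture fetches and read-backs
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_TEXTURE_UPDATE_BARRIER_BIT);
//now it is safe to sample the texture or read it back with glGetTexImage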