I see a lot of different fragment shaders, for example:
#version 130

out vec4 flatColor;

void main(void)
{
    flatColor = vec4(0.0, 1.0, 0.0, 0.5);
}
And they all use a different variable for the "out color" (in this case flatColor). So how does OpenGL know what you're trying to do?
I'm guessing this works because flatColor is the only variable defined as out, but you're allowed to add more out variables, aren't you? Or would that just crash?
Actually, as a test, I just ran this:
#version 330

in vec2 TexCoord0;
uniform sampler2D TexSampler;

out vec4 x;
out vec4 y;

void main()
{
    y = texture2D(TexSampler, TexCoord0.xy);
}
It worked fine whether I used x or y.
(Answer: the compiler optimizes out the unused variable, so the remaining variable is assigned to location 0.)
Furthermore, we have a predefined gl_FragColor. What's the difference, and why do people usually insist on using their own variables?
A fragment shader is the shader stage that processes a fragment generated by rasterization into a set of colors and a single depth value. It is the pipeline stage that runs after a primitive is rasterized: for each sample of the pixels covered by the primitive, a "fragment" is generated.
A pixel shader (called a fragment shader in OpenGL) calculates the colour of a pixel on the screen based on what the vertex shader passes in, the bound textures, and user-supplied data. It cannot read the current screen at all; it only works out what colour/transparency that pixel should be for the current primitive.
The pixel shader is another programmable function that allows flexibility in shading an individual pixel. Whereas vertex shaders can be used to completely transform the shape of an object, pixel shaders are used to change the appearance of the pixels.
A fragment shader is the same as a pixel shader. One main difference is that a vertex shader can manipulate the attributes of vertices, which are the corner points of your polygons, while the fragment shader takes care of how the pixels between the vertices look.
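To make that division of labour concrete, here is a minimal sketch (not taken from any of the excerpts above) of a matching vertex/fragment shader pair, shown as C string constants; the variable names are placeholders:

/* Minimal sketch: the vertex shader positions the corner points,
 * the fragment shader decides the color of every fragment
 * rasterized between them. */
static const char *vertex_src =
    "#version 330\n"
    "layout(location = 0) in vec3 position;\n"    /* per-vertex attribute */
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(position, 1.0);\n"    /* place the corner point */
    "}\n";

static const char *fragment_src =
    "#version 330\n"
    "out vec4 fragColor;\n"                       /* user-defined output */
    "void main()\n"
    "{\n"
    "    fragColor = vec4(1.0, 0.5, 0.0, 1.0);\n" /* color for this fragment */
    "}\n";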
Furthermore, we have a predefined gl_FragColor.
Let's start with this. No, you don't have the predefined gl_FragColor. That was removed from core OpenGL 3.1 and above. Unless you're using the compatibility profile (in which case your 3.30 shaders should say #version 330 compatibility at the top), you should never use it.
Now, back to user-defined fragment shader outputs. But first, a quick analogy.
Remember how, in vertex shaders, you have inputs? And these inputs represent vertex attribute indices, the numbers you pass to glVertexAttribPointer and glEnableVertexAttribArray and so forth? You set up which input pulls from which attribute. In GLSL 3.30, you use this syntax:
layout(location = 2) in vec4 color;
This sets the color vertex shader input to come from attribute location 2. Before 3.30 (or without ARB_explicit_attrib_location), you would have to either set this up explicitly with glBindAttribLocation before linking or query the program for the attribute index with glGetAttribLocation after linking. If you don't explicitly provide an attribute location, GLSL will assign a location arbitrarily (i.e. in an implementation-defined manner).
Setting it in the shader is almost always the better option.
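As a hedged sketch of those two pre-3.30 alternatives (standard GL calls; the helper name is mine), assuming prog already has its compiled shaders attached and uses the color input from the snippet above:

#include <GL/glew.h>   /* assumes a GL function loader such as GLEW */

/* Sketch: binding/querying the "color" attribute without
 * layout(location = ...) in the shader. */
void setup_color_attribute(GLuint prog)
{
    /* Option 1: decide the location yourself, BEFORE linking. */
    glBindAttribLocation(prog, 2, "color");
    glLinkProgram(prog);

    /* Option 2: let the linker pick, then ask AFTER linking. */
    GLint colorLoc = glGetAttribLocation(prog, "color");

    /* Either way, the resulting location is what you feed to
     * glVertexAttribPointer / glEnableVertexAttribArray
     * (with a suitable VAO/VBO bound). */
    if (colorLoc >= 0)
        glEnableVertexAttribArray((GLuint)colorLoc);
}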
In any case, fragment shader outputs work almost exactly the same way. Fragment shaders can write to multiple output colors, which themselves get mapped to multiple buffers in the framebuffer. Therefore, you need to indicate which output goes to which fragment output color.
This process begins with the fragment output location value. It's set very similarly to vertex shader input locations:
layout(location = 1) out vec4 secColor;
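The shader-side location is only half of the mapping. As a hedged illustration (not part of the original answer), on the C side glDrawBuffers decides which framebuffer attachment each fragment output location actually lands in, assuming a framebuffer with two color attachments is currently bound:

#include <GL/glew.h>   /* assumes a GL function loader such as GLEW */

/* Sketch: route fragment output locations to framebuffer attachments.
 * Output location 0 -> GL_COLOR_ATTACHMENT0,
 * output location 1 (secColor above) -> GL_COLOR_ATTACHMENT1. */
void setup_draw_buffers(void)
{
    const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
}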
There are also the API functions glBindFragDataLocation and glGetFragDataLocation, which are analogous to glBindAttribLocation and glGetAttribLocation.
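A hedged sketch of how those two calls are typically used (standard GL entry points; the helper name is mine), assuming prog already has its compiled shaders attached and declares the secColor output from the snippet above:

#include <GL/glew.h>   /* assumes a GL function loader such as GLEW */

/* Sketch: assigning/querying fragment output locations from the API
 * instead of layout(location = ...) in the shader. */
void setup_frag_outputs(GLuint prog)
{
    /* Before linking: choose the output location yourself. */
    glBindFragDataLocation(prog, 1, "secColor");
    glLinkProgram(prog);

    /* After linking: ask which location an output ended up with. */
    GLint loc = glGetFragDataLocation(prog, "secColor"); /* 1 here */
    (void)loc;
}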
If you don't do any explicit assignments, implementations usually will assign one of your output variables to location 0. However, the OpenGL standard does not require this behavior, so you should not depend on it either.
Now to be fair, your program should have failed to link when you used two outputs that didn't get different output locations. What probably happened is that your compiler optimized away the one you didn't write to, so it kinda forgot about it when it came time to check for linker errors.
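If you want the linker to actually tell you about that, rather than relying on the optimizer, here is a hedged sketch of the usual link-status check (standard GL calls; the helper name is mine):

#include <GL/glew.h>   /* assumes a GL function loader such as GLEW */
#include <stdio.h>

/* Sketch: after glLinkProgram, check whether linking succeeded
 * and print the linker's message if it did not. */
int check_link_status(GLuint prog)
{
    GLint ok = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "link failed: %s\n", log);
    }
    return ok == GL_TRUE;
}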
I'd like to point this out specifically for OpenGL ES 3.1, which uses GLSL ES 3.10 (link):
§4.4.2
If there is only a single output [in the fragment shader], the location does not need to be specified, in which case it defaults to zero.
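For illustration, a minimal sketch of a GLSL ES 3.10 fragment shader that relies on that rule (shown as a C string constant; the names are placeholders):

/* Sketch: single fragment output with no explicit location;
 * per the quoted rule it defaults to location 0. */
static const char *es_fragment_src =
    "#version 310 es\n"
    "precision mediump float;\n"
    "out vec4 fragColor;\n"     /* no layout(location = ...) needed */
    "void main()\n"
    "{\n"
    "    fragColor = vec4(1.0);\n"
    "}\n";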