I'm looking at this example code from the WebGL2 library PicoGL.js.
It describes a single triangle (three vertices: (-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)), each of which is assigned a color (red, green, blue) by the vertex shader:
#version 300 es
layout(location=0) in vec4 position;
layout(location=1) in vec3 color;
out vec3 vColor;
void main() {
vColor = color;
gl_Position = position;
}
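For context, the per-vertex positions and colors reach attribute locations 0 and 1 through vertex buffers, roughly like this (a sketch of the PicoGL.js setup from memory, not the exact example code):

// "canvas" is an existing <canvas> element; vertexShaderSource and
// fragmentShaderSource hold the two GLSL strings quoted in this question.
var app = PicoGL.createApp(canvas).clearColor(0.0, 0.0, 0.0, 1.0);

var program = app.createProgram(vertexShaderSource, fragmentShaderSource);

// Attribute location 0: one (x, y) position per vertex.
var positions = app.createVertexBuffer(PicoGL.FLOAT, 2, new Float32Array([
    -0.5, -0.5,
     0.5, -0.5,
     0.0,  0.5
]));

// Attribute location 1: one RGB color per vertex (red, green, blue).
var colors = app.createVertexBuffer(PicoGL.FLOAT, 3, new Float32Array([
    1, 0, 0,
    0, 1, 0,
    0, 0, 1
]));

var triangleArray = app.createVertexArray()
    .vertexAttributeBuffer(0, positions)
    .vertexAttributeBuffer(1, colors);

var drawCall = app.createDrawCall(program, triangleArray);
app.clear();
drawCall.draw();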
The vColor output is passed to the fragment shader:
#version 300 es
precision highp float;
in vec3 vColor;
out vec4 fragColor;
void main() {
fragColor = vec4(vColor, 1.0);
}
and together they render the following image:
My understanding is that the vertex shader is called once per vertex, whereas the fragment shader is called once per pixel.
However, the fragment shader references the vColor variable, which is only assigned once per vertex shader invocation - and there are many more pixels than vertices!
The resulting image clearly shows a color gradient - why?
Does WebGL automatically interpolate values of vColor for pixels in between vertices? If so, how is the interpolation done?
Yes, WebGL automatically interpolates between the values supplied to the 3 vertices.
Copied from this site
A linear interpolation from one value to another would be this formula
result = (1 - t) * a + t * b
Where t is a value from 0 to 1 representing some position between a and b. 0 at a and 1 at b.
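For example, here is that formula as a tiny JavaScript helper (just an illustration, not anything WebGL exposes):

// Plain linear interpolation: t = 0 returns a, t = 1 returns b.
function lerp(a, b, t) {
  return (1 - t) * a + t * b;
}

// Interpolating the red channel between the red vertex (1.0) and the green vertex (0.0):
console.log(lerp(1.0, 0.0, 0.25)); // 0.75
console.log(lerp(1.0, 0.0, 0.5));  // 0.5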
For varyings though WebGL uses this formula:
result = (1 - t) * a / aW + t * b / bW
         -----------------------------
            (1 - t) / aW + t / bW
Where aW is the W that was set on gl_Position.w when the varying was set to a, and bW is the W that was set on gl_Position.w when the varying was set to b.
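As code, that perspective-correct version might look like this (an illustrative sketch only; the GPU does this per fragment across all three vertices of the triangle, not just two values):

// Perspective-correct interpolation of a varying between two vertices.
// a, b   : the varying's value at each vertex
// aW, bW : gl_Position.w at each vertex
// t      : 0..1 position between the two vertices
function interpolateVarying(a, b, aW, bW, t) {
  var numerator   = (1 - t) * a / aW + t * b / bW;
  var denominator = (1 - t) / aW + t / bW;
  return numerator / denominator;
}

// In the triangle example above, position.w is never written, so it keeps the
// attribute default of 1.0; with aW = bW = 1 the formula reduces to plain
// linear interpolation, which is why the gradient appears linear.
console.log(interpolateVarying(1.0, 0.0, 1.0, 1.0, 0.5)); // 0.5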
The site linked above shows how that formula generates perspective-correct texture mapping coordinates when interpolating varyings.
It also shows an animation of the varyings changing.