This minimal Metal shader pair renders a simple interpolated gradient onto the screen (when provided with a triangle or quad of vertices), based on the vertices' color attributes:
#include <metal_stdlib>

using namespace metal;

typedef struct {
    float4 position [[position]];
    float4 color;
} vertex_t;

vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]],
                                uint vid [[vertex_id]]) {
    return vertices[vid];
}

fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
    return half4(interpolated.color);
}
…with the following vertices:
{
    // x,   y,    z,   w,   r,   g,   b,   a
     1.0, -1.0,  0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
    -1.0, -1.0,  0.0, 1.0, 0.0, 1.0, 0.0, 1.0,
    -1.0,  1.0,  0.0, 1.0, 0.0, 0.0, 1.0, 1.0,
     1.0,  1.0,  0.0, 1.0, 1.0, 1.0, 0.0, 1.0,
     1.0, -1.0,  0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
    -1.0,  1.0,  0.0, 1.0, 0.0, 0.0, 1.0, 1.0
}
So far, so good. It renders the well-known gradient triangle/quad, the one you find in pretty much every GPU "Hello World" tutorial.
However, I need a fragment shader that, instead of taking the interpolated vertex color, computes a color based on the fragment's position on screen. It receives a screen-filling quad of vertices and then uses only the fragment shader to calculate the actual colors.
From my understanding, the position of a vertex is a float4 with the first three elements being the 3D position vector and the fourth element set to 1.0.
So, I thought, it should be easy to modify the above to have it simply reinterpret the vertex's position as a color in the fragment shader, right?
#include <metal_stdlib>

using namespace metal;

typedef struct {
    float4 position [[position]];
} vertex_t;

vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]],
                                uint vid [[vertex_id]]) {
    return vertices[vid];
}

fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
    float4 color = interpolated.position;
    color += 1.0; // move from range -1..1 to 0..2
    color *= 0.5; // scale from range 0..2 to 0..1
    return half4(color);
}
…with the following vertices:
{
    // x,   y,    z,   w
     1.0, -1.0,  0.0, 1.0,
    -1.0, -1.0,  0.0, 1.0,
    -1.0,  1.0,  0.0, 1.0,
     1.0,  1.0,  0.0, 1.0,
     1.0, -1.0,  0.0, 1.0,
    -1.0,  1.0,  0.0, 1.0
}
I was quite surprised, however, to find a uniformly colored (yellow) screen being rendered, instead of a gradient going from red = 0.0 to red = 1.0 along the x-axis and from green = 0.0 to green = 1.0 along the y-axis:
(expected render output vs. actual render output)
The interpolated.position appears to be yielding the same value for each fragment.
What am I doing wrong here?
PS: (While this dummy fragment logic could easily have been accomplished using vertex interpolation, my actual fragment logic cannot.)
A fragment shader is the same thing as a pixel shader. It is the shader stage that processes a fragment generated by rasterization into a set of colors and a single depth value, and it runs after a primitive has been rasterized: for each sample of the pixels covered by a primitive, a "fragment" is generated. (This description uses OpenGL terminology, but the same model applies to Metal.)

In this case, your fragment shader runs once for every fragment within that quad; if the quad covered a 100 × 100 pixel render target, that would mean 100 * 100 = 10,000 invocations, once per covered pixel. However, not every fragment processed by the shader necessarily ends up displayed on the screen.

All shader stages can pass data between them using input and output variables. If we create an output variable in the vertex shader, we can read it in the fragment shader as an input variable.
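In Metal terms (the description above uses OpenGL vocabulary), that input/output pairing is expressed as a struct returned by the vertex function and received by the fragment function via [[stage_in]], exactly as in the question's first listing. Here is a minimal sketch; the names VaryingData, pass_through, and show_uv are illustrative, not from the original post:

#include <metal_stdlib>

using namespace metal;

struct VaryingData {
    float4 position [[position]]; // consumed by the rasterizer
    float2 uv;                    // user-defined output, interpolated per fragment
};

vertex VaryingData pass_through(const device float4 *positions [[buffer(0)]],
                                uint vid [[vertex_id]]) {
    VaryingData out;
    out.position = positions[vid];
    out.uv = positions[vid].xy * 0.5 + 0.5; // remap clip-space -1..1 to 0..1
    return out;
}

fragment half4 show_uv(VaryingData in [[stage_in]]) {
    // in.uv arrives here already interpolated across the primitive
    return half4(half2(in.uv), 0.0h, 1.0h);
}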
"The interpolated.position appears to be yielding the same value for each fragment."
No, the values are just very large. In the fragment shader, the variable with the [[position]] qualifier is in pixel coordinates. Divide by the render target dimensions and you'll see what you want, except that you'll also have to invert the green value, because Metal's convention places the origin for this at the upper left, not the bottom left.
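Concretely, a corrected fragment shader might look like the following sketch. It assumes the host app passes the drawable's size in pixels as a float2 in a fragment buffer (for example via setFragmentBytes(_:length:index:) on the render command encoder); the buffer index 0 and the name viewport_size are illustrative assumptions, not from the original post:

#include <metal_stdlib>

using namespace metal;

typedef struct {
    float4 position [[position]];
} vertex_t;

fragment half4 fragment_function(vertex_t interpolated [[stage_in]],
                                 constant float2 &viewport_size [[buffer(0)]]) {
    // [[position]] holds pixel (window) coordinates in the fragment shader,
    // with the origin at the upper-left corner of the render target.
    float2 normalized = interpolated.position.xy / viewport_size; // now 0..1
    float red   = normalized.x;
    float green = 1.0 - normalized.y; // flip, since Metal's window origin is top-left
    return half4(red, green, 0.0, 1.0);
}

With the screen-filling quad from the question, this yields the expected gradient: red increasing along the x-axis and green increasing along the y-axis.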