I'm trying to implement the paper "Single-Pass Wireframe Rendering", which seems pretty simple, but it isn't giving me the thick, dark edge values I'd expect.
The paper doesn't give exact code for computing the altitudes, so I did it as I saw fit. My code projects the three vertices into viewport space, computes their "altitudes", and sends them to the fragment shader.
The fragment shader determines the distance to the closest edge and generates an edgeIntensity. I'm not sure what I'm supposed to do with this value, but since it's supposed to lie in [0,1], I multiply my outgoing color by its inverse; the resulting edges are just very weak.
I had a few questions that I'm not sure are addressed in the paper. First, should the altitudes be calculated in 2D instead of 3D? Second, they cite DirectX features, and DirectX has a different viewport-space z-range, correct? Does that matter? I'm premultiplying the outgoing altitude distances by the w-value of the viewport-space coordinates, as they recommend, to correct for the perspective projection.
[Image: result when trying to correct for perspective projection]
[Image: result with no correction (not premultiplying by w-value)]
The uncorrected image clearly shows perspective problems on the faces angled away from the camera, but the perspective-corrected one has very weak values.
Can anyone see what's wrong with my code or how to go about debugging it from here?
my vertex code in GLSL...
float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
    vec3 ba = a - b;
    vec3 bc = c - b;
    vec3 ba_onto_bc = dot(ba,bc) * bc;
    return(length(ba - ba_onto_bc));
}
in vec3 vertex; // incoming vertex
in vec3 v2; // first neighbor (CCW)
in vec3 v3; // second neighbor (CCW)
in vec4 color;
in vec3 normal;
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform mat4 objToWorld;
uniform mat4 cameraPV;
uniform mat4 normalToWorld;
void main() {
    worldPos = (objToWorld * vec4(vertex,1.0)).xyz;
    worldNormal = (normalToWorld * vec4(normal,1.0)).xyz;
    //worldNormal = normal;
    gl_Position = cameraPV * objToWorld * vec4(vertex,1.0);
    // also put the neighboring vertices in viewport space
    vec4 vv1 = gl_Position;
    vec4 vv2 = cameraPV * objToWorld * vec4(v2,1.0);
    vec4 vv3 = cameraPV * objToWorld * vec4(v3,1.0);
    altitudes = vec3(vv1.w * altitude(vv1.xyz,vv2.xyz,vv3.xyz),
                     vv2.w * altitude(vv2.xyz,vv3.xyz,vv1.xyz),
                     vv3.w * altitude(vv3.xyz,vv1.xyz,vv2.xyz));
    gl_FrontColor = color;
}
and my fragment code...
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec3 altitudes;
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec4 singleColor;
uniform float isSingleColor;
void main() {
    // determine this fragment's distance to the closest edge
    float d = min(min(altitudes.x, altitudes.y), altitudes.z);
    float edgeIntensity = exp2(-2.0*d*d);
    vec3 L = lightDir;
    vec3 V = normalize(cameraPos - worldPos);
    vec3 N = normalize(worldNormal);
    vec3 H = normalize(L+V);
    //vec4 color = singleColor;
    vec4 color = isSingleColor*singleColor + (1.0-isSingleColor)*gl_Color;
    //vec4 color = gl_Color;
    float amb = 0.6;
    vec4 ambient = color * amb;
    vec4 diffuse = color * (1.0 - amb) * max(dot(L, N), 0.0);
    vec4 specular = vec4(0.0);
    gl_FragColor = (edgeIntensity * vec4(0.0)) + ((1.0-edgeIntensity) * vec4(ambient + diffuse + specular));
}
I have implemented swine's idea, and the result is perfect. Here is my screenshot:
[Image: screenshot of the wireframe result]
struct MYBUFFEREDVERTEX {
    float x, y, z;    // position
    float nx, ny, nz; // normal
    float u, v;       // texture coordinate
    float bx, by, bz; // barycentric coordinate
};
const MYBUFFEREDVERTEX g_vertex_buffer_data[] = {
    // vertex 0: position, normal, texcoord, barycentric
    -1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    // vertex 1
    1.0f, -1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    // vertex 2
    -1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    // vertex 3
    1.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f, 0.0f,
};
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
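For reference, here is a minimal sketch of how this interleaved buffer could be bound to the shader attributes; the loc_* location variables are hypothetical and would come from glGetAttribLocation:

// stride is the size of one interleaved vertex; offsets via offsetof (<stddef.h>)
GLsizei stride = sizeof(MYBUFFEREDVERTEX);
glEnableVertexAttribArray(loc_position);    // a_position
glVertexAttribPointer(loc_position, 3, GL_FLOAT, GL_FALSE, stride,
                      (const void*)offsetof(MYBUFFEREDVERTEX, x));
glEnableVertexAttribArray(loc_normal);      // a_normal
glVertexAttribPointer(loc_normal, 3, GL_FLOAT, GL_FALSE, stride,
                      (const void*)offsetof(MYBUFFEREDVERTEX, nx));
glEnableVertexAttribArray(loc_texcoord);    // a_texcoord
glVertexAttribPointer(loc_texcoord, 2, GL_FLOAT, GL_FALSE, stride,
                      (const void*)offsetof(MYBUFFEREDVERTEX, u));
glEnableVertexAttribArray(loc_barycentric); // a_barycentric
glVertexAttribPointer(loc_barycentric, 3, GL_FLOAT, GL_FALSE, stride,
                      (const void*)offsetof(MYBUFFEREDVERTEX, bx));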
vertex shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform mat4 u_mvp_matrix;
uniform vec3 u_light_direction;
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texcoord;
attribute vec3 a_barycentric;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    // Calculate vertex position in screen space
    gl_Position = u_mvp_matrix * vec4(a_position, 1.0);
    // Calculate light intensity, clamped to the range 0.3 ~ 1.0
    v_light_intensity = max(dot(u_light_direction, a_normal), 0.3);
    // Pass texture coordinate to fragment shader
    v_texcoord = a_texcoord;
    // Pass barycentric coordinate to fragment shader
    v_barycentric = a_barycentric;
}
fragment shader:
#ifdef GL_ES
// Set default precision to medium
precision mediump int;
precision mediump float;
#endif
uniform sampler2D u_texture;
varying vec2 v_texcoord;
varying float v_light_intensity;
varying vec3 v_barycentric;
void main()
{
    // distance to the nearest edge in barycentric space
    float min_dist = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
    float edgeIntensity = 1.0 - step(0.005, min_dist);
    // Set diffuse color from texture, modulated by light intensity
    vec4 diffuse = texture2D(u_texture, v_texcoord) * vec4(vec3(v_light_intensity), 1.0);
    gl_FragColor = edgeIntensity * vec4(0.0, 1.0, 1.0, 1.0) + (1.0 - edgeIntensity) * diffuse;
}
First, your function altitude() is flawed: ba_onto_bc is calculated incorrectly because bc is not unit length. Either normalize bc, or divide ba_onto_bc by dot(bc, bc), which is the length squared (this saves calculating the square root).
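For illustration, a corrected altitude() along those lines might look like this (a sketch of the fix, using the length-squared variant):

float altitude(in vec3 a, in vec3 b, in vec3 c) { // for an ABC triangle
    vec3 ba = a - b;
    vec3 bc = c - b;
    // project ba onto bc; dividing by dot(bc, bc) accounts for bc not being unit length
    vec3 ba_onto_bc = (dot(ba, bc) / dot(bc, bc)) * bc;
    // the altitude from A is the length of the component of ba perpendicular to bc
    return length(ba - ba_onto_bc);
}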
The altitudes should be calculated in 2D if you want constant-thickness edges, or in 3D if you want perspective-correct edges.
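For the 2D (constant-thickness) case, one possible vertex-shader sketch: do the perspective divide first, scale into pixel units, compute the altitudes there, then premultiply by w so the interpolated values stay linear in screen space. The viewportSize uniform is an assumption, not part of the original code:

uniform vec2 viewportSize; // assumed uniform: framebuffer size in pixels

// 2D altitude from A onto line BC, with the length-squared fix applied
float altitude2D(in vec2 a, in vec2 b, in vec2 c) {
    vec2 ba = a - b;
    vec2 bc = c - b;
    vec2 ba_onto_bc = (dot(ba, bc) / dot(bc, bc)) * bc;
    return length(ba - ba_onto_bc);
}

// in main(), after computing the clip-space positions vv1..vv3:
vec2 p1 = (vv1.xy / vv1.w) * 0.5 * viewportSize;
vec2 p2 = (vv2.xy / vv2.w) * 0.5 * viewportSize;
vec2 p3 = (vv3.xy / vv3.w) * 0.5 * viewportSize;
altitudes = vec3(vv1.w * altitude2D(p1, p2, p3),
                 vv2.w * altitude2D(p2, p3, p1),
                 vv3.w * altitude2D(p3, p1, p2));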
It would be much easier just to use barycentric coordinates as a separate vertex attribute (i.e. vertex 0 of the triangle gets (1, 0, 0), the second vertex (0, 1, 0), and the last vertex (0, 0, 1)). In the fragment shader you would calculate the minimum of the three components and use step() or smoothstep() to calculate the edge-ness.
That would require only one attribute instead of the current two, and it would also eliminate the need to calculate the heights in the vertex shader (although heights may still be useful if you want to prescale the barycentric coordinates so the lines have uniform thickness; calculate that offline). It should also work pretty much instantly, so it would be a good starting point for getting to the desired behavior; see the sketch below.
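As an illustration of the smoothstep() variant, a minimal fragment-shader sketch (not from the answer above; on GLES 2, fwidth() needs the GL_OES_standard_derivatives extension):

varying vec3 v_barycentric;

void main() {
    // distance to the nearest edge in barycentric space
    float d = min(min(v_barycentric.x, v_barycentric.y), v_barycentric.z);
    // soften the edge over roughly one pixel using screen-space derivatives
    float w = fwidth(d);
    float edgeIntensity = 1.0 - smoothstep(0.0, 1.5 * w, d);
    // blend a black wire color over a white fill
    gl_FragColor = mix(vec4(1.0), vec4(0.0, 0.0, 0.0, 1.0), edgeIntensity);
}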