How exactly does deferred shading work in LWJGL?

I want to start a deferred shading project with GLSL, Java & OpenGL.

1. How does a deferred rendering pipeline work? Does it render the scene once for each image? For example, when I want to create a specular, blur and shadow texture, do I need to render the scene for each of these textures?

I've seen some code snippets and there were no multiple render loops.

2. What is a geometry buffer and what does it do? Is it something like storage for scene data that I can draw to a texture without rendering the scene again?

asked Dec 09 '22 by bitQUAKE


1 Answer

To add something more specific so you can get started: you need FBOs with multiple attachments and a way for your shader to write to multiple FBO attachments. Google glDrawBuffers. Your FBO attachments also need to be textures so the information can be passed to a shader, and they should be the same size as the screen you are rendering to. There are many ways to approach this. Here is one example.

You need two FBOs

Geometry Buffer

1. Diffuse (GL_RGBA)
2. Normal Buffer (GL_RGB16F)
3. Position Buffer (GL_RGB32F)
4. Depth Buffer

Note that 3) is a huge waste since we can use the depth buffer and the projection to reconstruct the position, which is a lot cheaper. Having the position buffer to begin with is a good start at least. Attack one problem at a time.

The normal buffer (2) can also be compressed more.

Light Accumulation Buffer

1. Light Buffer (GL_RGBA)
2. Depth Buffer

The depth buffer attachment in this FBO should be the same attachment as in the geometry buffer. We might not use this depth buffer information in this example, but you will need it sooner or later. It will always contain the depth information from the first stage.
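
Here is a rough LWJGL sketch of that setup, assuming a GL 3.0 capable context. The texture formats match the layout above; the createTexture helper, the static handle fields and the throw-on-incomplete checks are just illustration, not the only way to do it. Note how both FBOs share the same depth texture and how glDrawBuffers routes gl_FragData[0..2] to the three color attachments.

import java.nio.ByteBuffer;
import java.nio.IntBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL14;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

public class DeferredBuffers {

    // Texture and FBO handles the rest of the renderer will use.
    public static int gBufferFbo, lightFbo;
    public static int diffuseTex, normalTex, positionTex, depthTex, lightTex;

    // Screen-sized texture with no initial data; NEAREST filtering because we
    // fetch exact texels in the later passes.
    static int createTexture(int width, int height, int internalFormat, int format, int type) {
        int tex = GL11.glGenTextures();
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, tex);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
        GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
        GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, type, (ByteBuffer) null);
        return tex;
    }

    // Call once after the GL context exists (Display/GLFW is already set up).
    public static void create(int width, int height) {
        // --- Geometry buffer: diffuse, normal, position + depth ---
        diffuseTex  = createTexture(width, height, GL11.GL_RGBA8,  GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE);
        normalTex   = createTexture(width, height, GL30.GL_RGB16F, GL11.GL_RGB,  GL11.GL_FLOAT);
        positionTex = createTexture(width, height, GL30.GL_RGB32F, GL11.GL_RGB,  GL11.GL_FLOAT);
        depthTex    = createTexture(width, height, GL14.GL_DEPTH_COMPONENT24, GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT);

        gBufferFbo = GL30.glGenFramebuffers();
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, gBufferFbo);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL11.GL_TEXTURE_2D, diffuseTex, 0);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT1, GL11.GL_TEXTURE_2D, normalTex, 0);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT2, GL11.GL_TEXTURE_2D, positionTex, 0);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTex, 0);

        // Route gl_FragData[0..2] to the three color attachments. This state is
        // stored per FBO, so setting it once here while the FBO is bound is enough.
        IntBuffer drawBuffers = BufferUtils.createIntBuffer(3);
        drawBuffers.put(GL30.GL_COLOR_ATTACHMENT0).put(GL30.GL_COLOR_ATTACHMENT1).put(GL30.GL_COLOR_ATTACHMENT2).flip();
        GL20.glDrawBuffers(drawBuffers);

        if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE)
            throw new IllegalStateException("Geometry buffer FBO is incomplete");

        // --- Light accumulation buffer: light color + the SAME depth texture ---
        lightTex = createTexture(width, height, GL11.GL_RGBA8, GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE);

        lightFbo = GL30.glGenFramebuffers();
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, lightFbo);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL11.GL_TEXTURE_2D, lightTex, 0);
        GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT, GL11.GL_TEXTURE_2D, depthTex, 0);

        if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE)
            throw new IllegalStateException("Light accumulation FBO is incomplete");

        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
    }
}

The diffuseTex, normalTex and positionTex handles are the ones you later bind as samplers in the light pass.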

How do we render this stuff?

We start by rendering our scene with very simple shaders whose main purpose is to fill the geometry buffer: we simply draw all our geometry with them and write the per-fragment data into the attachments. For simplicity I use #version 120 shaders and no texture mapping (although that is super trivial to add).

Vertex Shader :

#version 120

varying vec3 normal;
varying vec4 position;

void main( void )
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    position = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Fragment Shader :

#version 120

uniform vec4 objectColor; // Color of the object you are drawing
varying vec3 normal;
varying vec4 position;

void main( void )
{
    // Use glDrawBuffers to configure multiple render targets
    gl_FragData[0] = objectColor; // Diffuse
    gl_FragData[1] = vec4(normalize(normal), 0.0); // Normal
    gl_FragData[2] = vec4(position.xyz, 0.0); // Position
}

We have now, for example, drawn 20 objects to our geometry buffer with different colors. If we look at the diffuse buffer, it's a pretty dull image with plain colors (or plain textures without lighting), but we still have the view position, normal and depth of every single fragment. This will be valuable information for us in the next stage when doing the lighting.
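
On the Java side the geometry pass is nothing deferred-specific. A minimal sketch, assuming the FBO from the setup sketch above, a geometryProgram compiled from the two shaders, and a drawScene callback that sets objectColor per object and issues the ordinary draw calls (all of those names are placeholders for your own code):

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

public class GeometryPass {

    // Fills the geometry buffer. The draw buffers were already routed to the
    // three color attachments when the FBO was created.
    public static void render(int gBufferFbo, int geometryProgram,
                              int width, int height, Runnable drawScene) {
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, gBufferFbo);
        GL11.glViewport(0, 0, width, height);
        GL11.glClearColor(0f, 0f, 0f, 0f);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        GL11.glDisable(GL11.GL_BLEND);

        GL20.glUseProgram(geometryProgram);
        // drawScene sets the objectColor uniform per object and draws as usual;
        // nothing deferred-specific happens inside it.
        drawScene.run();

        GL20.glUseProgram(0);
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
    }
}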

Light Accumulation

Now we switch to our light accumulation buffer and it is time to do some light magic. For each single light we are going to draw to our light accumulation buffer with additive blending enabled. How you do this is not that important for the result as long as you cover all the fragments affected by the light. You can do this initially by drawing a full screen quad, but that is very costly. We will only cover point lights, but this is more than sufficient to cover the simple lighting principle (simple point lights are extremely trivial to make). A simple way is to draw a cube or a low poly sphere (light volume) at the light position, scaled by the light radius. This makes rendering tons of small lights way more efficient, but don't worry about performance now; a full screen quad will do the trick just fine.

Now, the simple principle is :

  • Each fragment has a stored x,y,z position we simply get with a texture fetch
  • We pass in the position of the light
  • We pass in the radius of the light
  • We can know if the fragment is affected by the light simply by measuring the distance between the value in the position buffer and the light position
  • From there on it's pretty standard light calculations

Fragment Shader : (This shader works for anything: light volumes, full screen quads, whatever.)

#version 120

uniform sampler2D diffuseBuffer;
uniform sampler2D positionBuffer;
uniform sampler2D normalBuffer;

uniform float lightRadius; // Radius of our point light
uniform vec3 lightPos; // Position of our point light
uniform vec4 lightColor; // Color of our light
uniform vec2 screensize; // screen resolution

void main()
{
    // UV for the current fragment
    vec2 uv = vec2(gl_FragCoord.x / screensize.x, gl_FragCoord.y / screensize.y);
    // Read data from our gbuffer (sent in as textures)
    vec4 diffuse_g = texture2D(diffuseBuffer, uv);
    vec4 position_g = texture2D(positionBuffer, uv);
    vec4 normal_g = texture2D(normalBuffer, uv);

    // Distance between the light center and the current fragment
    float dist = length(lightPos - position_g.xyz);

    // If the fragment is NOT affected by the light we discard it!
    // PS : Don't kill me for using discard. This is for simplicity.
    if(dist > lightRadius) discard;

    // Calculate how strongly this light affects the fragment (standard light stuff!)
    // Direction from the fragment to the light, then a plain Lambert term.
    vec3 lightDir = normalize(lightPos - position_g.xyz);
    float NdotL = max(dot(normalize(normal_g.xyz), lightDir), 0.0);

    // Super simple attenuation placeholder
    float attenuation = 1.0 - (dist / lightRadius);

    gl_FragColor = diffuse_g * lightColor * attenuation * NdotL;
}

We repeat this for each light. The order the lights are rendered in doesn't matter, since the result will always be the same with additive blending. You can also do it more simply by accumulating only the light intensity. In theory you should already have the final lit result in the light accumulation buffer, but you might want to do additional adjustments.
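
A possible per-light loop on the Java side, sticking with full screen quads, could look like the sketch below. The PointLight class, the program and texture handles and the drawFullScreenQuad callback are assumptions for illustration; the important bits are the additive blend mode, the disabled depth writes and the gbuffer textures bound as samplers.

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

public class LightPass {

    // Minimal description of one point light for this sketch. The position must
    // be in the same space as the position buffer (view space in the shaders above).
    public static class PointLight {
        public float x, y, z;        // view-space position
        public float r, g, b, a;     // color
        public float radius;
    }

    // Accumulates all point lights into the light buffer with additive blending.
    // lightProgram is built from the light fragment shader above.
    public static void render(int lightFbo, int lightProgram,
                              int diffuseTex, int positionTex, int normalTex,
                              int width, int height,
                              Iterable<PointLight> lights, Runnable drawFullScreenQuad) {
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, lightFbo);
        GL11.glViewport(0, 0, width, height);
        GL11.glClearColor(0f, 0f, 0f, 0f);
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);      // keep the shared depth buffer intact

        GL11.glDisable(GL11.GL_DEPTH_TEST);          // full screen quads: no depth test needed
        GL11.glDepthMask(false);                     // never write depth in this pass
        GL11.glEnable(GL11.GL_BLEND);
        GL11.glBlendFunc(GL11.GL_ONE, GL11.GL_ONE);  // additive accumulation

        GL20.glUseProgram(lightProgram);

        // Bind the gbuffer textures once; they are the same for every light.
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, diffuseTex);
        GL13.glActiveTexture(GL13.GL_TEXTURE1);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, positionTex);
        GL13.glActiveTexture(GL13.GL_TEXTURE2);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, normalTex);

        GL20.glUniform1i(GL20.glGetUniformLocation(lightProgram, "diffuseBuffer"), 0);
        GL20.glUniform1i(GL20.glGetUniformLocation(lightProgram, "positionBuffer"), 1);
        GL20.glUniform1i(GL20.glGetUniformLocation(lightProgram, "normalBuffer"), 2);
        GL20.glUniform2f(GL20.glGetUniformLocation(lightProgram, "screensize"), width, height);

        for (PointLight light : lights) {
            GL20.glUniform3f(GL20.glGetUniformLocation(lightProgram, "lightPos"), light.x, light.y, light.z);
            GL20.glUniform4f(GL20.glGetUniformLocation(lightProgram, "lightColor"), light.r, light.g, light.b, light.a);
            GL20.glUniform1f(GL20.glGetUniformLocation(lightProgram, "lightRadius"), light.radius);
            drawFullScreenQuad.run();                // or draw a light volume instead
        }

        GL20.glUseProgram(0);
        GL11.glDisable(GL11.GL_BLEND);
        GL11.glDepthMask(true);
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
    }
}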

Combine

You might want to adjust a few things. Ambient? Color correction? Fog? Other post processing stuff. You can combine the light accumulation buffer and the diffuse buffer with some adjustments. We kind of already did that in the light stage, but if you only saved light intensity, you will have to do a simple diffuse * light combine here.

Normally just a full screen quad that renders the final result to the screen.
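
As a sketch (again with hypothetical handles and a drawFullScreenQuad helper), the combine pass is just this. The assumed combineProgram samples "diffuseBuffer" and "lightBuffer" and outputs their product, or simply the light buffer if you already multiplied by diffuse in the light shader:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL13;
import org.lwjgl.opengl.GL20;
import org.lwjgl.opengl.GL30;

public class CombinePass {

    // Draws the final image to the default framebuffer (the screen).
    public static void render(int combineProgram, int diffuseTex, int lightTex,
                              int width, int height, Runnable drawFullScreenQuad) {
        GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, 0);
        GL11.glViewport(0, 0, width, height);
        GL11.glDisable(GL11.GL_DEPTH_TEST);
        GL11.glDisable(GL11.GL_BLEND);

        GL20.glUseProgram(combineProgram);
        GL13.glActiveTexture(GL13.GL_TEXTURE0);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, diffuseTex);
        GL13.glActiveTexture(GL13.GL_TEXTURE1);
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, lightTex);
        GL20.glUniform1i(GL20.glGetUniformLocation(combineProgram, "diffuseBuffer"), 0);
        GL20.glUniform1i(GL20.glGetUniformLocation(combineProgram, "lightBuffer"), 1);

        drawFullScreenQuad.run();
        GL20.glUseProgram(0);
    }
}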

More Stuff

  • As mentioned earlier we want to get rid of the position buffer. Use the depth buffer with your projection to reconstruct the position.
  • You don't need to use light volumes. Some prefer to simply render a quad large enough to cover the area on the screen.
  • The example above does not cover issues like how to define unique materials for each object. There are many resources and variants of gbuffer formats out there. Some prefer to save a material index in the alpha channel (in the diffuse buffer), then look up a row in a texture to get material properties.
  • Directional lights and other light types affecting the entire scene can easily be handled by rendering a full screen quad into the light accumulation buffer
  • Spot lights are also nice to have and also fairly easy to implement
  • We probably want more light properties
  • We might want some way to weight how the diffuse and light buffers are combined to support ambient and emissive
  • There are many ways to store normals in a more compact way. You can for example use spherical coordinates to remove one value. There are tons of articles about deferred lighting and gbuffer formats out there. Looking at the formats people are using can give you some ideas. Just make sure your gbuffer doesn't get too fat.
  • Reconstructing the view position using the linearized depth value and your projection is not that hard. You need to construct a vector using the projection constants. Multiply it with your depth value (between 0 and 1) to get the view position. There are several articles out there. It's just two lines of code.

There's probably a lot to take in in this post, but hopefully it shows the general principle. None of the shaders have been compiled; this was just converted from 3.3 to 1.2 from memory.

There are several approaches to light accumulation. You might want to reduce the number of draw calls by making VBOs with 1000 cubes and cones to batch-draw everything. With more modern GL versions you can also use the geometry shader to calculate a quad that covers the light area for each light. Probably the best way is to use compute shaders, but that requires GL 4.3. The advantage here is that you can iterate over all the light information and do one single write. There are also pseudo-compute methods where you divide the screen into a rough grid and assign a light list to each cell. This can be done with only a fragment shader, but requires you to build the light lists on the CPU and send the data to the shader through UBOs.

The compute shader method is by far the simplest one to make. It removes a lot of the complexity of the older methods for keeping track of and organizing everything. Simply iterate over the lights and do one single write to the framebuffer.

answered Dec 15 '22 by Grimmy