 

How to correctly linearize depth in OpenGL ES in iOS?

I'm trying to render a forest scene for an iOS app with OpenGL ES. To make it look a bit nicer, I'd like to add a depth effect to the scene. For that I need a linearized depth value from the OpenGL depth buffer. Currently I am using a computation in the fragment shader (which I found here).

Therefore my terrain fragment shader looks like this:

#version 300 es

precision mediump float;

// near/far plane distances passed in from the app (declarations missing in the original snippet)
uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

// maps the non-linear depth-buffer value back to an approximately linear 0..1 range
float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;            // non-linear window-space depth
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}

However, this results in the following output:

[resulting output image]

As you can see, the "further" away you get, the more "stripy" the resulting depth value becomes (especially behind the ship). If the terrain tile is close to the camera, the output looks somewhat okay.

I even tried another computation:

float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}

which resulted in values that were way too high, so I scaled them down by dividing:

float linearized = (linearizeDepth(depth) - 2.0) / 40.0;

[second resulting output image]

Nevertheless, it gave a similar result.
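
(For context: the second formula returns the eye-space distance, i.e. a value between nearz and farz, so it is expected to be "way too high" for direct display. A minimal sketch of how it would normally be brought into the 0..1 range, assuming the same nearz/farz uniforms as above, instead of the ad-hoc subtraction and division:

float linearized = (linearizeDepth(depth) - nearz) / (farz - nearz);

This does not remove the banding, though, which is a precision problem rather than a scaling one, as the answer below explains.)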

So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?

asked Feb 28 '17 by Aseider


People also ask

How do you linearize depth?

To linearize the sampled depth-buffer value, we can multiply the normalized device coordinates (NDC) vector by the inverse projection matrix and divide the result by the w coordinate (as the result is a homogeneous vector).
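
A minimal GLSL ES 3.0 sketch of that idea (the depthTex sampler, invProjection and farz uniforms and the uv varying are hypothetical names, not from the answer below):

#version 300 es
precision highp float;

uniform sampler2D depthTex;   // depth texture sampled as 0..1
uniform mat4 invProjection;   // inverse of the projection matrix
uniform float farz;           // far plane distance, used only for display scaling

in vec2 uv;                   // full-screen quad texture coordinates
out vec4 out_color;

void main(void) {
    float d = texture(depthTex, uv).r;                    // window-space depth, 0..1
    vec4 ndc = vec4(uv * 2.0 - 1.0, d * 2.0 - 1.0, 1.0);  // back to normalized device coordinates
    vec4 eye = invProjection * ndc;                        // homogeneous eye-space position
    float linearDepth = -(eye.z / eye.w);                  // divide by w; camera looks down -z
    out_color = vec4(vec3(linearDepth / farz), 1.0);       // scale to 0..1 for display
}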

How do I enable depth testing in OpenGL?

To enable depth testing, call glEnable with GL_DEPTH_TEST. When rendering to a framebuffer that has no depth buffer, depth testing always behaves as though the test is disabled. When depth testing is disabled, writes to the depth buffer are also disabled.

What is a depth buffer in OpenGL?

A depth buffer, also known as a z-buffer, is a type of data buffer used in computer graphics to represent depth information of objects in 3D space from a particular perspective. Depth buffers are an aid to rendering a scene to ensure that the correct polygons properly occlude other polygons.


1 Answer

The problem is that you store non-linear depth values which are then truncated, so when you read the depth values back later you get a choppy result: you lose accuracy the farther you are from the znear plane. No matter what equation you evaluate afterwards, you will not obtain better results unless you:

  1. Lower the accuracy loss

    You can change the znear, zfar values so they are closer together. Enlarge znear as much as you can, so the more accurate area covers more of your scene.

    Another option is to use more bits per depth value (16 bits is too low). I am not sure if you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards.

  2. Use a linear depth buffer

    That is, store linear values into the depth buffer. There are two ways. One is to compute the depth so that after all the underlying operations you end up with a linear value (see [Edit1] below).

    Another option is to use a separate texture/FBO and store the linear depths directly into it. The problem is that you cannot use its contents in the same rendering pass; a rough sketch of that variant follows this list.
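
A minimal sketch of that second way in GLSL ES 3.0 (the eyeDist varying and out_lineardepth attachment are my own names; rendering into a float color attachment typically also requires EXT_color_buffer_float):

#version 300 es
precision highp float;

uniform float znear, zfar;
in float eyeDist;                               // camera-space distance, passed from the vertex shader

layout(location = 0) out vec4 out_color;        // normal scene color
layout(location = 1) out float out_lineardepth; // extra attachment holding linear depth

void main(void) {
    out_color = vec4(1.0);                                 // whatever the material computes
    out_lineardepth = (eyeDist - znear) / (zfar - znear);  // linear 0..1 depth
}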

[Edit1] Linear Depth buffer

To linearize the depth buffer itself (not just the values read from it), try this:

Vertex:

varying float depth;        // z handed to the fragment shader
void main()
    {
    vec4 p=ftransform();    // standard fixed-function transform to clip space
    depth=p.z;              // keep z before the perspective divide
    gl_Position=p;
    gl_FrontColor = gl_Color;
    }

Fragment:

uniform float znear,zfar;
varying float depth;    // z from the vertex shader instead of gl_FragCoord.z, which is already truncated/non-linear
void main(void)
    {
    float z=(depth-znear)/(zfar-znear);   // remap to 0..1 linearly
    gl_FragDepth=z;                       // write the linear value into the depth buffer
    gl_FragColor=gl_Color;
    }

Non-linear depth buffer linearized on the CPU side (as you do now): [CPU image]

Linear depth buffer computed on the GPU side (as you should): [GPU image]

The scene parameters are:

// 24 bits per Depth value
const double zang =   60.0;
const double znear=    0.01;
const double zfar =20000.0;

and a simple rotated plate covering the whole depth field of view. Both images are taken with glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed);, transformed into a 2D RGB texture on the CPU side and then rendered as a single QUAD covering the whole screen with unit matrices ...

Now, to obtain the original depth value back from the linear depth buffer, you just do this:

z = znear + (zfar-znear)*depth_value;

I used the ancient fixed-function stuff just to keep this simple, so port it to your profile ...

Beware that I do not code in OpenGL ES nor iOS, so I hope I did not miss something related to that (I am used to Win and PC).
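
For reference, a rough, untested sketch of how such a port to GLSL ES 3.0 might look (the attribute/uniform names position, modelview, mvp are my own assumptions, and the camera-space distance is used here instead of the clip-space z of the shaders above):

// vertex shader (GLSL ES 3.0)
#version 300 es

uniform mat4 mvp;        // model-view-projection matrix
uniform mat4 modelview;  // model-view matrix

in vec4 position;
out float eyeDist;       // camera-space distance handed to the fragment shader

void main(void) {
    vec4 eye = modelview * position;
    eyeDist = -eye.z;             // camera looks down -z, so negate to get a positive distance
    gl_Position = mvp * position;
}

// fragment shader (GLSL ES 3.0)
#version 300 es
precision highp float;

uniform float znear, zfar;
in float eyeDist;

layout(location = 0) out vec4 out_color;

void main(void) {
    float z = clamp((eyeDist - znear) / (zfar - znear), 0.0, 1.0); // linear 0..1 depth
    gl_FragDepth = z;                 // store the linear value in the depth buffer
    out_color = vec4(z, z, z, 1.0);   // visualize it directly
}

Note that writing gl_FragDepth disables early depth-test optimizations, which is the usual price for a fully linear depth buffer.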

To show the difference, I added another rotated plate to the same scene (so they intersect) and used colored output (no depth read-back anymore):

[intersecting plates image]

As you can see, the linear depth buffer is much, much better (for scenes covering a large part of the depth FOV).

answered Oct 14 '22 by Spektre