Depth of Field: combining a point shader with a blur shader (Processing 3)

I would like to display thousands of points on a 3D canvas (in Processing) with a Depth of Field effect. More specifically, I would like to use a z-buffer (depth buffering) to adjust the level of blur of a point based on its distance from the camera.

So far, I have come up with the following point shader:

pointfrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;
uniform float maxDepth;

void main() {

  float depth = gl_FragCoord.z / gl_FragCoord.w;
  gl_FragColor = vec4(vec3(vertColor - depth/maxDepth), 1) ;

}

pointvert.glsl

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;


varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  vec4 pos = modelview * position;
  vec4 clip = projection * pos;

  gl_Position = clip + projection * vec4(offset, 0, 0);

  vertColor = color;
}

I also have a blur shader (originally from the PostFX library):

blurfrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif


#define PROCESSING_TEXTURE_SHADER

uniform sampler2D texture;

// The inverse of the texture dimensions along X and Y
uniform vec2 texOffset;

varying vec4 vertColor;
varying vec4 vertTexCoord;

uniform int blurSize;       
uniform int horizontalPass; // 0 or 1 to indicate vertical or horizontal pass
uniform float sigma;        // The sigma value for the gaussian function: higher value means more blur
                            // A good value for 9x9 is around 3 to 5
                            // A good value for 7x7 is around 2.5 to 4
                            // A good value for 5x5 is around 2 to 3.5
                            // ... play around with this based on what you need :)

const float pi = 3.14159265;

void main() {  
  float numBlurPixelsPerSide = float(blurSize / 2); 

  vec2 blurMultiplyVec = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);

  // Incremental Gaussian Coefficent Calculation (See GPU Gems 3 pp. 877 - 889)
  vec3 incrementalGaussian;
  incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * sigma);
  incrementalGaussian.y = exp(-0.5 / (sigma * sigma));
  incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;

  vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
  float coefficientSum = 0.0;

  // Take the central sample first...
  avgValue += texture2D(texture, vertTexCoord.st) * incrementalGaussian.x;
  coefficientSum += incrementalGaussian.x;
  incrementalGaussian.xy *= incrementalGaussian.yz;

  // Go through the remaining 8 vertical samples (4 on each side of the center)
  for (float i = 1.0; i <= numBlurPixelsPerSide; i++) { 
    avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * 
                          blurMultiplyVec) * incrementalGaussian.x;         
    coefficientSum += 2.0 * incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;
  }

  gl_FragColor = (avgValue / coefficientSum);
}

Question:

  • How can I combine the blur fragment shader with the point fragment shader?

Ideally, I'd like to have a single fragment shader that computes the level of blur based on the z-coordinate of a point. Is that even possible?

Any help would be greatly appreciated.


An example sketch displaying points using the pointfrag.glsl and pointvert.glsl shaders above:

sketch.pyde (Python mode + PeasyCam library needed)

add_library('peasycam')
liste = []

def setup():
    global pointShader, cam
    size(900, 900, P3D)
    frameRate(1000)
    smooth(8)

    cam = PeasyCam(this, 500)
    cam.setMaximumDistance(width)
    perspective(60 * DEG_TO_RAD, width/float(height), 2, 6000)

    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl")
    pointShader.set("maxDepth", cam.getDistance()*3)

    for e in range(3000): liste.append(PVector(random(width), random(width), random(width)))

    shader(pointShader, POINTS)
    strokeWeight(2)
    stroke(255)

def draw():

    background(0)
    translate(-width/2, -width/2, -width/2)    
    for e in liste:
        point(e.x, e.y, e.z)

    cam.rotateY(.0002)
    cam.rotateX(.0001)

(screenshot of the example sketch)

asked Jun 04 '18 by solub

2 Answers

The major issue with your task is that a Gaussian blur shader typically operates as a post process: it is applied to the entire viewport after all the geometry has been drawn. The Gaussian blur shader takes a fragment of a framebuffer (texture) and its neighbours, mixes their colors with a Gaussian weighting function, and stores the new color in a final framebuffer. For this, the drawing of the entire scene (all points) has to be finished first (the second part of this answer shows that approach).


But you can do something else: write a shader that draws each point completely opaque at its center and completely transparent at its outer border.

In the vertex shader you have to pass the view-space vertex coordinate and the view-space center of the point to the fragment shader:

pointvert.glsl

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;

varying vec3 vCenter;
varying vec3 vPos;
varying vec4 vColor;


void main() {

    vec4 center = modelview * position;
    vec4 pos    = center + vec4(offset, 0, 0); 

    gl_Position = projection * pos;

    vCenter = center.xyz;
    vPos    = pos.xyz;
    vColor  = color;
}

In the fragment shader, you have to calculate the distance from the fragment to the center of the point. To do this you have to know the size of the point. The distance can be used to calculate the opacity, and the opacity becomes the alpha channel of the point.

Add a uniform variable strokeWeight and set it in the program. Note that because the points are transparent at their borders, they look smaller; I recommend increasing the size of the points:

pointShader.set("strokeWeight", 6.0)

.....

strokeWeight(6)

pointfrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec3 vCenter;
varying vec3 vPos;
varying vec4 vColor;

uniform float strokeWeight;
uniform float maxDepth;
uniform float focus;

void main() {

    float depth = clamp(abs(vCenter.z)/maxDepth, 0.0, 1.0);
    float blur  = abs(focus-depth);

    float dist_to_center = length(vPos-vCenter)*2.0/strokeWeight;
    float threshold      = max(0.0, blur);
    float opacity        = 1.0 - smoothstep(threshold/2.0, 1.0-threshold/2.0, dist_to_center); 

    gl_FragColor = vec4(vColor.rgb, opacity);
}

You are drawing partly transparent objects. To achieve a proper blending effect, you should sort the points by ascending z coordinate:

liste = []
listZ = []

.....

for e in range(3000): listZ.append(random(width))
listZ.sort()
for z in listZ: liste.append(PVector(random(width), random(width), z))

The full example code may look like this:

add_library('peasycam')
liste = []
listZ = []

def setup():
    global pointShader, cam
    size(900, 900, P3D)
    frameRate(1000)
    smooth(8)

    cam = PeasyCam(this, 500)
    cam.setMaximumDistance(width)
    perspective(60 * DEG_TO_RAD, width/float(height), 2, 6000)

    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl")
    pointShader.set("maxDepth", 900.0)
    pointShader.set("strokeWeight", 6.0)

    for e in range(3000): listZ.append(random(width))
    listZ.sort()
    for z in listZ: liste.append(PVector(random(width), random(width), z))

    shader(pointShader, POINTS)
    strokeWeight(6)
    stroke(255)

def draw():

    background(0)
    blendMode(BLEND)
    translate(-width/2, -width/2, -width/2) 
    pointShader.set("focus", map(mouseX, 0, width, 0.2, 1.0))   
    for e in liste:
        point(e.x, e.y, e.z)

    cam.rotateY(.0002)
    cam.rotateX(.0001)

See the preview:

preview 1


Of course, it is possible to use the Gaussian blur shader too.

The Gaussian blur shader you present in your question is a two-pass post-process blur shader: it has to be applied in two post-process passes over the entire viewport, one pass blurring along the horizontal axis and the other along the vertical axis.
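
The two passes are sufficient because the 2D Gaussian kernel is separable: blurring along one axis with the 1D kernel g and then along the other gives the same result as the full 2D kernel, at 2N texture samples per pixel instead of N²:

G(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}} = g(x)\, g(y), \qquad g(t) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{t^2}{2\sigma^2}}

This 1D kernel g is exactly what the shader evaluates step by step through incrementalGaussian (see the GPU Gems 3 reference in the code comment).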

To do this, you have to perform the following steps:

  1. Render the scene to a buffer (image)
  2. Apply the vertical Gaussian blur pass to the image and render the result to a new image buffer
  3. Apply the horizontal Gaussian blur pass to the result of the vertical Gaussian blur pass

A code listing that uses exactly the shaders from your question may look like this:

add_library('peasycam')
liste = []

def setup():
    global pointShader, blurShader, cam, bufScene, bufBlurV, bufBlurH
    size(900, 900, P3D)
    frameRate(1000)

    cam = PeasyCam(this, 900)
    cam.setMaximumDistance(width)
    perspective(60 * DEG_TO_RAD, width/float(height), 2, 6000)

    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl")
    pointShader.set("maxDepth", cam.getDistance()*3)

    blurShader = loadShader("blurfrag.glsl")
    blurShader.set("texOffset", [1.0/width, 1.0/height])
    blurShader.set("blurSize", 40)
    blurShader.set("sigma", 5.0)

    bufScene, bufBlurV, bufBlurH = [createGraphics(width, height, P3D) for e in range(3)]
    bufScene.smooth(8)
    bufBlurV.shader(blurShader)
    bufBlurH.shader(blurShader)

    for e in range(5000): liste.append(PVector(random(width), random(width), random(width)))

def drawScene(pg):
    pg.beginDraw()
    cam.getState().apply(pg)  # apply the PeasyCam view to the off-screen buffer
    pg.background(0)

    pg.shader(pointShader, POINTS)
    pg.strokeWeight(4)
    pg.stroke(255)

    pg.pushMatrix()
    pg.translate(-width/2, -width/2, 0.0)
    for e in liste:
        pg.point(e.x, e.y, e.z)
    pg.popMatrix()

    pg.endDraw()

def draw():
    drawScene(bufScene) 

    bufBlurV.beginDraw()
    blurShader.set("horizontalPass", 0)
    bufBlurV.image(bufScene, 0, 0)
    bufBlurV.endDraw()

    bufBlurH.beginDraw()
    blurShader.set("horizontalPass", 1)
    bufBlurH.image(bufBlurV, 0, 0)
    bufBlurH.endDraw()

    cam.beginHUD()
    image(bufBlurH, 0, 0)
    cam.endHUD()

    cam.rotateY(.0002)
    cam.rotateX(.0001)

See the preview:

preview 2


For an approach that combines the two solutions, see also the answer to your previous question: Depth of Field shader for points/strokes in Processing

Create a depth shader:

depth_vert.glsl

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec2 offset;

varying vec3 vCenter;

void main() {
    vec4 center = modelview * position;
    gl_Position = projection * (center + vec4(offset, 0, 0));
    vCenter = center.xyz;
}

depth_frag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec3 vCenter;

uniform float maxDepth;

void main() {
    float depth = clamp(abs(vCenter.z)/maxDepth, 0.0, 1.0);
    gl_FragColor = vec4(vec3(depth), 1.0);
}

Furthermore, a point shader is needed for drawing the points:

point_vert.glsl

uniform mat4 projection;
uniform mat4 modelview;

attribute vec4 position;
attribute vec4 color;
attribute vec2 offset;

varying vec4 vColor;

void main() {
    vec4 pos = modelview * position;
    gl_Position = projection * (pos + vec4(offset, 0, 0));
    vColor = color;
}

point_frag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vColor;

void main() {
    gl_FragColor = vec4(vColor.rgb, 1.0);
}

The two-pass depth-of-field Gaussian blur shader looks like this:

blurfrag.glsl

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

#define PROCESSING_TEXTURE_SHADER

uniform sampler2D texture;  // the scene image bound by image()
uniform sampler2D tDepth;   // the depth map rendered with the depth shader

// The inverse of the texture dimensions along X and Y
uniform vec2 texOffset;

varying vec4 vertTexCoord;

uniform int blurSize;
uniform int horizontalPass; // 0 or 1 to indicate vertical or horizontal pass
uniform float sigma;
uniform float focus;        // focus depth in the 0..1 range of the depth map

const float pi = 3.14159265;

void main()
{  
    vec2 vUv = vertTexCoord.st;
    vec4 depth = texture2D( tDepth, vUv );
    float dofblur = abs( depth.x - focus );

    float numBlurPixelsPerSide = float(blurSize / 2) * dofblur; 
    float dofSigma = sigma; 

    vec2 blurMultiplyVec = 0 < horizontalPass ? vec2(1.0, 0.0) : vec2(0.0, 1.0);

    // Incremental Gaussian Coefficent Calculation (See GPU Gems 3 pp. 877 - 889)
    vec3 incrementalGaussian;
    incrementalGaussian.x = 1.0 / (sqrt(2.0 * pi) * dofSigma);
    incrementalGaussian.y = exp(-0.5 / (dofSigma * dofSigma));
    incrementalGaussian.z = incrementalGaussian.y * incrementalGaussian.y;

    vec4 avgValue = vec4(0.0, 0.0, 0.0, 0.0);
    float coefficientSum = 0.0;

    // Take the central sample first...
    avgValue += texture2D(texture, vertTexCoord.st) * incrementalGaussian.x;
    coefficientSum += incrementalGaussian.x;
    incrementalGaussian.xy *= incrementalGaussian.yz;

    // Go through the remaining samples (numBlurPixelsPerSide on each side of the center)
    for (float i = 1.0; i <= numBlurPixelsPerSide; i++) { 
        avgValue += texture2D(texture, vertTexCoord.st - i * texOffset * 
                            blurMultiplyVec) * incrementalGaussian.x;         
        avgValue += texture2D(texture, vertTexCoord.st + i * texOffset * 
                            blurMultiplyVec) * incrementalGaussian.x;         
        coefficientSum += 2.0 * incrementalGaussian.x;
        incrementalGaussian.xy *= incrementalGaussian.yz;
    }

    gl_FragColor = (avgValue / coefficientSum);
}

In the program you have to perform 4 stages:

  1. Render the scene to a buffer (image)
  2. Render the "depth" to another image buffer
  3. Apply the vertical Gaussian blur pass to the scene image and render the result to a new image buffer
  4. Apply the horizontal Gaussian blur pass to the result of the vertical Gaussian blur pass

add_library('peasycam')
liste = []

def setup():
    global depthShader, point_shader, blurShader, cam, bufDepth, bufScene, bufBlurV, bufBlurH
    size(900, 900, P3D)
    frameRate(1000)

    cam = PeasyCam(this, 900)
    cam.setMaximumDistance(width)
    perspective(60 * DEG_TO_RAD, width/float(height), 2, 6000)

    point_shader = loadShader("point_frag.glsl","point_vert.glsl")
    depthShader = loadShader("depth_frag.glsl","depth_vert.glsl")
    blurShader = loadShader("blurfrag.glsl")

    bufDepth, bufScene, bufBlurV, bufBlurH = [createGraphics(width, height, P3D) for e in range(4)]
    bufDepth.smooth(8)
    bufScene.smooth(8)
    bufBlurV.shader(blurShader)
    bufBlurH.shader(blurShader)

    depthShader.set("maxDepth", 900.0)

    blurShader.set("tDepth", bufScene)
    blurShader.set("texOffset", [1.0/width, 1.0/height])
    blurShader.set("blurSize", 40)
    blurShader.set("sigma", 5.0)

    for e in range(3000): liste.append(PVector(random(width), random(width), random(width)))

def drawScene(pg, sh):
    pg.beginDraw()
    cam.getState().apply(pg)  # apply the PeasyCam view to the off-screen buffer
    pg.background(0)

    pg.shader(sh, POINTS)
    pg.strokeWeight(6)
    pg.stroke(255)

    pg.pushMatrix()
    pg.translate(-width/2, -width/2, 0.0)
    for e in liste:
        pg.point(e.x, e.y, e.z)
    pg.popMatrix()

    pg.endDraw()

def draw():
    drawScene(bufScene, point_shader)  # the colored points
    drawScene(bufDepth, depthShader)   # the depth map sampled by the blur shader

    blurShader.set("focus", map(mouseX, 0, width, .1, 1))

    bufBlurV.beginDraw()
    blurShader.set("horizontalPass", 0)
    bufBlurV.image(bufScene, 0, 0)
    bufBlurV.endDraw()

    bufBlurH.beginDraw()
    blurShader.set("horizontalPass", 1)
    bufBlurH.image(bufBlurV, 0, 0)
    bufBlurH.endDraw()

    cam.beginHUD()
    image(bufBlurH, 0, 0)
    cam.endHUD()

See the preview:

preview 3

answered Nov 09 '22 by Rabbid76


I've been meaning to try this simple trick, which seems to cover your use case, for a while :) I implemented it in Unity, but the logic is pretty simple and should be easy to adapt.

pretty pretty

If you don't have primitives other than points, you can easily preconvolve the blur into a texture. I did mine in Photoshop, but I'm sure there are better, more accurate ways. (The photo noise is not mandatory.)

texture of blur steps

From there, just compute an offset into the texture depending on the amount of blur (more exactly, pick steps n and n-1 and lerp between them using the remainder). In Unity I mapped it to the view-space Z position with a simple mirrored linear decay (I'm not sure what the actual logic in optics is here). (_ProjectionParams.w is the inverse of the far plane.)

half focus = -UnityObjectToViewPos( v.vertex ).z;
focus = abs(focus - _FocalPlane);
focus *= _Dof * _ProjectionParams.w;
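
In GLSL terms, the lookup itself might look roughly like the sketch below. This is only an illustration under the assumption that the pre-blurred steps are packed side by side in a single strip texture; blurStrip, stepCount, vUv and vBlur are made-up names, not part of the Unity project or of Processing.

#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D blurStrip;  // pre-convolved blur steps, packed horizontally
uniform float stepCount;      // number of blur steps in the strip

varying vec2 vUv;             // 0..1 coordinates within one sprite cell
varying float vBlur;          // blur amount computed in the vertex shader, 0..1

void main() {
    float s     = vBlur * (stepCount - 1.0);          // continuous step index
    float step0 = floor(s);                           // step n-1
    float step1 = min(step0 + 1.0, stepCount - 1.0);  // step n
    float f     = s - step0;                          // remainder used for the lerp
    vec4 c0 = texture2D(blurStrip, vec2((vUv.x + step0) / stepCount, vUv.y));
    vec4 c1 = texture2D(blurStrip, vec2((vUv.x + step1) / stepCount, vUv.y));
    gl_FragColor = mix(c0, c1, f);
}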

Edit: I should mention that I saw the idea in a demo from this company; they might implement it differently, though, I don't know.

answered Nov 09 '22 by Brice V.