
unity3d: Use main camera's depth buffer for rendering another camera view

After my main camera renders, I'd like to use (or copy) its depth buffer as the depth buffer of a second, disabled camera. My goal is to draw particles onto a smaller render target (using the separate camera) while depth-testing against the depth buffer left after the opaque objects are drawn. I can't do this with a single camera, since the whole point is to use a smaller render target for the particles for performance reasons.

Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders - I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.

I didn't get any reply to my earlier question; hence, the repost.

Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.

using UnityEngine;

public class MagicRenderer : MonoBehaviour {
    public Shader   particleShader; // shader that uses the main camera's depth buffer to depth-test the particles
    public Material blendMat;       // material that uses a simple blend shader
    public int      downSampleFactor = 1;

    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    void OnRenderImage (RenderTexture src, RenderTexture dest) {
        // create a temporary, downsampled render target for the particles
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;

        // create and configure the particle camera
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;

        // draw to particleRT's color buffer using the main camera's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.

        // blend the main camera's color buffer with particleRT's color buffer
        // Graphics.Blit (pCam.targetTexture, src, blendMat);

        // copy the resulting buffer to the destination
        Graphics.Blit (pCam.targetTexture, dest);

        // clean up
        RenderTexture.ReleaseTemporary (particleRT);
    }

    static public Camera GetPCam () {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find ("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy (oldpcam);

            pCam = new GameObject ("pCam");
            pCam.AddComponent<Camera> ();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }

        return pCam.camera;
    }
}

I have a few additional questions:

1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called: (i) a Z-prepass that only writes to the depth buffer, and (ii) a color pass that writes to both the color and depth buffers.

2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the color buffer with ZWrite On (to get around the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:

pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;

and rendered them using pCam.Render().
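
Putting this together, the sequence I tried looks roughly like this (a sketch; depthOnlyShader and the replacement tag are placeholder names for the replacement shader described above):

// First pass: re-render the opaque layers with a replacement shader that
// writes (0,0,0,0) to the color buffer with ZWrite On, to fill pCam's depth buffer.
pCam.cullingMask = ~(1 << LayerMask.NameToLayer ("Particles")); // everything except particles
pCam.clearFlags  = CameraClearFlags.SolidColor;
pCam.RenderWithShader (depthOnlyShader, "RenderType");          // depthOnlyShader is a placeholder name

// Second pass: render only the particles with their own shaders,
// without clearing, so they depth-test against the depth written above.
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags  = CameraClearFlags.Nothing;
pCam.Render ();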

I thought this would render the particles using their existing shaders, depth-tested against the existing Z-buffer. Unfortunately, what I see is that the depth-stencil buffer is cleared before the particles are drawn (despite my not clearing anything).

Why does this happen?

Raja asked Mar 27 '14 18:03

1 Answer

It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller, separate render target. I'm writing this for future visitors. A fair amount of background knowledge is still required.

Copying the depth

First, you have to get the scene depth at the resolution of your smaller render texture. This can be done by creating a new render texture with the color format Depth (RenderTextureFormat.Depth). To write the scene depth into this low-resolution depth texture, create a shader that just outputs the depth:

struct fragOut {
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i) {
    fragOut tOut;
    // write the sampled scene depth out as this fragment's depth
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}

_LastCameraDepthTexture is filled automatically by Unity, but there is a downside: it only comes for free if the main camera uses deferred rendering. With forward rendering, Unity seems to render the scene again just to produce the depth texture. Check the frame debugger.
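
If the main camera is not generating a depth texture at all (nothing else requests one), it can be requested explicitly, the same way the question's script does (mMainCamera is an illustrative name):

// ask Unity to generate the scene depth texture for this camera
mMainCamera.depthTextureMode = DepthTextureMode.Depth;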

Then, add a post processing effect to the main camera that executes the shader:

protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    // copy the scene depth into the low-resolution depth texture
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    // pass the image through unchanged
    Graphics.Blit(pFrom, pTo);
}

You can probably do this without the second blit, but it was easier for me for testing.
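
For reference, the setup of the textures and the material used above might look roughly like this (a sketch; the sizes, formats, and the shader name "Hidden/CopySceneDepth" are assumptions, only the field names come from the snippets in this answer):

private RenderTexture mSmallerSceneDepthTexture;
private RenderTexture mParticleRenderTexture;
private Material mRenderToDepthMaterial;

void Start () {
    int w = Screen.width / 2;   // illustrative downsample factor
    int h = Screen.height / 2;

    // depth-only render texture that receives the copied scene depth
    mSmallerSceneDepthTexture = new RenderTexture (w, h, 24, RenderTextureFormat.Depth);

    // color render texture the particle camera draws into; it has no depth buffer
    // of its own because it borrows the copied scene depth via SetTargetBuffers (below)
    mParticleRenderTexture = new RenderTexture (w, h, 0, RenderTextureFormat.ARGB32);

    // material wrapping the depth-copy shader shown above ("Hidden/CopySceneDepth" is a made-up name)
    mRenderToDepthMaterial = new Material (Shader.Find ("Hidden/CopySceneDepth"));
}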

Using the copied depth for rendering

To use the new depth texture for your second camera, call

mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);

Keep targetTexture empty. You must then ensure the second camera clears only the color, not the depth. To do this, disable clearing on the second camera entirely and clear manually like this:

Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);  // clear color only, keep the copied depth

I recommend also rendering the second camera by hand. Disable it and call

mSecondCamera.Render();

after clearing.
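
Putting the pieces of this section together, the order of operations for the second camera looks like this (a sketch; where you call this from, e.g. a rendering callback on the main camera, is up to you):

// bind the color target and the borrowed depth, clear color only, then render manually
mSecondCamera.SetTargetBuffers (mParticleRenderTexture.colorBuffer,
                                mSmallerSceneDepthTexture.depthBuffer);

Graphics.SetRenderTarget (mParticleRenderTexture);
GL.Clear (false, true, Color.clear);  // false: keep the copied depth, true: clear color

mSecondCamera.Render ();              // particles depth-test against the copied scene depth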

Merging

Now you have to merge the main view and the separate layer. Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.

To mix this with the rest of the image, use a post-processing step on the main camera with:

fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);

// beware premultiplied alpha in the inserted layer
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
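
On the C# side, the separate layer is bound to that shader and the merge runs as one more post-processing blit, roughly like this (mMergeMaterial and the property name mirror the snippet above; the rest is an assumption):

// bind the particle layer and run the merge as a post-processing pass on the main camera
mMergeMaterial.SetTexture ("TransparentFX", mParticleRenderTexture);
Graphics.Blit (pFrom, pTo, mMergeMaterial);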

Additive materials work out of the box, but alpha-blended ones do not. You have to create a shader with custom blending to get working alpha-blended materials. The blending is

Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha

The second factor pair changes how the alpha channel is modified by each blend operation performed.

Results

[Result screenshots: an additive-blended effect in front of an alpha-blended one, and an alpha-blended effect in front of an additive-blended one, each shown with the fx layer's RGB and alpha channels.]

I have not yet tested whether the performance actually increases. If anyone has a simpler solution, please let me know.

CGMan answered Sep 27 '22 23:09