
Plane reprojection in post-process using homography matrix

Let's take a proxy problem: you're looking at some 3D scene. Now I replace the scene with a rendering of it done from your eye position, so you don't see any difference. Then I replace the rendered image with a wall and a projector. What I need is an image that, when projected on the wall, will look exactly as if you were looking at the scene.

It looks like this (setup diagram): the left camera is the observer, the right one is the projector.

My approach is to render the scene from the observer's location, then in a post-process pass I sample the rendered image to add the distortion.

I have some proof-of-concept code that kind of works, up to some offsets I still need to debug, but most of the computation is done in the pixel shader, so it's not the best solution.

After I did my initial version I read about homography matrices, and they seem to be the right tool for my needs. If I understand it correctly, I should be able to compute the homography matrix once and then only multiply my screen-space UVs by it to get the reprojected UVs.
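In homogeneous form that multiplication would be:

$$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = H \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}, \qquad (u', v') = \left( \frac{x}{z}, \frac{y}{z} \right)$$

with the divide by $z$ recovering the 2D UV.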

Unfortunately most of the info about homography I could find relates to the case where you have 2 pictures of some object, pick 4 corresponding point pairs by hand, and compute a matrix from those, but I don't have such points. Instead, I know the exact transforms of both views, their vertical and horizontal FOVs, and the plane, so I think that's all I need.

The perfect solution would be to have some transform that maps my image-space UVs in the [0,1] range to the correct UVs for texture sampling. Has any of you seen a similar solution?

EDIT:

I've made a screenshot from my POC implementation (reprojected with my ugly proof-of-concept code). I'm not sure it's 1:1 correct, but it shows what I need. The rendering is distorted, but this distortion should cancel out when I project the image on the wall. Please take a look at the table top - it still looks like it's rendered from the side and not from the projector. When I project this image on the wall it should look as if it were rendered from your eye position.

asked Oct 15 '25 by kolenda


2 Answers

I've finally found a solution.

In some of the articles about homography I saw an equation to compute the H matrix from the known transformations of two viewports, but I couldn't get it to work; maybe I misunderstood something or had my math wrong. So I decided to try the 4-point approach instead. But how do I get such points automatically?

Given that I'm going to use this in the post-processing stage of my rendering, thus operating mainly in screen space, I decided to map the corners of my destination viewport to my source viewport.

From the projector transform I take the forward direction and rotate it by half of the vertical and horizontal FOV. Combining the positive and negative angles gives me 4 vectors that correspond to the 4 corners of the viewport, computed in world space.
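A simplified C++ sketch of that corner construction (a minimal Vec3 type and Rodrigues' rotation; the names are illustrative, not my actual code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3  operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Rodrigues' formula: rotate v around the unit axis k by angle (radians).
static Vec3 rotate(const Vec3& v, const Vec3& k, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    return v * c + cross(k, v) * s + k * (dot(k, v) * (1.0f - c));
}

// The 4 world-space corner directions of the projector frustum.
// forward/right/up: the projector's orthonormal basis; hFov/vFov: full FOVs in radians.
void projectorCornerRays(const Vec3& forward, const Vec3& right, const Vec3& up,
                         float hFov, float vFov, Vec3 outCorners[4]) {
    const float sx[4] = {-1.f,  1.f, -1.f,  1.f};   // left/right signs
    const float sy[4] = { 1.f,  1.f, -1.f, -1.f};   // top/bottom signs
    for (int i = 0; i < 4; ++i) {
        Vec3 d = rotate(forward, up, sx[i] * 0.5f * hFov);      // yaw by +/- hFov/2
        outCorners[i] = rotate(d, right, sy[i] * 0.5f * vFov);  // pitch by +/- vFov/2
    }
}
```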

Using the projector position as the origin and the 4 computed vectors as directions, I can compute the 4 intersections with my plane. Those are world-space points lying on the plane, representing the corners of what is visible from the projector.
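A sketch of that ray-plane intersection, assuming the plane is given as a unit normal n and offset d (points p on the plane satisfy dot(n, p) = d):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect a ray (origin o, direction dir) with the plane dot(n, p) = d.
// Returns false when the ray is (nearly) parallel to the plane.
bool rayPlane(const Vec3& o, const Vec3& dir, const Vec3& n, float d, Vec3& hit) {
    float denom = dot(n, dir);
    if (std::fabs(denom) < 1e-6f)
        return false;
    float t = (d - dot(n, o)) / denom;  // parameter along the ray
                                        // (negative t = plane behind the projector)
    hit = { o.x + dir.x * t, o.y + dir.y * t, o.z + dir.z * t };
    return true;
}
```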

Having those 4 points, I project them with the observer camera into its screen space.

The points I get here are the observer-screen-space points, and given that I've mapped the entire viewport of the projector, I can use the typical [0,0],[1,0],[0,1],[1,1] rectangle as the projector-screen-space points.
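Projecting those world points with the observer camera is just its view-projection matrix plus an NDC-to-UV remap. A sketch using Eigen (conventions assumed: column vectors, v pointing down; flip as needed for your engine):

```cpp
#include <Eigen/Dense>

// World-space point -> observer screen-space UV in [0,1].
// viewProj is the observer camera's combined view-projection matrix.
Eigen::Vector2f worldToObserverUV(const Eigen::Matrix4f& viewProj,
                                  const Eigen::Vector3f& worldPos) {
    Eigen::Vector4f clip = viewProj * worldPos.homogeneous();
    Eigen::Vector2f ndc  = clip.head<2>() / clip.w();  // perspective divide, [-1,1]
    // Remap NDC to UV; the v flip depends on your screen-space convention.
    return { 0.5f * (ndc.x() + 1.0f), 0.5f * (1.0f - ndc.y()) };
}
```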

This way I've got four corresponding pairs of points to use with my homography.

Homography

This was a bit trickier, as I still don't fully understand the math here, but I've used this article: http://www.cse.psu.edu/~rtc12/CSE486/lecture16.pdf - on page 29 there's a matrix equation using an 8x8 square matrix. The link may disappear, but you can find the same equation in a lot of places; what's important is that I used the 8x8 version, though I've also seen notations using an 8x9 matrix.

Here's the system: for each correspondence $(x_i, y_i) \to (x'_i, y'_i)$, fixing $h_{33} = 1$ gives two linear equations in the remaining 8 entries of $H$:

$$h_{11} x_i + h_{12} y_i + h_{13} - h_{31} x_i x'_i - h_{32} y_i x'_i = x'_i$$
$$h_{21} x_i + h_{22} y_i + h_{23} - h_{31} x_i y'_i - h_{32} y_i y'_i = y'_i$$

Stacking the four pairs yields an 8x8 system in the $A x = b$ form, where we know $A$ and $b$ and want to compute $x = (h_{11}, h_{12}, h_{13}, h_{21}, h_{22}, h_{23}, h_{31}, h_{32})^\top$, with $b = (x'_1, y'_1, \dots, x'_4, y'_4)^\top$.

For the computation I've used the JacobiSVD class from the Eigen library.

After I get the resulting 8 floats I can build the 3 row vectors of the matrix I'm looking for. The first 3 values form the first vector, the next 3 values form the second vector, and then we're left with only 2 values, so we append a 1.0f to complete the last vector.

Let's call this matrix H.
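Putting the solve together with Eigen looks roughly like this (a sketch of what I described, not a verbatim copy of my code; src is the projector-screen-space unit rectangle, dst the observer-screen-space points):

```cpp
#include <Eigen/Dense>

Eigen::Matrix3f computeHomography(const Eigen::Vector2f src[4],
                                  const Eigen::Vector2f dst[4]) {
    Eigen::Matrix<float, 8, 8> A;
    Eigen::Matrix<float, 8, 1> b;
    for (int i = 0; i < 4; ++i) {
        const float x = src[i].x(), y = src[i].y();  // projector screen space
        const float X = dst[i].x(), Y = dst[i].y();  // observer screen space
        A.row(2 * i)     << x, y, 1, 0, 0, 0, -x * X, -y * X;
        A.row(2 * i + 1) << 0, 0, 0, x, y, 1, -x * Y, -y * Y;
        b(2 * i)     = X;
        b(2 * i + 1) = Y;
    }
    // SVD-based least-squares solve; robust even when A is poorly conditioned.
    Eigen::Matrix<float, 8, 1> h =
        A.jacobiSvd(Eigen::ComputeFullU | Eigen::ComputeFullV).solve(b);

    Eigen::Matrix3f H;
    H << h(0), h(1), h(2),
         h(3), h(4), h(5),
         h(6), h(7), 1.0f;   // append 1.0 as the missing ninth entry
    return H;
}
```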

Pixel Shader

With all those computations done per-frame, the pixel shader is really simple - we just need to transform the screen-space UV vector with the H matrix (in homogeneous coordinates).

We take the u,v values and append 1.0 to get [u,v,1.0], then multiply it by the H matrix. Since we passed the matrix as row vectors, we can dot the [u,v,1.0] vector with each of the 3 rows to get the three components. The result looks like [x,y,z], but in homogeneous coordinates it means the 2D vector [x/z, y/z] - this is the UV we were looking for. Now I just sample the texture with this UV and I'm done.
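The per-pixel math, written here as plain C++ for clarity (the HLSL version is a direct transliteration using float3 and dot):

```cpp
struct Vec3 { float x, y, z; };
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// row0/row1/row2 are the three rows of H, passed to the shader as constants.
void reprojectUV(float u, float v,
                 const Vec3& row0, const Vec3& row1, const Vec3& row2,
                 float& outU, float& outV) {
    Vec3 p{u, v, 1.0f};   // homogeneous screen-space coordinate
    float x = dot(row0, p);
    float y = dot(row1, p);
    float z = dot(row2, p);
    outU = x / z;         // homogeneous divide back to 2D...
    outV = y / z;         // ...then sample the texture at (outU, outV)
}
```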

Why is it better than Pavel's approach?

I don't need a separate geometry rendering pass, which is especially slow in deferred rendering engines like Unreal.

I can use a few such textures at once, all on different planes. I can, e.g., pass a few alpha-masked and textured planes and decide which one I hit.

I can reproject a 3D side-by-side rendering to get a reprojected side-by-side image, by treating each half of the screen separately.

answered Oct 19 '25 by kolenda


From our discussion, keystone correction is not implemented in UE, so I would suggest the following:

  1. Create the desired scene with the Observer camera.
  2. Add a Plane (procedural!) and a Main camera somewhere in the scene, with the Main camera looking at the plane.
  3. Make the Main camera the player's main camera.
  4. Create a material that uses a RenderTarget (tutorial) and set the Observer camera as the source for that render target (see the sketch after this list).

Now you have a setup that renders the scene onto the plane, and the player will see that plane.
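A rough UE C++ sketch of step 4 - the render-target wiring. The function and the "ObserverRT" parameter name are illustrative, not from an actual project; the engine types and calls are standard UE API:

```cpp
#include "Components/MeshComponent.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInstanceDynamic.h"

void SetupObserverCapture(USceneCaptureComponent2D* ObserverCapture,
                          UMeshComponent* PlaneMesh,
                          UMaterialInterface* BaseMaterial)
{
    // Render target the observer capture draws into every frame.
    UTextureRenderTarget2D* RT = NewObject<UTextureRenderTarget2D>();
    RT->InitAutoFormat(1920, 1080);
    ObserverCapture->TextureTarget = RT;

    // Dynamic material instance so the render target can be bound at runtime.
    UMaterialInstanceDynamic* MID =
        UMaterialInstanceDynamic::Create(BaseMaterial, PlaneMesh);
    MID->SetTextureParameterValue(TEXT("ObserverRT"), RT);  // hypothetical parameter name
    PlaneMesh->SetMaterial(0, MID);
}
```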

If you need to simulate keystone correction, you can change the Plane's position or rotation. (You can bind those actions to your setup GUI, etc.) And since it is procedural, you can change its size (and shape) vertically or horizontally. (UE docs)

Since you have one plane with constant UVs and you are changing its shape, the result will be a keystone effect on that plane, which (with correct plane settings) will be the inverse of the keystone effect of the hardware projector, thus producing a correct image.

For calibration, all you need is to reset all settings on the plane (rotation, position, scale, etc.) and set another material, preferably one with squares - something you can easily see and that helps you modify the plane correctly. So it is just a matter of adjusting the projection plane.

answered Oct 19 '25 by Pavel Pája Halbich


