
Perspective correction of texture coordinates in 3D

Tags: math, opengl, 3d

I'm writing a software renderer which is currently working well, but I'm trying to get perspective-correct texture coordinates and the result doesn't look right. I use the same matrix math as OpenGL in my renderer. To rasterise a triangle I do the following:

  1. Transform the vertices into clip coordinates using the modelview and projection matrices.

  2. For each pixel in each triangle, calculate barycentric coordinates to interpolate properties (color, texture coordinates, normals, etc.).

  3. To correct for perspective I use perspective-correct interpolation, where w is the depth (clip-space w) coordinate of a vertex, c is its texture coordinate, and b is its barycentric weight; a sketch in code follows the formulas:

    1/w = b0*(1/w0) + b1*(1/w1) + b2*(1/w2)
    c/w = b0*(c0/w0) + b1*(c1/w1) + b2*(c2/w2)
    c   = (c/w) / (1/w)
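
A minimal sketch in C++ of steps 2 and 3, under these assumptions: the three vertices have already been projected to the screen, each one keeps the clip-space w it had before the perspective divide, the triangle is not degenerate, and the weights come from the screen-space positions. The names are illustrative, not from the question's code.

    struct Vertex {
        float sx, sy;   // screen-space position
        float w;        // clip-space w (kept from before the divide)
        float u, v;     // texture coordinate
    };

    // Screen-space barycentric weights via signed areas (edge functions).
    void barycentric(const Vertex& a, const Vertex& b, const Vertex& c,
                     float px, float py, float& b0, float& b1, float& b2) {
        float area = (b.sx - a.sx) * (c.sy - a.sy) - (b.sy - a.sy) * (c.sx - a.sx);
        b0 = ((b.sx - px) * (c.sy - py) - (b.sy - py) * (c.sx - px)) / area;
        b1 = ((c.sx - px) * (a.sy - py) - (c.sy - py) * (a.sx - px)) / area;
        b2 = 1.0f - b0 - b1;
    }

    // Perspective-correct interpolation of (u, v), matching the formulas above.
    void interpolateUV(const Vertex& a, const Vertex& b, const Vertex& c,
                       float b0, float b1, float b2, float& u, float& v) {
        float invW   = b0 / a.w + b1 / b.w + b2 / c.w;                    // 1/w
        float uOverW = b0 * a.u / a.w + b1 * b.u / b.w + b2 * c.u / c.w;  // u/w
        float vOverW = b0 * a.v / a.w + b1 * b.v / b.w + b2 * c.v / c.w;  // v/w
        u = uOverW / invW;
        v = vOverW / invW;
    }

Note that the w used in the divide must be the clip-space w (for a standard perspective projection this is proportional to eye-space depth), not the post-divide z that ends up in the depth buffer.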

This should correct for perspective, and it helps a little, but there is still an obvious perspective problem. Am I missing something here, perhaps some rounding issues (I'm using floats for all math)?

In the image below, the error in the texture coordinates is evident along the diagonal; this is the result after the division by the depth coordinates.

[image showing incorrect perspective correction]

Also, this is usually done for texture coordinates... is it necessary for other properties (e.g. normals) as well?

asked Mar 24 '11 by James Edge



2 Answers

I cracked the code on this issue recently. You can use a homography if you plan on modifying the texture in memory before assigning it to the surface, but that's computationally expensive and adds an additional dependency to your program. There's a nice hack that'll fix the problem for you.

OpenGL automatically applies perspective correction to the texture you are rendering. All you need to do is multiply your texture coordinates (UV, in the 0.0f-1.0f range) by the Z component (the world-space depth of the XYZ position vector) of each corner of the plane, and it'll "throw off" OpenGL's perspective correction. A sketch of this is below.
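
A hedged sketch of that trick using the legacy fixed-function pipeline: each corner's UV is scaled by a factor q (taken here to be that corner's depth, per this answer) and q is passed as the fourth texture coordinate, so the hardware's per-pixel divide reproduces the projective mapping. The Corner struct and drawTrapezoid function are illustrative, not from this answer.

    #include <GL/gl.h>

    struct Corner { float x, y, z, u, v, depth; };

    void drawTrapezoid(const Corner c[4]) {
        glBegin(GL_QUADS);
        for (int i = 0; i < 4; ++i) {
            float q = c[i].depth;  // assumed scale factor: the corner's depth
            glTexCoord4f(c[i].u * q, c[i].v * q, 0.0f, q);  // (s*q, t*q, r, q)
            glVertex3f(c[i].x, c[i].y, c[i].z);
        }
        glEnd();
    }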

I asked and solved this problem recently. Give this link a shot:

texture mapping a trapezoid with a square texture in OpenGL

The paper I read that fixed this issue is "Navigating Static Environments Using Image-Space Simplification and Morphing" (page 9, appendix A).

Hope this helps!

ct

answered Oct 11 '22 by ct_


You need to tell OpenGL that you want per-pixel perspective correction, with:

    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);

What you are observing is the typical distortion of affine (screen-space linear) texture mapping. On hardware that is not capable of per-pixel perspective correction (for example, the PS1), the standard solution is to subdivide into smaller polygons to make the defect less noticeable; see the sketch below.
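
For illustration, a minimal sketch of that subdivision approach: each triangle is recursively split into four by its edge midpoints (on the untransformed vertices, before projection), so that affine interpolation inside each small piece stays close to the true perspective mapping. Vtx and emitTriangle are hypothetical stand-ins for a renderer's own vertex type and rasterizer.

    struct Vtx { float x, y, z, u, v; };

    // Provided by your rasterizer (assumed): draws one triangle affinely.
    void emitTriangle(const Vtx&, const Vtx&, const Vtx&);

    Vtx midpoint(const Vtx& a, const Vtx& b) {
        return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f,
                 (a.u + b.u) * 0.5f, (a.v + b.v) * 0.5f };
    }

    void subdivide(const Vtx& a, const Vtx& b, const Vtx& c, int depth) {
        if (depth == 0) { emitTriangle(a, b, c); return; }  // small enough: draw
        Vtx ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
        subdivide(a, ab, ca, depth - 1);
        subdivide(ab, b, bc, depth - 1);
        subdivide(ca, bc, c, depth - 1);
        subdivide(ab, bc, ca, depth - 1);
    }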

answered Oct 11 '22 by 6502