I am rendering 3D objects on a 2D canvas by doing all necessary calculations in software. I am not using graphics acceleration.
Initially all the objects were cubes of the same size, so I could sort them based on their distance in Z from the camera, and that ordered them correctly. But now I am trying to draw cubes of varying dimensions, which makes my simple z-ordering algorithm fail under perspective projection.
I looked into computer graphics books to find the techniques used; they eventually recommend a pixel-based comparison of two polygons to determine which one is in front of the other. That is probably what graphics cards do. But doing this in software seems overly difficult, and I suspect it would be too slow for practical use even if I could do it.
Is there a simple trick to do this in software? Any examples from the early days of 3D graphics, when graphics cards were not available?
Although this is a generic 3D graphics question, if it helps, I am doing this on top of the HTML5 Canvas 2D API.
As @ybungalobill has already mentioned, the z-buffer is the easiest algorithm to implement. When you are filling the triangles/polygons that make up your cubes, interpolate the Z coordinate between the vertices and store it per pixel. If you later fill another polygon that covers the same X, Y coordinate, check whether its Z is smaller than the Z already stored in the buffer, and only draw (and store the new Z) if it is. Do not forget to clear the Z buffer to infinity before repainting. Pseudocode:
foreach (scanline in polygon) {
    int length = scanline.right.x - scanline.left.x;
    foreach (pixel in scanline) {
        float t = (float)(pixel.x - scanline.left.x) / length;
        float z = (1 - t) * scanline.left.z + t * scanline.right.z; // Interpolate the Z coordinate across the scanline
        if (z < zbuffer[scanline.y][pixel.x]) {
            zbuffer[scanline.y][pixel.x] = z;              // Remember the new, closer depth
            drawPixel(pixel.x, scanline.y, polygon.color); // The pixel is closer, paint it
        }
    }
}
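Since you are already on the Canvas 2D API, here is a minimal sketch of the same idea writing into an ImageData buffer. It assumes you already rasterise your polygons into horizontal spans with Z interpolated at both ends; the Span shape and the flat colour handling below are just illustrative.

// A minimal z-buffer sketch on top of the Canvas 2D API. It assumes you already
// produce horizontal spans with interpolated left/right Z when rasterising.
interface Span { y: number; xLeft: number; xRight: number; zLeft: number; zRight: number; }

function render(ctx: CanvasRenderingContext2D, width: number, height: number,
                polygons: { spans: Span[]; color: [number, number, number] }[]): void {
  const image = ctx.createImageData(width, height);
  const zbuffer = new Float32Array(width * height).fill(Infinity); // clear to "far"

  for (const polygon of polygons) {
    for (const span of polygon.spans) {
      const length = span.xRight - span.xLeft;
      for (let x = span.xLeft; x <= span.xRight; x++) {
        const t = length === 0 ? 0 : (x - span.xLeft) / length;
        const z = (1 - t) * span.zLeft + t * span.zRight; // interpolate Z across the span
        const i = span.y * width + x;
        if (z < zbuffer[i]) {        // closer than what is already there?
          zbuffer[i] = z;            // remember the new depth
          const p = i * 4;
          image.data[p] = polygon.color[0];
          image.data[p + 1] = polygon.color[1];
          image.data[p + 2] = polygon.color[2];
          image.data[p + 3] = 255;
        }
      }
    }
  }
  ctx.putImageData(image, 0, 0);
}

Note that for strictly correct results under perspective projection you would interpolate 1/z (or a depth value computed before the perspective divide) rather than screen-space z, but the structure stays the same.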
A variant of the Z-buffer that performs better on the CPU, by never drawing pixels that would later be overwritten, is the segment buffer: http://www.gamedev.net/reference/articles/article668.asp
Another approach is Warnock's algorithm. It uses recursion, which makes it hard to run on a GPU, but a CPU will do fine as long as you use your own stack to avoid stack overflow. The idea is to divide the screen into 4 parts and check whether visibility in each part is trivial to resolve, e.g. a single polygon covers the whole part. If not, split the part again until the condition is met (in the worst case it is met at the pixel level). Pseudocode:
void warnock(Rectangle rect)
{
    if (every polygon overlapping rect covers it completely || rect is a single pixel) {
        // Simple case: fill the rectangle with the closest covering polygon
        float minZ = infinity;
        foreach (polygon in polygons) {
            if (rect is inside polygon) {
                float z = interpolateZ(polygon, rect.x + rect.width / 2, rect.y + rect.height / 2); // Z at the centre of the rectangle
                if (z < minZ) { // Closer polygons overdraw farther ones, so the topmost one ends up visible
                    fillRect(rect, polygon.color);
                    minZ = z;
                }
            }
        }
    } else {
        // Complex case: divide into 4 subrectangles and recurse
        warnock(Rectangle(rect.x, rect.y, rect.width / 2, rect.height / 2));                                    // Top left
        warnock(Rectangle(rect.x + rect.width / 2, rect.y, rect.width / 2, rect.height / 2));                   // Top right
        warnock(Rectangle(rect.x, rect.y + rect.height / 2, rect.width / 2, rect.height / 2));                  // Bottom left
        warnock(Rectangle(rect.x + rect.width / 2, rect.y + rect.height / 2, rect.width / 2, rect.height / 2)); // Bottom right
    }
}
The painter's algorithm is what you have already done with your cubes; the only difference is that it sorts individual polygons instead of whole objects. Even then, intersecting and cyclically overlapping polygons are hard to resolve, and I personally wouldn't use it for non-trivial scenes.
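If you do want to try it, the polygon-level version is just a depth sort before drawing. A rough sketch follows; the Polygon3D shape and the averageViewZ helper are purely illustrative, and the intersecting/overlapping cases mentioned above are exactly where a sort like this has no correct answer.

// Painter's algorithm sketch: sort polygons far-to-near by average view-space Z,
// then draw in that order so nearer polygons overwrite farther ones.
// Assumes larger Z means farther from the camera.
interface Polygon3D {
  viewVertices: { x: number; y: number; z: number }[];
  draw(ctx: CanvasRenderingContext2D): void;
}

function averageViewZ(p: Polygon3D): number {
  return p.viewVertices.reduce((sum, v) => sum + v.z, 0) / p.viewVertices.length;
}

function paintersDraw(ctx: CanvasRenderingContext2D, polygons: Polygon3D[]): void {
  [...polygons]
    .sort((a, b) => averageViewZ(b) - averageViewZ(a)) // farthest first
    .forEach(p => p.draw(ctx));
}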
Another algorithm you might use is backface culling. It only works for convex objects that do not overlap, though. The algorithm computes the normal of each polygon, and if the normal points away from the camera, the polygon is removed.
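In a software renderer you do not even need the 3D normal for this: after projection, the signed area of the polygon (shoelace formula) gives you its winding, and with consistently wound convex solids a reversed winding means a back face. A sketch, assuming counter-clockwise front faces in screen space; which sign counts as "front" depends on your projection and winding convention.

// Backface test on an already-projected polygon: the sign of the signed area
// tells you the winding. Here counter-clockwise (positive area) is assumed to
// be front-facing; flip the comparison if your convention differs.
function isBackFace(screenVerts: { x: number; y: number }[]): boolean {
  let doubleArea = 0;
  for (let i = 0; i < screenVerts.length; i++) {
    const a = screenVerts[i];
    const b = screenVerts[(i + 1) % screenVerts.length];
    doubleArea += a.x * b.y - b.x * a.y; // shoelace formula
  }
  return doubleArea <= 0; // cull clockwise (and degenerate) faces
}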
Raycasting is another way to determine visibility per pixel. It is, however, quite CPU intensive. The basic idea is to check, for each pixel of the screen, which polygon is hit first by the ray cast through that pixel. The rays originate at the eye position. Pseudocode:
foreach (pixel in screen) {
    float minZ = infinity; // Could also start at zfar from the perspective projection
    Color pixelColor = backgroundColor;
    foreach (polygon in projectedPolygons) {
        if (polygon contains Point(pixel.x, pixel.y)) {
            float z = interpolateZ(polygon, pixel.x, pixel.y); // Z of this polygon at (x, y), interpolated from its vertices
            if (z < minZ) {
                minZ = z;
                pixelColor = polygon.color;
            }
        }
    }
    drawPixel(pixel.x, pixel.y, pixelColor); // Paint the closest polygon found (or the background)
}
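The two helpers the loop relies on, the containment test and interpolateZ, boil down to the same computation for triangles: barycentric coordinates. A sketch of that (triangles only, screen-space vertices carrying a Z value; as with the Z-buffer, perspective-correct depth would interpolate 1/z instead):

// Barycentric helper for triangles: returns the interpolated Z at (px, py),
// or null if the point lies outside the triangle.
interface Vertex { x: number; y: number; z: number; }

function zAtPoint(a: Vertex, b: Vertex, c: Vertex, px: number, py: number): number | null {
  const det = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
  if (det === 0) return null; // degenerate triangle
  const w0 = ((b.y - c.y) * (px - c.x) + (c.x - b.x) * (py - c.y)) / det;
  const w1 = ((c.y - a.y) * (px - c.x) + (a.x - c.x) * (py - c.y)) / det;
  const w2 = 1 - w0 - w1;
  if (w0 < 0 || w1 < 0 || w2 < 0) return null; // outside the triangle
  return w0 * a.z + w1 * b.z + w2 * c.z;        // interpolate Z with the same weights
}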