Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.
Aagh! These answers are very uninformed!
Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").
A classic categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated now, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
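To make ray casting concrete, here is a minimal sketch in Python (the sphere-only scene, the pinhole camera at the origin, and names like Sphere and trace_pixel are all made up for illustration, not taken from any particular renderer): for each pixel, build the ray from the camera through that pixel, intersect it against every object, and keep the nearest hit.

    import math

    class Sphere:
        def __init__(self, center, radius, color):
            self.center, self.radius, self.color = center, radius, color

        def intersect(self, origin, direction):
            # Solve |origin + t*direction - center|^2 = radius^2; direction is
            # assumed normalized, so the quadratic's leading coefficient is 1.
            oc = tuple(o - c for o, c in zip(origin, self.center))
            b = 2.0 * sum(d * e for d, e in zip(direction, oc))
            c = sum(e * e for e in oc) - self.radius ** 2
            disc = b * b - 4.0 * c
            if disc < 0:
                return None
            t = (-b - math.sqrt(disc)) / 2.0
            return t if t > 1e-6 else None

    def trace_pixel(x, y, width, height, scene):
        # Pinhole camera at the origin looking down -z: map the pixel to a point
        # on an image plane at z = -1, then normalize to get the ray direction.
        px = 2.0 * (x + 0.5) / width - 1.0
        py = 1.0 - 2.0 * (y + 0.5) / height
        length = math.sqrt(px * px + py * py + 1.0)
        direction = (px / length, py / length, -1.0 / length)
        nearest_t, nearest_color = float("inf"), (0, 0, 0)   # background color
        for obj in scene:
            t = obj.intersect((0.0, 0.0, 0.0), direction)
            if t is not None and t < nearest_t:
                nearest_t, nearest_color = t, obj.color
        return nearest_color

    scene = [Sphere((0.0, 0.0, -5.0), 1.0, (255, 0, 0))]
    print(trace_pixel(32, 32, 64, 64, scene))   # the center pixel sees the sphere

Shading, antialiasing, and everything else gets layered on top of that same nearest-hit query.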
Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. People were pretty darned creative with algorithms back in the days when compute cycles were scarce and there wasn't enough memory to store a z-buffer.
These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3, or a combination. Various ray tracing hardware has been proposed, and sometimes built, but it never caught on; meanwhile, modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage compared to their hard-coded rasterization techniques. Other, more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
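As a rough sketch of technique (1), here is a toy z-buffer rasterizer in Python (screen-space triangles with constant depth and a tiny made-up framebuffer; a real pipeline adds projection, clipping, attribute interpolation, and much more): every triangle is tested against every pixel with edge functions, and the depth buffer keeps whichever surface is nearest.

    WIDTH, HEIGHT = 8, 8
    depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def edge(a, b, p):
        # Signed area test: positive when p lies on one consistent side of a->b.
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    def draw_triangle(v0, v1, v2, z, tri_color):
        # v0, v1, v2 are screen-space (x, y) with consistent winding;
        # z is a constant depth per triangle, just to keep the sketch short.
        for y in range(HEIGHT):
            for x in range(WIDTH):
                p = (x + 0.5, y + 0.5)
                inside = (edge(v0, v1, p) >= 0 and
                          edge(v1, v2, p) >= 0 and
                          edge(v2, v0, p) >= 0)
                if inside and z < depth[y][x]:   # depth test: keep the nearest surface
                    depth[y][x] = z
                    color[y][x] = tri_color

    draw_triangle((1, 1), (6, 1), (1, 6), z=5.0, tri_color=(255, 0, 0))
    draw_triangle((0, 0), (7, 0), (0, 7), z=9.0, tri_color=(0, 255, 0))  # farther, mostly hidden
    print(color[2][2])   # (255, 0, 0): the nearer red triangle wins

Real hardware restricts the pixel loop to each triangle's bounding box and runs it massively in parallel, but the depth-compare idea is the same.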
"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.
Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (light that bounces between objects, as opposed to arriving directly from light sources).
"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).
People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. That's a problem orthogonal to ray tracing vs. rasterization. For example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize higher-order parametric surfaces in scanline order (cf. Lane/Carpenter et al., CACM 1980).