On an NVIDIA card I can perform full-scene anti-aliasing using the accumulation buffer, something like this:
if (m_antialias)
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int j = 0; j < antialiasing; j++)
    {
        accPerspective(m_camera.FieldOfView(), // Vertical field of view in degrees.
                       aspectratio,            // The aspect ratio.
                       20.,                    // Near clipping plane.
                       1000.,                  // Far clipping plane.
                       JITTER[antialiasing][j].X(), JITTER[antialiasing][j].Y(),
                       0.0, 0.0, 1.0);
        m_camera.gluLookAt();
        ActualDraw();

        // Accumulate this jittered pass, weighted by 1/antialiasing.
        glAccum(GL_ACCUM, float(1.0 / antialiasing));

        // Show the partial result in the front buffer as we go.
        glDrawBuffer(GL_FRONT);
        glAccum(GL_RETURN, float(antialiasing) / (j + 1));
        glDrawBuffer(GL_BACK);
    }
    glAccum(GL_RETURN, 1.0);
}
On ATI cards the accumulation buffer is not implemented, and everyone says you can do that in a shader now. The problem with that, of course, is that GLSL is a pretty high barrier to entry for an OpenGL beginner.
Can anyone point me to something that will show me how to do whole-scene anti-aliasing in a way that works on ATI cards, and that a newbie can understand?
Why would you ever do antialiasing this way, regardless of whether you have accumulation buffers or not? Just use multisampling; it's not free, but it's much cheaper than what you're doing.
First, you have to create a context with a multisampled buffer. That means you need to use WGL/GLX_ARB_multisample, which means that on Windows you need to do two-stage context creation. You should request a pixel format with *_SAMPLE_BUFFERS_ARB set to 1 and some number of *_SAMPLES_ARB. The larger the number of samples, the better the antialiasing (and the slower the rendering). You can query the maximum number of samples with wglGetPixelFormatAttribfvARB or glXGetConfig.
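As a rough sketch of the Windows side, the second-stage pixel format request through wglChoosePixelFormatARB (from WGL_ARB_pixel_format) might look like this; the hdc variable and the specific color, depth, and sample values are assumptions for illustration:

// Assumes a dummy context is already current, so that
// wglChoosePixelFormatARB could be fetched with wglGetProcAddress
// (the two-stage creation mentioned above).
const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,  // assumed color depth
    WGL_DEPTH_BITS_ARB,     24,  // assumed depth-buffer depth
    WGL_SAMPLE_BUFFERS_ARB, 1,   // we want a multisample buffer...
    WGL_SAMPLES_ARB,        4,   // ...with 4 samples per pixel (assumed)
    0                            // terminator
};

int pixelFormat;
UINT numFormats;
if (wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pixelFormat, &numFormats)
    && numFormats > 0)
{
    // Set pixelFormat on the real window's DC, then create the real context.
}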
Once you have successfully created a context with a multisample framebuffer, you render as normal, with one exception: call glEnable(GL_MULTISAMPLE) in your setup code. This will activate multisampled rendering. And that's all you need.
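For completeness, the only change to the rendering code itself is that single call during initialization, shown here in a hypothetical InitGL function:

void InitGL()
{
    // ... other one-time state setup ...
    glEnable(GL_MULTISAMPLE); // rasterize into all samples of the multisample buffer
}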
Alternatively, if you're using GL 3.x or have access to ARB_framebuffer_object, you can skip the context stuff and create a multisampled framebuffer. Your depth buffer and color buffer(s) must all have the same number of samples. I would suggest using renderbuffers for these, since you're still using fixed-function (and you can't texture from a multisample texture in the fixed-function pipeline).
You create multisampled renderbuffers for color and depth (they must have the same number of samples). You set them up in an FBO and render into them (with glEnable(GL_MULTISAMPLE), of course). When you're done, you use glBlitFramebuffer to blit from your multisample framebuffer into the back-buffer (which shouldn't be multisampled).
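Here is a minimal sketch of that path, assuming a GL 3.x context; the width, height, and samples variables are placeholders, and error checking is omitted:

GLuint fbo, colorRb, depthRb;

// Multisampled color renderbuffer.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);

// Multisampled depth renderbuffer (same sample count as the color buffer).
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, width, height);

// Attach both to an FBO; check glCheckFramebufferStatus in real code.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

// Render the scene into the multisampled FBO.
glEnable(GL_MULTISAMPLE);
ActualDraw(); // the scene-drawing function from the question

// Resolve: blit the multisampled image into the (non-multisampled) back-buffer.
// For a multisample resolve the rectangles must match and the filter
// must be GL_NEAREST.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);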
"The problem with that, of course, is that GLSL is a pretty high barrier to entry for an OpenGL beginner."
Says who? There is nothing wrong with a beginner learning from shaders. Indeed, in my experience, such beginners often learn better, because they understand the details of what's going on more effectively.