What's the concept of and differences between Framebuffer and Renderbuffer in OpenGL?

The Framebuffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which, in turn, are the actual buffers. You can think of the Framebuffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Framebuffer object has a very low memory footprint.
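Purely to visualize that analogy, a conceptual C sketch could look like the following; OpenGL does not actually expose any such struct, and every name here is invented for illustration:

```c
/* Conceptual sketch only: OpenGL does not expose framebuffers this way. */
struct Attachment;                 /* the actual storage: a renderbuffer or a texture image */

struct Framebuffer {
    struct Attachment *color[8];   /* GL_COLOR_ATTACHMENT0 .. GL_COLOR_ATTACHMENT7 */
    struct Attachment *depth;      /* GL_DEPTH_ATTACHMENT   */
    struct Attachment *stencil;    /* GL_STENCIL_ATTACHMENT */
};
```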

Now each buffer attached to a Framebuffer can be a Renderbuffer or a texture.
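As a rough sketch of how both kinds of attachment are wired up (this assumes an OpenGL 3.x context with a loader such as GLEW or GLAD already initialized; the helper's name, the sizes and the formats below are just example choices):

```c
#include <GL/glew.h>   /* assumed loader; any GL 3.x function loader works */

/* Hypothetical helper: builds an FBO with a texture color attachment
 * and a renderbuffer depth attachment. */
GLuint create_offscreen_fbo(GLsizei width, GLsizei height,
                            GLuint *out_color_tex, GLuint *out_depth_rbo)
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* Color attachment as a texture, so it can be sampled later */
    glGenTextures(1, out_color_tex);
    glBindTexture(GL_TEXTURE_2D, *out_color_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *out_color_tex, 0);

    /* Depth attachment as a renderbuffer: write-only storage, never sampled */
    glGenRenderbuffers(1, out_depth_rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, *out_depth_rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, *out_depth_rbo);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle an incomplete framebuffer here */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return fbo;
}
```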

The Renderbuffer is an actual buffer (an array of bytes, integers, or pixels). The Renderbuffer stores pixel values in a native format, so it's optimized for offscreen rendering. In other words, drawing to a Renderbuffer can be much faster than drawing to a texture. The drawback is that the pixels use a native, implementation-dependent format, so reading from a Renderbuffer is much harder than reading from a texture. Nevertheless, once a Renderbuffer has been painted, one can copy its content directly to the screen (or to another Renderbuffer, I guess) very quickly using pixel transfer operations. This means that a Renderbuffer can be used to efficiently implement the double buffer pattern that you mentioned.
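For instance, a pixel transfer copy from the offscreen framebuffer to the window might look like this sketch, reusing the hypothetical fbo from the snippet above and an assumed 800x600 size:

```c
/* Copy (blit) the offscreen result into the window's default framebuffer (id 0).
 * Assumes the fbo and the 800x600 size from the earlier sketch. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 800, 600,       /* source rectangle      */
                  0, 0, 800, 600,       /* destination rectangle */
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```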

Renderbuffers are a relatively new concept. Before them, a Framebuffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw one scene onto a surface of another scene!
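A minimal sketch of that render-to-texture flow, reusing the hypothetical fbo and its color texture (called color_tex here) and leaving out the shaders and draw calls:

```c
/* Pass 1: draw the scene into the offscreen FBO (its color attachment is a texture) */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 800, 600);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
/* ... issue the scene's draw calls ... */

/* Pass 2: switch back to the default framebuffer and sample the texture,
 * e.g. on a fullscreen quad (post-processing) or on a surface in another scene */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);
/* ... draw geometry whose fragment shader samples this texture ... */
```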

The OpenGL wiki has this page that shows more details and links.


This page has some details which I think explain the difference quite nicely. Firstly:

The final rendering destination of the OpenGL pipeline is called [the] framebuffer.

Whereas:

Renderbuffer Object
In addition, the renderbuffer object is newly introduced for offscreen rendering. It allows rendering a scene directly to a renderbuffer object, instead of rendering to a texture object. A renderbuffer is simply a data storage object containing a single image of a renderable internal format. It is used to store OpenGL logical buffers that do not have a corresponding texture format, such as the stencil or depth buffer.
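To make that concrete, here is a minimal sketch of a renderbuffer used as a combined depth/stencil buffer that is never sampled as a texture (it assumes a GL 3.x context and that an FBO is currently bound; the names and sizes are placeholders):

```c
/* Sketch: a renderbuffer holding a combined depth/stencil buffer.
 * Assumes an FBO is currently bound to GL_FRAMEBUFFER. */
GLuint depth_stencil_rbo;
glGenRenderbuffers(1, &depth_stencil_rbo);
glBindRenderbuffer(GL_RENDERBUFFER, depth_stencil_rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, 800, 600);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                          GL_RENDERBUFFER, depth_stencil_rbo);
```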