I am trying to write a program that draws video camera frames onto a quad. I saw tutorials explaining that doing this with framebuffers can be faster, but I am still learning how to do it. Besides framebuffers, I found that there are also renderbuffers.
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I understand that renderbuffers are for depth testing, which I think only checks the Z position of each pixel, so it would seem silly to have to create a renderbuffer for my scenario. Is that correct?
Renderbuffer Objects are OpenGL Objects that contain images. They are created and used specifically with Framebuffer Objects. They are optimized for use as render targets, while Textures may not be, and are the logical choice when you do not need to sample (i.e. in a post-pass shader) from the produced image.
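For illustration, a minimal sketch of creating a renderbuffer and allocating its storage, assuming an OpenGL 3.x context is already current; width and height are placeholder variables:

    /* Create a renderbuffer and allocate storage for it.
       GL_DEPTH_COMPONENT24 makes it a depth buffer; a color format
       such as GL_RGBA8 would work the same way. */
    GLuint depthRbo;
    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);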
A framebuffer (frame buffer, or sometimes framestore) is a portion of random-access memory (RAM) containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. Modern video cards contain framebuffer circuitry in their cores.
A Framebuffer is a collection of buffers that can be used as the destination for rendering. OpenGL has two kinds of framebuffers: the Default Framebuffer, which is provided by the OpenGL Context; and user-created framebuffers called Framebuffer Objects (FBOs).
The term "frame buffer" traditionally refers to the region of memory that holds the color data for the image displayed on a computer screen. In WebGL, a framebuffer is a data structure that organizes the memory resources that are needed to render an image.
A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc all go into a framebuffer object.
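As a rough sketch of what that means in code (again assuming an OpenGL 3.x context, with colorTex standing in for an already-allocated GL_RGBA8 texture):

    /* Create a framebuffer object and attach a texture as its color buffer.
       Rendering while this FBO is bound writes into colorTex. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);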
A renderbuffer is like a texture, but with two important differences: it is always 2D with no mipmaps, so it is always exactly one image; and you cannot read from it in a shader. You can attach one to an FBO, but you cannot use it as a texture.
So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is a common use case for renderbuffers, because if you're rendering the colors to a texture, you usually don't care about the depth. You need a depth buffer because you need depth testing for hidden-surface removal, but you don't need to sample from that depth. So instead of making a depth texture, you make a depth renderbuffer.
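A sketch of that common combination, reusing the hypothetical fbo and depthRbo from the snippets above: a color texture you will sample later, plus a depth renderbuffer used only for depth testing.

    /* Attach the depth renderbuffer to the FBO that already has a
       color texture attached, then verify the framebuffer is complete. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRbo);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle the error: attachments are inconsistent or unsupported */
    }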
But renderbuffers can also use color formats rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
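For example, reading back a color renderbuffer might look like this sketch, assuming it is attached to fbo at GL_COLOR_ATTACHMENT0:

    /* Read the pixels of a color renderbuffer back to client memory.
       (malloc needs <stdlib.h>; width/height are placeholders.) */
    unsigned char *pixels = malloc((size_t)width * height * 4);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);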
Oddly enough, none of that answers your question:
The question is, if the purpose is only to write a texture into a quad that will fill up the screen, do I really need a renderbuffer?
I don't see why you need a framebuffer or a renderbuffer of any kind. A texture is a texture; just draw a textured quad.
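A sketch of that simpler path, with hypothetical names: videoTex for a texture already allocated with glTexImage2D, frameData for a tightly packed RGBA camera frame, and quadVao/texturedQuadProgram for a fullscreen-quad setup created elsewhere.

    /* Upload the new camera frame into the existing texture... */
    glBindTexture(GL_TEXTURE_2D, videoTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, frameData);

    /* ...and draw a fullscreen quad sampling it. No FBO, no renderbuffer. */
    glUseProgram(texturedQuadProgram);
    glBindVertexArray(quadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);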