
Difference between format and internalformat

Tags:

opengl

I did search and read stuff about this but couldn't understand it.

What's the difference between a texture internal format and format in a call like

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data); 

?

Let's assume that data is an array of 32 x 32 pixel values with four bytes per pixel (unsigned char data, 0-255) for red, green, blue and alpha.

What's the difference between the first GL_RGBA and the second one? Why is GL_RGBA_INTEGER invalid in this context?

asked Dec 28 '15 by Dean


People also ask

What is OpenGL pixel format?

A pixel format specifies several properties of an OpenGL drawing surface. Some of the properties specified by a pixel format are: Whether the pixel buffer is single- or double-buffered. Whether the pixel data is in RGBA or color-index form. The number of bits used to store color data.

What is Gl_depth_component?

GL_DEPTH_COMPONENT. Each element is a single depth value. The GL converts it to floating point and clamps to the range [0,1]. GL_DEPTH_STENCIL. Each element is a pair of depth and stencil values.

What is Gl_rgba?

GL_RGBA. Red, green, blue, and alpha values (RGBA)

What is a floating point texture?

Basically, a floating point texture is a texture in which the data is of floating point type, i.e. it is not clamped. So if you have 3.14f in your texture, you will read the same value in the shader. You may create them with different numbers of channels, and you may create 16- or 32-bit textures depending on the format.


2 Answers

The format (7th argument), together with the type argument, describes the data you pass in as the last argument. So the format/type combination defines the memory layout of the data you pass in.

internalFormat (2nd argument) defines the format that OpenGL should use to store the data internally.

Often, the two will be very similar. In fact, it is beneficial to make the two formats directly compatible; otherwise there will be a conversion while loading the data, which can hurt performance. Full OpenGL allows combinations that require conversions, while OpenGL ES limits the supported combinations so that conversions are not needed in most cases.

The reason GL_RGBA_INTEGER is not legal in this case is that there are rules about which conversions between format and internalFormat are supported. In this case, GL_RGBA for the internalFormat specifies a normalized format, while GL_RGBA_INTEGER for format specifies that the input consists of values that should be used as integers. There is no conversion defined between these two.

While GL_RGBA for internalFormat is still supported for backwards compatibility, sized types are generally used for internalFormat in modern versions of OpenGL. For example, if you want to store the data as an 8-bit per component RGBA image, the value for internalFormat is GL_RGBA8.
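For illustration, a minimal sketch of the call from the question rewritten with a sized internal format (assuming data still points to the 32 x 32 RGBA byte array):

/* Store as 8-bit normalized RGBA; the client data is RGBA bytes, so no conversion is needed. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);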

Frankly, I think there would be cleaner ways of defining these APIs. But this is just the way it works. Partly it evolved this way to maintain backwards compatibility with OpenGL versions where features were much more limited. Newer versions of OpenGL add the glTexStorage*() entry points, which make some of this nicer because they separate the internal data allocation from the specification of the data.
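A rough sketch of that separation (assuming an OpenGL 4.2+ or ARB_texture_storage context and the same 32 x 32 RGBA byte data):

/* Allocation: glTexStorage2D takes only a sized internal format. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 32, 32);
/* Data specification: the format/type pair describes only the client data. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 32, 32, GL_RGBA, GL_UNSIGNED_BYTE, data);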

answered Oct 04 '22 by Reto Koradi


The internal format describes how the texture shall be stored on the GPU. The format (together with the type parameter) describes the format of your pixel data in client memory.

Note that the internal format specifies both the number of channels (1 to 4) and the data type, while for the pixel data in client memory, the two are specified via two separate parameters.

The GL will convert your pixel data to the internal format. If you want efficient texture uploads, you should use matching formats so that no conversion is needed. But be aware that most GPUs store the texture data in BGRA order; this is still represented by the internal format GL_RGBA, since the internal format only describes the number of channels and the data type, and the internal layout is totally GPU-specific. However, that means that for maximum performance it is often recommended to use GL_BGRA as the format of your pixel data in client memory.
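A sketch of that recommendation, assuming a hypothetical bgra_data pointer whose bytes are laid out as B, G, R, A (whether this is actually faster depends on the driver and GPU):

/* Internal storage stays GL_RGBA8; only the client-side format changes. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0, GL_BGRA, GL_UNSIGNED_BYTE, bgra_data);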

Let's assume that data is an array of 32 x 32 pixel values where there are four bytes per each pixel (unsigned char data 0-255) for red, green, blue and alpha. What's the difference between the first GL_RGBA and the second one?

The first one, internalFormat, tells the GL that it should store the texture as 4-channel (RGBA) data with normalized integers at its preferred precision (8 bits per channel). The second one, format, tells the GL that you are providing 4 channels per pixel in R, G, B, A order.

You could, for example, supply the data as 3-channel RGB data and the GL would automatically extend this to RGBA (setting A to 1) if the internal format is left at RGBA. You could also supply only the red channel.

The other way around, if you use GL_RED as the internalFormat, the GL would ignore the G, B and A channels in your input data.
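Sketches of both cases, with hypothetical rgb_data and rgba_data pointers of the appropriate sizes:

/* 3-channel client data stored as RGBA: the GL sets A to 1. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, rgb_data);

/* 4-channel client data stored as a single-channel texture: G, B and A are ignored. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 32, 32, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba_data);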

Also note that the data types will be converted as well. If you provide RGB pixel data with a 32-bit float per channel, you would use GL_FLOAT as the type. However, if you still use the GL_RGBA internal format, the GL will convert the values to normalized integers with 8 bits per channel, so the extra precision is lost. If you want the GL to keep the floating point precision, you also have to use a floating point internal format like GL_RGBA32F.
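A sketch of the floating-point case, assuming a hypothetical float_data pointer to 32 x 32 x 4 floats:

/* The floating-point internal format keeps the full 32-bit precision;
   with GL_RGBA instead, the floats would be converted to 8-bit normalized values. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 32, 32, 0, GL_RGBA, GL_FLOAT, float_data);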

Why is GL_RGBA_INTEGER invalid in this context?

The _INTEGER formats are for unnormalized integer textures. There is no automatic conversion for integer textures in the GL. You have to use an integer internal format, and you have to specify your pixel data with one of the _INTEGER formats, otherwise it will result in an error.
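A sketch of a legal integer-texture upload, assuming a hypothetical uint_data pointer to 32 x 32 x 4 unsigned bytes that the shader reads through a usampler2D:

/* Integer internal format (GL_RGBA8UI) paired with an _INTEGER client format;
   mixing integer and normalized formats here is an error. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 32, 32, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, uint_data);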

answered Oct 04 '22 by derhass