 

sRGB textures. Is this correct?


I've recently been reading a little about sRGB formats and how they allow the hardware to automatically perform colour correction for typical monitors. As part of my reading, I see that you can simulate this step with an ordinary texture and a pow function on the return result.
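Something like the following fragment shader is what I mean by the manual approach (just an illustrative sketch using the common 2.2 approximation of the sRGB curve; `tex` is an ordinary, non-sRGB texture):

    #version 330 core
    // Manual gamma handling with an ordinary (non-sRGB) RGBA texture
    // that holds sRGB-encoded pixels.
    uniform sampler2D tex;
    in vec2 uv;
    out vec4 fragColor;

    void main()
    {
        vec4 texel = texture(tex, uv);

        // Decode: approximate sRGB -> linear, so lighting maths happens in linear space.
        vec3 colour = pow(texel.rgb, vec3(2.2));

        // ... lighting computations on 'colour' would go here ...

        // Encode: linear -> approximate sRGB for display on a typical monitor.
        fragColor = vec4(pow(colour, vec3(1.0 / 2.2)), texel.a);
    }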

Anyway, I want to ask two questions, as I've never used this feature before. Firstly, can anyone confirm from my screenshot that this is what you would expect to see? The left picture is ordinary RGBA and the right picture uses an sRGB target. There is no ambient lighting in the scene and the model is bog-standard Phong (the light is a spotlight).

[Screenshot: left shows ordinary RGBA, right shows sRGB]

The second question I would like to ask is at what point is the correction actually performed by the hardware? For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon). Should I use sRGB textures attached to the FBO, or do I only need to specify an sRGB texture as the back buffer target? If you're using sRGB, should ALL texture resources be sRGB?

asked Apr 27 '12 by Robinson



1 Answer

Note: the following discussion assumes you understand what the sRGB colorspace is, what gamma correction is, what a linear RGB colorspace is, and so forth. This focuses primarily on the OpenGL implementation of the technology.

If you want an in-depth discussion of these subjects, I would suggest looking at my tutorials on HDR/gamma correction (to understand linear colorspaces and gamma), as well as the tutorial on sRGB images and how they handle gamma correction.

Firstly, can anyone confirm from my screenshot that this is what you would expect to see?

I'm not sure I understand what you mean by that question. If you apply proper gamma correction (which is what sRGB does more or less), you will generally get more detail in darker areas of the image and a "brighter" result.

However, the correct way to think about it is that until you do proper gamma correction all of your images have been wrong. Your images have been too dark, and the gamma correction is now making them the appropriate brightness. Every decision you've made about what colors things should be and how bright lights ought to be has been wrong.

The second question I would like to ask is at what point is the correction actually performed by the hardware?

This is a very different question from the one your "for example" goes on to describe.

sRGB images (remember: a texture contains images, but framebuffers can have images too) can be used in the following contexts:

  • Transferring data from the user directly to the image (for example, with glTexSubImage2D and so forth). OpenGL assumes that you are providing data that is already in the sRGB colorspace, so there is no translation of the data when you upload it. This is done because it makes the most sense: generally, any image you get from an artist will be in the sRGB colorspace unless the artist took great pains to put it in some other colorspace. Virtually every image editor works directly in sRGB. (See the sketch after this list.)

  • Reading values in shaders via samplers (i.e., sampling a texture). This is quite simple as well. OpenGL knows that the texel data in the image is in the sRGB colorspace, and it assumes that the shader wants linear RGB color data. Therefore, all attempts to sample from a texture with an sRGB image format will result in the sRGB->lRGB conversion. This conversion is free, by the way.

    And on the plus side, if you've got GL 3.x+ capable hardware, you'll almost certainly get filtering done in the linear colorspace, where it makes sense. sRGB is a non-linear colorspace, so linear interpolation of sRGB values is always wrong.

  • Storing values output from the fragment shader to the framebuffer image(s). This is where it gets slightly complicated. Even if the framebuffer image you're rendering to is in the sRGB colorspace, that's not enough to force conversion. You must explicitly glEnable(GL_FRAMEBUFFER_SRGB); this tells OpenGL that the values you're writing from your fragment shader are linear colorspace values, so OpenGL needs to convert them to sRGB when storing them in the image.

    Again, if you've got GL 3.x+ hardware, you'll almost certainly get blending in the linear colorspace. That is, OpenGL will read the sRGB value from the framebuffer, convert it to a linear RGB value, blend it with the incoming linear RGB value (the one you wrote from your shader), convert the blended value into the sRGB colorspace and store it. Again, that's what you want; blending in the sRGB colorspace is always bad.
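In code, the three contexts above look roughly like this (a minimal sketch, not a complete program; the texture handle, pixel data, and dimensions are assumed to already exist):

    /* 1. Upload: OpenGL assumes 'pixels' is already sRGB-encoded, so no
       conversion happens here. The sRGB-ness lives in the internal format. */
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* 2. Sampling: nothing changes in the shader. texture() returns linear
       RGB, because the hardware performs the sRGB->lRGB conversion for you. */

    /* 3. Writing: conversion on store is opt-in. With this enabled, linear
       values written by the fragment shader are encoded to sRGB, and
       blending happens in linear space. */
    glEnable(GL_FRAMEBUFFER_SRGB);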

Now that we understand that, let's look at your example.

For example I am writing frames to an FBO, then later I'm rendering a screen-sized quad to the back buffer using an FBO colour buffer (I'm intending to switch to deferred shading soon).

The problem with this is that you're not asking the right questions. What you need to keep in mind, especially as you move into deferred rendering, is this question:

Is this linear RGB or not?

In general, you should hold off on storing any intermediate data in a gamma-correct space for as long as possible. So any intermediate buffers (i.e., where you accumulate your lights) should not be sRGB.

This isn't about the cost of the conversion; it's really about what you're doing. If you're doing deferred rendering, then you're probably also doing HDR lighting and so forth. So your light accumulation buffer needs to be floating-point. And float buffers are always linear; there's no reason for them to not be linear.

Your final image, the default framebuffer, must be sRGB if you want to take advantage of free gamma correction (and you do). If you do all your work in HDR float buffers, and then tone-map the result down for the final display, you should write that to an sRGB image.
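Put together, the pipeline you describe would look something like this sketch (hdrFBO and hdrTex are placeholder names; FBO completeness checks and draw calls are omitted):

    /* Intermediate light-accumulation buffer: floating-point, therefore
       linear. Deliberately NOT an sRGB format. */
    glBindTexture(GL_TEXTURE_2D, hdrTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glBindFramebuffer(GL_FRAMEBUFFER, hdrFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, hdrTex, 0);

    /* Pass 1: render the scene into hdrFBO; all lighting stays linear. */

    /* Pass 2: tone-map to the default framebuffer. Because the default
       framebuffer is sRGB-capable and GL_FRAMEBUFFER_SRGB is enabled,
       the linear tone-mapped output is gamma-corrected for free on store. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glEnable(GL_FRAMEBUFFER_SRGB);
    /* ... draw the screen-sized quad, sampling hdrTex and tone-mapping ... */
    glDisable(GL_FRAMEBUFFER_SRGB);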

answered Sep 20 '22 by Nicol Bolas