 

How does a GLSL sampler determine the minification, and thus the mipmap level, of a texture?

I am working with OpenGL ES (via WebGL), but I think this question is applicable to the full OpenGL profile as well.

Suppose I create an OpenGL texture with full mipmap levels, and I set its TEXTURE_MIN_FILTER to NEAREST_MIPMAP_NEAREST. Also suppose that I have a fragment shader that samples this texture. The mipmap level is chosen based on the degree of minification of the texture, but how is the degree of minification chosen?

In my case, I am synthesizing (inside the shader) the texture coordinates that I use to sample my texture. In fact, my texture coordinates are not based on any incoming varyings. Even though I have mipmapping enabled on this texture, it doesn't seem to have any effect. Is this expected? Do I need to compute the LOD myself and use the bias parameter of texture2D? (There is no texture2DLod in the fragment shader, since I'm using ES.)
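For reference, this is roughly what the bias overload looks like in a GLSL ES 1.00 fragment shader. The sampler name and the coordinate computation here are placeholders; note that the optional third argument is a bias added to the implicitly computed LOD, not an absolute mipmap level:

    precision mediump float;

    uniform sampler2D uTexture;   // placeholder sampler name

    void main() {
        // Texture coordinates synthesized in the shader, not taken from a varying.
        vec2 uv = fract(gl_FragCoord.xy / 64.0);

        // The third argument biases the LOD the hardware computed implicitly;
        // it does not select a level directly.
        gl_FragColor = texture2D(uTexture, uv, 2.0);
    }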

asked Sep 09 '11 by Daniel Yankowsky


People also ask

What is a sampler GLSL?

Samplers are a set of GLSL variable types. Variables of a sampler type must be declared as uniforms or passed as function parameters. Each sampler in a program represents a single texture of a particular texture type. The type of the sampler corresponds to the type of the texture that can be used by that sampler.
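For example, a sampler is typically declared as a uniform and read with texture coordinates; a minimal GLSL ES fragment shader sketch (the uniform and varying names are illustrative):

    precision mediump float;

    uniform sampler2D uDiffuseMap;  // bound to a texture unit from the API side
    varying vec2 vTexCoord;         // interpolated texture coordinates

    void main() {
        gl_FragColor = texture2D(uDiffuseMap, vTexCoord);
    }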

What is a Mipmap level?

A mipmap is a sequence of textures, each of which is a progressively lower resolution representation of the same image. The height and width of each image, or level, in the mipmap is half that of the previous level. Mipmaps do not have to be square.

What is a texture sampler?

Texture sampling is the process of reading textures through the GPU. Graphics hardware embeds a set of texture units that can read texture pixels directly or sample these textures using different filtering algorithms.

How do OpenGL textures work?

A texture is an OpenGL Object that contains one or more images that all have the same image format. A texture can be used in two ways: it can be the source of a texture access from a Shader, or it can be used as a render target.


2 Answers

Blocks of adjacent pixels are computed in parallel. (IIRC the PowerVR chips do a 4x4 block at a time, for example.) When you call texture2D in your fragment shader, the sampler fetches all 16 samples for all 16 pixels at once, and so has the adjacency information needed to calculate the minification level. This is part of why it's so important for adjacent pixels to sample from nearby areas of the texture.
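Conceptually, the level the hardware picks comes from how quickly the texture coordinates change between those adjacent pixels. If the OES_standard_derivatives extension is available (it is core in WebGL 2), you can approximate the same computation yourself. This sketch uses one common approximation of the spec's LOD formula; uTexture and uTextureSize are assumed uniforms:

    #extension GL_OES_standard_derivatives : enable
    precision mediump float;

    uniform sampler2D uTexture;   // assumed sampler
    uniform vec2 uTextureSize;    // assumed texture dimensions in texels

    // Approximate LOD: how many texels the coordinates move per screen
    // pixel, on a log2 scale.
    float mipLevel(vec2 uv) {
        vec2 texelCoord = uv * uTextureSize;
        vec2 dx = dFdx(texelCoord);
        vec2 dy = dFdy(texelCoord);
        float maxDeltaSq = max(dot(dx, dx), dot(dy, dy));
        return 0.5 * log2(maxDeltaSq);
    }

    void main() {
        vec2 uv = gl_FragCoord.xy / 256.0;
        // Visualize the computed level as a grey value for debugging.
        gl_FragColor = vec4(vec3(mipLevel(uv) / 8.0), 1.0);
    }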

Note that this only applies to fragment shaders. In vertex shaders the base mipmap level is always used (unless the Lod variant, texture2DLod, is used).
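In GLSL ES 1.00 that explicit variant is available in the vertex shader stage (assuming the implementation exposes vertex texture units); a sketch with made-up attribute and uniform names:

    attribute vec3 aPosition;
    attribute vec2 aTexCoord;
    uniform mat4 uMvpMatrix;        // assumed model-view-projection matrix
    uniform sampler2D uHeightMap;   // assumed displacement texture

    void main() {
        // Vertex shaders have no neighbouring-pixel information, so an
        // explicit level (here the base level, 0.0) is passed to texture2DLod.
        float height = texture2DLod(uHeightMap, aTexCoord, 0.0).r;
        gl_Position = uMvpMatrix * vec4(aPosition + vec3(0.0, height, 0.0), 1.0);
    }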

answered Sep 21 '22 by Matthew Marshall


You are allowed to compute texture coordinates arbitrarily, and the shader will act accordingly... within one restriction. Your computations cannot involve any conditional logic. They can involve varyings, uniforms, constants, values sampled from other textures, whatever you want. But the moment you slip so much as a ?: operator in there (let alone an if-statement), you're in trouble.
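The underlying reason is that the implicit derivatives used to pick the mipmap level are undefined inside non-uniform control flow. As a sketch of the problematic pattern and one workaround (sampling unconditionally and selecting afterwards), with made-up names:

    precision mediump float;

    uniform sampler2D uTexture;   // assumed sampler
    varying vec2 vTexCoord;
    varying float vBlend;         // assumed per-fragment selector

    void main() {
        // Problematic pattern (left as a comment): the implicit derivatives
        // are undefined because texture2D sits inside a branch whose
        // condition can differ between neighbouring pixels.
        //
        //   vec4 color;
        //   if (vBlend > 0.5) {
        //       color = texture2D(uTexture, vTexCoord * 2.0);
        //   } else {
        //       color = texture2D(uTexture, vTexCoord);
        //   }

        // Workaround: sample unconditionally so the coordinates (and their
        // derivatives) never depend on control flow, then select afterwards.
        vec4 a = texture2D(uTexture, vTexCoord * 2.0);
        vec4 b = texture2D(uTexture, vTexCoord);
        gl_FragColor = mix(b, a, step(0.5, vBlend));
    }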

And since you're in OpenGL ES land, you don't really have the tools to get yourself out of that trouble. Desktop GL 3.0 gives you the textureGrad set of functions, which allows you to compute gradients before reaching the conditional logic. But without that, there isn't much you can do.
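For reference, this is roughly how that looks in desktop GLSL 1.30: capture the gradients while control flow is still uniform, then hand them to textureGrad inside the branch. The variable names are illustrative:

    #version 130

    uniform sampler2D uTexture;   // assumed sampler
    in vec2 vTexCoord;
    in float vSelector;
    out vec4 fragColor;

    void main() {
        // Compute the gradients while still in uniform control flow.
        vec2 dx = dFdx(vTexCoord);
        vec2 dy = dFdy(vTexCoord);

        vec2 uv = vTexCoord;
        if (vSelector > 0.5) {
            // Arbitrary per-pixel rewrite of the coordinates.
            uv = fract(uv * 4.0);
            dx *= 4.0;
            dy *= 4.0;
        }

        // textureGrad uses the supplied derivatives instead of implicit ones,
        // so the branch above no longer affects mipmap selection.
        fragColor = textureGrad(uTexture, uv, dx, dy);
    }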

answered Sep 18 '22 by Nicol Bolas