I want to write a fragment shader for Three.js that needs a large array of 10000 integers. When I try to declare such an array in the shader's GLSL code:
uniform int colorGrid[10000];
then the shader renderer throws
ERROR: too many uniforms
What other choices do I have - how can I pass such a large chunk of data to the fragment shader?
Textures are large arrays. Passing integers in textures is a little harder but not impossible (for WebGL2 see below). Either you need to split your integer values across the red, green, blue, and alpha channels of a texture, OR you can use FLOAT textures, which will give you integer values up to 2^24.
To pack an integer into a texture you might do something like this:
// assumes unsigned ints
function setPixelFromInt(pixels, width, x, y, intValue) {
var r = (intValue >> 24) & 0xFF;
var g = (intValue >> 16) & 0xFF;
var b = (intValue >> 8) & 0xFF;
var a = (intValue >> 0) & 0xFF;
var offset = (y * width + x) * 4;
pixels[offset + 0] = r;
pixels[offset + 1] = g;
pixels[offset + 2] = b;
pixels[offset + 3] = a;
}
var width = 100;
var height = 100;
var pixels = new Uint8Array(width * height * 4);
...
To get your values back out in the shader do something like this:
uniform vec2 textureDimensions;
uniform sampler2D arrayTexture;
int getValueFromTexture(sampler2D arrayTexture, vec2 textureDimensions, int index) {
float x = mod(float(index), textureDimensions.x);
float y = floor(float(index) / textureDimensions.x);
vec2 uv = (vec2(x, y) + .5) / textureDimensions;
vec4 color = texture2D(arrayTexture, uv);
return int(color.r * 256.0 * 256.0 * 256.0 +
color.b * 256.0 * 256.0 +
color.g * 256.0 +
color.a);
}
Be sure to set filtering to gl.NEAREST.
Note: I didn't actually run that code but it illustrates the idea.
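To actually create such a texture in plain WebGL and set that filtering, an untested sketch might look like the following (assuming gl is an existing WebGLRenderingContext and pixels, width, and height come from the packing code above):
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// upload the packed RGBA bytes
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// no filtering and no mips, so texels are read back exactly as written
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);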
In WebGL2 you can have integer textures of 8, 16 or 32 bits, and in the shader there is the texelFetch function, which will pull out the value of a specific texel of a specific lod with no filtering. There is also a textureSize function so you don't have to manually pass the texture size in a uniform.
const int lod = 0;
ivec2 textureDimensions = textureSize(arrayTexture, lod);
int x = index % textureDimensions.x;
int y = index / textureDimensions.x;
ivec4 color = texelFetch(arrayTexture, ivec2(x,y), lod);
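On the JavaScript side, uploading 32-bit integer data for that kind of shader could look roughly like this. This is an untested sketch; gl is assumed to be a WebGL2RenderingContext, data is your flat array of integers, and the sampler in the shader snippet above would then be declared as isampler2D. Note that integer textures cannot be filtered, so NEAREST is required:
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// R32I: one 32-bit signed integer per texel, read with an isampler2D
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32I, width, height, 0,
              gl.RED_INTEGER, gl.INT, new Int32Array(data));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);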
@gman's answer is helpful, however some errors always lie in untested code ;). In getValueFromTexture he used rbga instead of rgba in the return statement. texelFetch returns a vec4, not an ivec4.
(I am using #version 300 es with WebGL2.)
What is missing from his answer is the THREE.js part, which is quite confusing and took me 4h+ to figure out.
When googling you might see usage of THREE.ImageUtils.generateDataTexture(), but it has been removed:
Create texture from Array THREE.js
What happened to THREE.ImageUtils?
I also made some changes in my version, because I need a 2D array in my shader (accessed with x and y).
TypeScript version of @gman's function:
export type uint = number;
/**
* Based on: https://stackoverflow.com/questions/31276114/three-js-large-array-of-int-as-uniform
*/
function packIntegers(width: uint, height: uint, unsignedIntegers: Readonly<uint[][]>): Uint8Array {
const pixels = new Uint8Array(width * height * 4);
// assumes unsigned ints
function setPixelFromInt(x: uint, y: uint, intValue: uint) {
const r = (intValue >> 24) & 0xFF;
const g = (intValue >> 16) & 0xFF;
const b = (intValue >> 8) & 0xFF;
const a = (intValue >> 0) & 0xFF;
const offset = (y * width + x) * 4;
// noinspection PointlessArithmeticExpressionJS
pixels[offset + 0] = r;
pixels[offset + 1] = g;
pixels[offset + 2] = b;
pixels[offset + 3] = a;
}
for (let x = 0; x < unsignedIntegers.length; x++) {
for (let y = 0; y < unsignedIntegers[x].length; y++) {
setPixelFromInt(x, y, unsignedIntegers[x][y]);
}
}
return pixels;
}
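A minimal usage sketch (the numbers are made up), packing a 3×2 grid indexed as unsignedIntegers[x][y]:
const pixels = packIntegers(3, 2, [
  [1, 2],
  [3, 4],
  [5, 6],
]);
// 3 * 2 texels, 4 bytes (RGBA) each
console.log(pixels.length); // 24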
Creating the THREE.DataTexture you will pass as a uniform (don't make the mistake I did of trying to pass the Uint8Array to a sampler2D uniform): https://threejs.org/docs/#api/en/textures/DataTexture
function getMandala2DInfo(pattern: TrianglePattern): THREE.DataTexture {
// Your Uint8Array made with `packIntegers` function. `getTextureArray` call `packIntegers` in my code.
const data: Uint8Array = getTextureArray(pattern.slice().map(e => e.slice().map(e => e-1)).reverse());
// should be the width in pixels of your texture
const width = pattern[0].length;
// should be the height in pixels of your texture
const height = pattern.length;
const texture = new THREE.DataTexture(
// a quick way to convert your Uint8Array, but it is better to start with a Float32Array
new Float32Array(Array.from(data)), // important
width,
height,
THREE.RGBAFormat,
// UnsignedByteType is the default... Refer to the doc (not the typings) to know what the defaults are.
THREE.FloatType, // important
);
return texture;
}
Add the uniform to your material.
const mat = new THREE.ShaderMaterial({
uniforms: {
// [...]
mandalaInfo2D: {
type: "t",
value: getMandala2DInfo(props.pattern)
}
} as any,
// [...]
});
You may update the uniform later on like this:
mat.uniforms.mandalaInfo2D.value = getMandala2DInfo(props.pattern);
The shader code; be sure to use the correct rgba order.
uniform sampler2D mandalaInfo2D;
const int lod = 0;
int getValueFromTexture(in sampler2D arrayTexture2, in ivec2 index)
{
vec4 color = texelFetch(arrayTexture2, index, lod);
return int(color.r * 256.0 * 256.0 * 256.0 +
color.g * 256.0 * 256.0 +
color.b * 256.0 +
color.a);
}
Now that you have the code, you might wonder why we are using a Float32Array. What gave me the answer I was looking for is https://dev.to/nicolasrannou/create-textures-from-data-in-threejs-5bap#what-is-confusing--me--
All the integer textures (including UnsignedByteType) are normalized automatically while uploaded to the shaders, whereas the floating/integral textures (including Float and HalfFloat) are passed as it is.
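To make the difference concrete, here is a small untested illustration (the values are made up):
// With the default UnsignedByteType, the byte 200 is normalized on upload,
// so the shader sees roughly 200.0 / 255.0 ≈ 0.784 in color.r.
const normalized = new THREE.DataTexture(new Uint8Array([200, 0, 0, 0]), 1, 1, THREE.RGBAFormat);
// With FloatType, the value 200.0 is passed through unchanged and color.r is 200.0.
const passedAsIs = new THREE.DataTexture(new Float32Array([200, 0, 0, 0]), 1, 1, THREE.RGBAFormat, THREE.FloatType);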
This is also important to read, but not enough to find the issue by itself: https://www.khronos.org/opengl/wiki/Sampler_(GLSL)
Texture coordinates may be normalized or in texel space. A normalized texture coordinate means that the size of the texture maps to the coordinates on the range [0, 1] in each dimension. This allows the texture coordinate to be independent of any particular texture's size. A texel-space texture coordinate means that the coordinates are on the range [0, size], where size is the size of the texture in that dimension.
Rectangle Textures always take texture coordinates in texel space. Unless otherwise noted, all other texture coordinates will be normalized.
The THREE.js doc is silent on this behavior: https://threejs.org/docs/#api/en/constants/Textures
For use with a texture's type property, which must correspond to the correct format. See below for details.
UnsignedByteType is the default.
Thumbs up to Nicolas on dev.to, because when you are still new to Three.js and GLSL, it is gruelling to understand what is going wrong (and where). (It makes me wonder if I shouldn't use something lighter than Three.js to render shaders.)
Edit: you might also see usage of texture.needsUpdate = true; when searching; I did not need it. I have also seen Prevent DataTexture value normalization in THREE; maybe something other than THREE.RGBAFormat in conjunction with usampler2D might also give a correct result (I haven't tested).
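For reference, that untested alternative might look roughly like this in a recent Three.js version (treat the constant names and the internalFormat string as assumptions; I have not verified this):
const intTexture = new THREE.DataTexture(
  new Uint32Array(Array.from(data)), // one unsigned integer per channel
  width,
  height,
  THREE.RGBAIntegerFormat,
  THREE.UnsignedIntType,
);
intTexture.internalFormat = "RGBA32UI"; // WebGL2 only
intTexture.needsUpdate = true;
// In the shader the sampler would then be a usampler2D and texelFetch would return a uvec4.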