Packing data in WebGL: float64/int64Arrays in Chrome

Tags: webkit, webgl
[Edit - one problem almost fixed, changed question to reflect it]

I'm working on a WebGL point cloud project in Chrome which displays millions of points at a time.

To make it more efficient, I've tried packing my data - the six xyz and rgb floats - into two 64-bit integers (xy and zrgb), planning to unpack them in the shader.

I'm developing in Chrome, and as far as I can tell, WebKit doesn't support any sort of 64-bit typed array... even in Canary. Also, as far as I can tell, Firefox does support 64-bit arrays, but I still get an error.

The problems occur with this line:

gl.bufferData(gl.ARRAY_BUFFER, new Float64Array(data.xy), gl.DYNAMIC_DRAW);

In Chrome I get "ArrayBufferView not a small enough positive integer"; in Firefox I get "invalid arguments".

So my questions are: is there any way to send 64-bit numbers to the shader, preferably in Chrome, or failing that, in Firefox?

Also, is packing data like this a good idea? Any tips?!

Thanks,

John

Asked Mar 06 '12 by Fridge

1 Answer

It's important to know that WebGL actually doesn't care in the slightest about the format of the TypedArray you provide it. No matter what you give it, it will be treated as an opaque binary buffer; what matters is how you set up your vertexAttribPointer calls. This allows for some highly convenient ways of shuffling data back and forth. For example, I regularly read a Uint8Array from a binary file and provide it as buffer data, but bind it as floats and ints.
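For instance, a minimal sketch of that pattern, assuming a hypothetical pointcloud.bin file laid out as the 16-byte interleaved vertices described further down, and that gl and the attribute locations already exist:

// Download raw bytes; WebGL never sees a "type" for this array.
// Only the attribute pointers below give the bytes meaning.
var xhr = new XMLHttpRequest();
xhr.open("GET", "pointcloud.bin", true); // hypothetical binary file
xhr.responseType = "arraybuffer";
xhr.onload = function() {
    var bytes = new Uint8Array(xhr.response);

    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, bytes, gl.STATIC_DRAW); // uploaded as opaque bytes

    // Reinterpret those same bytes as 3 floats + 4 unsigned bytes per vertex
    gl.vertexAttribPointer(attributes.aPosition, 3, gl.FLOAT, false, 16, 0);
    gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, true, 16, 12);
};
xhr.send();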

TypedArrays also have the wonderful ability to act as views onto the same underlying ArrayBuffer, which makes it easy to mix types within a single buffer (as long as you don't have alignment issues). In your specific case, I would propose doing something like this:

var floatBuffer = new Float32Array(verts.length * 4);   // 16 bytes per vertex: 3 position floats + 4 color bytes
var byteBuffer = new Uint8Array(floatBuffer.buffer);    // Byte view onto the same underlying ArrayBuffer

for (var i = 0; i < verts.length; ++i) {
    // Positions written as 32-bit floats
    floatBuffer[i * 4 + 0] = verts[i].x;
    floatBuffer[i * 4 + 1] = verts[i].y;
    floatBuffer[i * 4 + 2] = verts[i].z;

    // RGBA values expected as 0-255, written into the last 4 bytes of each vertex
    byteBuffer[i * 16 + 12] = verts[i].r;
    byteBuffer[i * 16 + 13] = verts[i].g;
    byteBuffer[i * 16 + 14] = verts[i].b;
    byteBuffer[i * 16 + 15] = verts[i].a;
}

var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, floatBuffer, gl.STATIC_DRAW);

That will upload a tightly packed, interleaved vertex buffer containing three 32-bit floats and one 32-bit (4 x 8-bit) color per vertex to the GPU. Not quite as small as your proposed pair of 64-bit ints, but the GPU will likely work with it better. When binding it for rendering later on, you would do so like this:

gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(attributes.aPosition, 3, gl.FLOAT, false, 16, 0);
gl.vertexAttribPointer(attributes.aColor, 4, gl.UNSIGNED_BYTE, true, 16, 12); // normalized, so 0-255 bytes arrive as 0.0-1.0
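For completeness, here's a rough sketch of the enable and draw calls that would surround that; attributes is assumed to hold the locations returned by gl.getAttribLocation, and since you're rendering a point cloud the primitive is gl.POINTS:

gl.enableVertexAttribArray(attributes.aPosition);
gl.enableVertexAttribArray(attributes.aColor);

// One point per 16-byte interleaved vertex
gl.drawArrays(gl.POINTS, 0, verts.length);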

With the corresponding shader code looking like this:

attribute vec3 aPosition;
attribute vec4 aColor;

varying vec4 vColor;

void main() {
    // Manipulate the position and color as needed
    vColor = aColor;
    gl_PointSize = 1.0;
    gl_Position = vec4(aPosition, 1.0);
}

This way you get the benefits of using interleaved arrays, which the GPU likes to work with, and you only have to track a single buffer (bonus!). Plus you're not wasting space by using a full float for each color component. If you REALLY want to get small, you could use shorts instead of floats for the positions (a rough sketch of that idea follows below), but my experience in the past has shown that desktop GPUs aren't terribly fast when using short attributes.
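If you do try shorts, a rough, untested sketch of the idea for the positions only, assuming a hypothetical bounds object describing the cloud's extents so positions can be quantized and later rescaled in the shader:

// Quantize positions into signed 16-bit ints; with normalized = true the
// shader receives them as roughly -1.0..1.0 and can rescale by a bounds uniform.
var shortBuffer = new Int16Array(verts.length * 4); // x, y, z + 1 short of padding per vertex
for (var i = 0; i < verts.length; ++i) {
    shortBuffer[i * 4 + 0] = Math.round((verts[i].x / bounds.x) * 32767);
    shortBuffer[i * 4 + 1] = Math.round((verts[i].y / bounds.y) * 32767);
    shortBuffer[i * 4 + 2] = Math.round((verts[i].z / bounds.z) * 32767);
}

gl.bufferData(gl.ARRAY_BUFFER, shortBuffer, gl.STATIC_DRAW);
// 8-byte stride keeps each vertex 4-byte aligned
gl.vertexAttribPointer(attributes.aPosition, 3, gl.SHORT, true, 8, 0);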

Hopefully that helps!

Answered Nov 08 '22 by Toji