
Uploading large images to the GPU in webgl

How can I upload large images to the GPU using WebGL without freezing up the browser (think of high-res skyboxes or texture atlases)?

At first I looked for a way to make texImage2D do its thing asynchronously (uploading images to the GPU is IO-ish, right?), but I couldn't find one. I then tried using texSubImage2D to upload small chunks that each fit in a 16 ms time window (I'm aiming for 60 fps). But texSubImage2D only takes an offset AND width/height if you pass in an ArrayBufferView - when passing in Image objects you can only specify the offset and it will (I'm guessing) upload the whole image. I imagine painting the image to a canvas first (to get it as a buffer) is just as slow as uploading the whole thing to the GPU.
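If the pixel data were already available as a typed array, the chunked upload I have in mind would look roughly like this (just a sketch; pixels, width and height are placeholders for data I don't actually have cheaply):

// allocate the full-size texture once, with no data
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

// then upload one horizontal strip per frame
var rowsPerChunk = 128; // tuned so one chunk stays well under 16 ms
var nextRow = 0;
function uploadChunk() {
    var rows = Math.min(rowsPerChunk, height - nextRow);
    var strip = pixels.subarray(nextRow * width * 4, (nextRow + rows) * width * 4);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, nextRow, width, rows,
                     gl.RGBA, gl.UNSIGNED_BYTE, strip);
    nextRow += rows;
    if (nextRow < height) {
        requestAnimationFrame(uploadChunk);
    }
}
requestAnimationFrame(uploadChunk);

The problem is getting those pixels in the first place without a slow canvas round-trip.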

Here's a minimal example of what I mean: http://jsfiddle.net/2v63f/3/. It takes ~130 ms to upload this image to the GPU.

Exact same code as on jsfiddle:

var canvas = document.getElementById('can');
var gl = canvas.getContext('webgl');

var image = new Image();
image.crossOrigin = "anonymous";
//image.src = 'http://i.imgur.com/9Tq28Qj.jpg?1';
image.src = 'http://i.imgur.com/G0qL97y.jpg';
image.addEventListener('load', function () {
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);

    var now = performance.now();
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    console.log(performance.now() - now);
});
asked Aug 03 '14 by adrianton3

1 Answer

The referenced image appears to be 3840x1920.

For a high-res skybox you can usually discard the alpha channel, and then decide whether some other pixel format can provide a justifiable trade-off between quality and data size.

The specified RGBA encoding means this will require a 29,491,200 byte data transfer after the image is decoded. I attempted a test with RGB, discarding the alpha, but saw a similar ~111 ms transfer time. Assuming you can pre-process the images before use, and care only about the data-transfer time to the GPU, you can perform some form of lossy encoding or compression on the data to decrease the amount of data you need to transfer.

One of the more trivial encodings would be to halve your data size by sending the data to the chip as RGB565. This decreases your data size to 14,745,600 bytes at the cost of color range.

var buff = new Uint16Array(image.width * image.height);
// encode the image into this buffer as RGB565 (see the sketch below)
// then, at texture load time, upload it with:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, image.width, image.height, 0, gl.RGB, gl.UNSIGNED_SHORT_5_6_5, buff);
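One possible way to fill that buffer (my own sketch, not part of the original answer) is to read the pixels back through a 2D canvas once and pack them into 5-6-5 bits; the readback itself isn't free, so it should happen as a pre-processing step rather than at upload time:

var scratch = document.createElement('canvas');
scratch.width = image.width;
scratch.height = image.height;
var ctx = scratch.getContext('2d');
ctx.drawImage(image, 0, 0);
var rgba = ctx.getImageData(0, 0, image.width, image.height).data;

var buff = new Uint16Array(image.width * image.height);
for (var i = 0; i < buff.length; i++) {
    var r = rgba[i * 4]     >> 3; // keep the top 5 bits of red
    var g = rgba[i * 4 + 1] >> 2; // keep the top 6 bits of green
    var b = rgba[i * 4 + 2] >> 3; // keep the top 5 bits of blue
    buff[i] = (r << 11) | (g << 5) | b;
}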

Assuming you can confirm support for the S3TC extension (https://developer.mozilla.org/en-US/docs/Web/WebGL/Using_Extensions#WEBGL_compressed_texture_s3tc), you could also store and download the texture in DXT1 format and decrease the memory transfer requirement down to 3,686,400 bytes. It appears any form of S3TC will result in the same RGB565 color-range reduction.

var ext = (
  gl.getExtension("WEBGL_compressed_texture_s3tc") ||
  gl.getExtension("MOZ_WEBGL_compressed_texture_s3tc") ||
  gl.getExtension("WEBKIT_WEBGL_compressed_texture_s3tc")
);

// buff here must hold DXT1-compressed block data (width * height / 2 bytes for
// COMPRESSED_RGB_S3TC_DXT1_EXT), not the RGB565 buffer from the previous example
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT, image.width, image.height, 0, buff);

Most block compression formats trade image quality up close for richer imagery at a distance. Should you need high resolution up close, a lower-resolution or compressed texture could be loaded initially to serve as a poor-LOD distant view, and a smaller subset of the higher resolution could be loaded when approaching the texture in question.
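A rough sketch of that progressive scheme (my own illustration; lowResImage and the tile variables are assumptions, not from the original answer):

// start with a small, cheap-to-upload placeholder texture
var lowResTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, lowResTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, lowResImage);

// when the camera gets close, upload only the relevant high-resolution tile
// into a second texture and switch the material over to it (the draw call then
// needs its texture coordinates remapped to the tile)
var detailTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, detailTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, tileWidth, tileHeight, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, tilePixels);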

Within my trivial tests at http://jsfiddle.net/zy8hkby3/2/ , the reductions in texture payload size cause sharply decreasing data-transfer times, but this is most likely GPU dependent.

answered Oct 22 '22 by phoenixillusion