THREE.JS - How To Detect Device Performance/Capability & Serve Scene Elements Accordingly

Tags:

three.js

I'd like to be able to implement conditionals within the scene setup to serve different meshes and materials, or lower-poly model imports. That aspect is simple, but what is the best or most efficient (best-practice) method of detecting a system's capability for rendering three.js scenes?

For reference: an answer to this question ( How to check client perfomance for webgl(three.js) ) suggests plugins, which, as stated there, check performance after scene creation rather than before.

Additionally, there is a nice method here ( Using javascript to detect device CPU/GPU performance? ) that involves measuring the speed of the render loop as a way of detecting performance, but is this the best solution we can come up with?
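
To illustrate what I mean by measuring the render loop, here is a rough sketch (the thresholds and sample count are just values I've made up for illustration):

// Sample frame times over a short warm-up period and pick a quality tier.
// The thresholds and sample count below are arbitrary placeholders.
function estimatePerformance(samples = 60) {
  return new Promise((resolve) => {
    const times = [];
    let last = performance.now();

    function frame(now) {
      times.push(now - last);
      last = now;
      if (times.length < samples) {
        requestAnimationFrame(frame);
      } else {
        // The median frame time is less noisy than the mean
        times.sort((a, b) => a - b);
        const median = times[Math.floor(times.length / 2)];
        resolve(median < 20 ? 'high' : median < 40 ? 'medium' : 'low');
      }
    }
    requestAnimationFrame(frame);
  });
}

// Usage: pick assets before building the full scene
estimatePerformance().then((tier) => {
  console.log('Estimated tier:', tier);
  // e.g. load high- or low-poly models based on `tier`
});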

Thanks as always!

asked Dec 03 '18 by Cult Digital

1 Answer

Browsers don't expose a lot of information about the hardware they're running on, so it's difficult to determine how capable a device is ahead of time. With the WEBGL_debug_renderer_info extension you can (maybe) get more detail about the graphics hardware being used, but the values returned don't seem consistent and there's no guarantee that the extension will be available. Here's an example of the output:

ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)
ANGLE (NVIDIA GeForce GTX 770 Direct3D11 vs_5_0 ps_5_0)
Intel(R) HD Graphics 6000
AMD Radeon Pro 460 OpenGL Engine
ANGLE (Intel(R) HD Graphics 4600 Direct3D11 vs_5_0 ps_5_0)

I've created this gist that extracts and roughly parses that information:

function extractValue(reg, str) {
    const matches = str.match(reg);
    return matches && matches[0];
}

// WebGL context setup
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');

// The debug extension may be unavailable, so guard against null
const debugInfo = gl && gl.getExtension('WEBGL_debug_renderer_info');

const vendor = debugInfo ? gl.getParameter(debugInfo.UNMASKED_VENDOR_WEBGL) : '';
const renderer = debugInfo ? gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) : '';

// Full card description and the WebGL layer (e.g. ANGLE), if present
const layer = extractValue(/(ANGLE)/g, renderer);
const card = extractValue(/((NVIDIA|AMD|Intel)[^\d]*[^\s]+)/, renderer) || '';

const tokens = card.split(' ');
tokens.shift();

// Split the card description up into pieces
// with brand, manufacturer, and card version
const manufacturer = extractValue(/(NVIDIA|AMD|Intel)/g, card);
const cardVersion = tokens.pop();
const brand = tokens.join(' ');
const integrated = manufacturer === 'Intel';

console.log({
  card,
  manufacturer,
  cardVersion,
  brand,
  integrated,
  vendor,
  renderer
});

Using that information (if it's available), along with other GL context information (like max texture size, available shader precision, etc.) and other device information available through platform.js, you might be able to develop a guess as to how powerful the current platform is.
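
For example, here's a rough sketch of pulling a few of those context capabilities (it reuses the gl context from the snippet above; which values you weigh, and how, is up to you):

// Query a few WebGL capability values that can feed into the heuristic
if (gl) {
  const caps = {
    maxTextureSize: gl.getParameter(gl.MAX_TEXTURE_SIZE),
    maxCubeMapSize: gl.getParameter(gl.MAX_CUBE_MAP_TEXTURE_SIZE),
    maxVertexUniforms: gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS),
    maxFragmentUniforms: gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS),
    // Whether highp floats are supported in fragment shaders
    highpFragmentShaders:
      gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT).precision > 0
  };

  console.log(caps);
}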

I was looking into this exact problem not too long ago, but ultimately it seemed difficult to make a good guess with so many different factors at play. Instead I opted to build a package that iteratively modifies the scene to improve performance, which could include loading or swapping out model levels of detail.
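
As a rough illustration of that idea (not the package itself, just a hypothetical sketch): measure recent frame times and step a quality level up or down whenever they stay outside a target range.

// Hypothetical sketch: step a quality level up/down based on recent frame times.
// `applyQuality` is a placeholder for whatever actually changes the scene
// (swapping LODs, resizing render targets, toggling shadows, etc.).
const targetFrameTime = 1000 / 60; // aim for ~60 fps
let quality = 3;                   // e.g. 0 (lowest) to 5 (highest)
let lastTime = performance.now();
const recent = [];

function applyQuality(level) {
  console.log('Switching to quality level', level);
  // e.g. traverse the scene and swap meshes/materials for this level
}

function tick(now) {
  recent.push(now - lastTime);
  lastTime = now;

  // Re-evaluate every 60 frames or so
  if (recent.length >= 60) {
    const avg = recent.reduce((a, b) => a + b, 0) / recent.length;
    recent.length = 0;

    if (avg > targetFrameTime * 1.2 && quality > 0) {
      applyQuality(--quality);      // too slow: drop quality
    } else if (avg < targetFrameTime * 0.8 && quality < 5) {
      applyQuality(++quality);      // plenty of headroom: raise quality
    }
  }

  // renderer.render(scene, camera); // your normal render call goes here
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);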

Hope that helps at least a little!

answered Nov 08 '22 by Garrett Johnson