I believe "instancing" as described here provides a way to have one attribute value applied across all the vertices/indices of, for example, a 200-vertex model:
http://blog.tojicode.com/2013/07/webgl-instancing-with.html
In other words, this gives a way to have just one translation or orientation value per model, applied to all 200 of its vertices. Thus "instancing" a scene of 10K of these models would require only 10K such values, not 2,000K.
Apparently Three's InstancedBufferGeometry and InstancedBufferAttribute objects provide this, but I haven't found documentation beyond sparse descriptions of the objects. I believe they require a ShaderMaterial, which is fine, although there may be a way to do it in "vanilla" Three.js without writing GLSL.
Could someone explain how they work and how to use them in Three.js?
From the Three.js documentation for BufferGeometry: "This class is an efficient alternative to Geometry, because it stores all data, including vertex positions, face indices, normals, colors, UVs, and custom attributes within buffers; this reduces the cost of passing all this data to the GPU."
I stumbled upon your question while seeking the answer myself. The official examples at threejs.org/examples include several that use instancing.
A brief explanation:
The main difference between THREE.InstancedBufferGeometry and THREE.BufferGeometry is that the former can use special attributes (THREE.InstancedBufferAttribute) which are applied per instance.
Imagine you're creating a box of which you want several instances. The vertex, normal, and UV buffers would all be standard THREE.BufferAttribute objects, because they describe the base shape. But in order to move each instance to its own position, you need to define a THREE.InstancedBufferAttribute to hold the locations (the examples usually name this attribute "offset").
The number of values in your THREE.InstancedBufferAttribute determines how many instances you'll have. For example, putting 9 values in offset with an item size of 3 indicates there will be 3 instances (this count includes the original shape). You can also limit how many of these are drawn by setting the THREE.InstancedBufferGeometry.maxInstancedCount value.
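As a sanity check on that arithmetic, the implied instance count is just the attribute's length divided by its item size. A tiny illustrative helper (impliedInstanceCount is hypothetical, not part of Three.js):

```javascript
// Hypothetical helper: derive the instance count implied by a flat
// per-instance attribute array, mirroring how InstancedBufferAttribute
// interprets its data.
function impliedInstanceCount(values, itemSize) {
  if (values.length % itemSize !== 0) {
    throw new Error("array length must be a multiple of itemSize");
  }
  return values.length / itemSize;
}

// 9 values with itemSize 3 -> 3 instances, as described above.
console.log(impliedInstanceCount([0, 0, 0, 10, 0, 0, 20, 0, 0], 3)); // 3
```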
Finally, you will need a shader to help control the instanced attributes.
Small Example:
var cubeGeo = new THREE.InstancedBufferGeometry().copy(new THREE.BoxBufferGeometry(10, 10, 10));
//cubeGeo.maxInstancedCount = 8;
cubeGeo.addAttribute("cubePos", new THREE.InstancedBufferAttribute(new Float32Array([
     25,  25,  25,
     25,  25, -25,
    -25,  25,  25,
    -25,  25, -25,
     25, -25,  25,
     25, -25, -25,
    -25, -25,  25,
    -25, -25, -25
]), 3, 1));
var vertexShader = [
"precision highp float;",
"",
"uniform mat4 modelViewMatrix;",
"uniform mat4 projectionMatrix;",
"",
"attribute vec3 position;",
"attribute vec3 cubePos;",
"",
"void main() {",
"",
" gl_Position = projectionMatrix * modelViewMatrix * vec4( cubePos + position, 1.0 );",
"",
"}"
].join("\n");
var fragmentShader = [
"precision highp float;",
"",
"void main() {",
"",
" gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);",
"",
"}"
].join("\n");
var mat = new THREE.RawShaderMaterial({
uniforms: {},
vertexShader: vertexShader,
fragmentShader: fragmentShader,
side: THREE.DoubleSide,
transparent: false
});
var mesh = new THREE.Mesh(cubeGeo, mat);
scene.add(mesh);
html * {
padding: 0;
margin: 0;
width: 100%;
overflow: hidden;
}
#host {
width: 100%;
height: 100%;
}
<script src="https://threejs.org/build/three.js"></script>
<script src="https://threejs.org/examples/js/controls/TrackballControls.js"></script>
<script src="https://threejs.org/examples/js/libs/stats.min.js"></script>
<div id="host"></div>
<script>
var WIDTH = window.innerWidth,
HEIGHT = window.innerHeight,
FOV = 35,
NEAR = 1,
FAR = 1000;
var renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setSize(WIDTH, HEIGHT);
document.getElementById('host').appendChild(renderer.domElement);
var stats = new Stats();
stats.domElement.style.position = 'absolute';
stats.domElement.style.top = '0';
document.body.appendChild(stats.domElement);
var camera = new THREE.PerspectiveCamera(FOV, WIDTH / HEIGHT, NEAR, FAR);
camera.position.z = 250;
var trackballControl = new THREE.TrackballControls(camera, renderer.domElement);
trackballControl.rotateSpeed = 5.0; // need to speed it up a little
var scene = new THREE.Scene();
var light = new THREE.PointLight(0xffffff, 1, Infinity);
camera.add(light); // attach the light to the camera...
scene.add(camera); // ...and add the camera to the scene so the light follows it
function render() {
if (typeof updateVertices !== "undefined") {
updateVertices();
}
renderer.render(scene, camera);
stats.update();
}
function animate() {
requestAnimationFrame(animate);
trackballControl.update();
render();
}
animate();
</script>
The question is a bit confusing, but I'll try my best.
I think you are confusing some things, and I see this a lot among three.js users. First of all, I think the attribute terminology is wrong. You are not creating thousands upon thousands of attributes; rather, an entity such as a mesh may have one attribute (position), or several (uv, normal, aMyAttribute, etc.).
In fact, there is a maximum number of attributes WebGL can address; it varies, but the ballpark is 16, not thousands.
An attribute can contain data defining 200 "vertices", but this too is relative: each value could have 2 components, or could have 4.
When you make multiple "objects" out of your "200 vertex model", you're not multiplying your geometry. The attribute count doesn't increase (I'm actually not sure what happens with the uniforms), and as far as the GPU is concerned, it still holds the 200 vertices. JavaScript still contains some number of attribute and uniform locations, and one instance of some Geometry class.
What you do multiply are nodes/objects: in JavaScript you will have multiple, say, Mesh objects. These hold various other types: matrices, vectors, quaternions, etc.
When you call the render function, the renderer issues a draw call for each of these nodes. Each time it does so, it needs to set WebGL's state for that particular draw call. If this is the same object scattered around your scene, the only things that differ are the uniforms for position/rotation/scale. If they have different materials, it could also be a uniform for a color, or a different texture. Either way, this produces overhead and slows things down.
Let's say your model is a tree. Creating a forest out of it and rendering one forest, instead of many trees, would eliminate this overhead. There's still the same amount of shader processing going on: each vertex of each tree still needs to have the shader run on it.
This would, of course, increase the data your attributes hold. Instead of holding 200 vertices in some convenient object space (better call it "tree space"), they need to hold 200 x N vertices in "forest space". That is, a vertex of tree 0 exists somewhere in the forest, and the same vertex of tree N exists somewhere else in the forest. This is what would happen if you were to create a new geometry for each tree, bake the transformation into its vertices, merge it with another, and so on.
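The "bake and merge" approach can be sketched on plain arrays: a copy of the base vertices is translated for every tree, so the merged buffer grows to 200 x N vertices. The helper name bakeTranslations is illustrative, not a Three.js API:

```javascript
// Naive merging: duplicate the base vertices once per instance, with
// each instance's translation baked directly into the copied positions.
function bakeTranslations(baseVertices, translations) {
  // baseVertices: flat [x, y, z, ...]; translations: array of [tx, ty, tz]
  const merged = new Float32Array(baseVertices.length * translations.length);
  translations.forEach(function (t, n) {
    for (let i = 0; i < baseVertices.length; i += 3) {
      merged[n * baseVertices.length + i]     = baseVertices[i]     + t[0];
      merged[n * baseVertices.length + i + 1] = baseVertices[i + 1] + t[1];
      merged[n * baseVertices.length + i + 2] = baseVertices[i + 2] + t[2];
    }
  });
  return merged;
}

// Two instances of a 2-vertex "model": the buffer doubles in size.
const merged = bakeTranslations([0, 0, 0, 1, 0, 0], [[0, 0, 0], [10, 0, 0]]);
console.log(merged.length); // 12
```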
Instancing allows you to be smarter about this case. Instead of holding all those individual vertices that share a common property (it's the same tree), you can hold the 200 vertices of the original tree, plus an attribute describing where they are to be drawn N times. So, rather than merging geometries and baking transformations into them individually, you construct an attribute containing just the transformations.
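With instancing, only that small per-instance attribute grows with N; the tree geometry itself stays at 200 vertices. A sketch of building such an attribute for an N x N forest grid (makeForestOffsets and the "offset" name are illustrative choices, following the naming used in the examples):

```javascript
// Build one flat Float32Array of per-instance translations for an
// n x n grid of trees, spaced `spacing` units apart on the ground plane.
function makeForestOffsets(n, spacing) {
  const offsets = new Float32Array(n * n * 3);
  let i = 0;
  for (let x = 0; x < n; x++) {
    for (let z = 0; z < n; z++) {
      offsets[i++] = x * spacing; // x position of this tree
      offsets[i++] = 0;           // all trees sit on the ground plane
      offsets[i++] = z * spacing; // z position of this tree
    }
  }
  return offsets;
}

// The base tree geometry stays untouched; only this attribute scales with N:
// treeGeo.addAttribute("offset",
//     new THREE.InstancedBufferAttribute(makeForestOffsets(100, 8), 3, 1));
```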
You most likely can't use this with the vanilla materials, because they won't know what to do with your custom attribute. However, given how three.js handles shaders, it's not too hard to inject some logic and extend the existing materials.
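One way to do that injection is Material.onBeforeCompile, a real Three.js hook that lets you patch a built-in material's shader source before compilation. The patch itself is plain string surgery on standard shader chunks, so it can be sketched as a standalone function (injectInstanceOffset is an illustrative name; it assumes the geometry carries an "offset" InstancedBufferAttribute as described above):

```javascript
// Insert the per-instance offset into a built-in material's vertex
// shader: declare the attribute after <common>, then shift the vertex
// position right after <begin_vertex>, which defines `transformed`.
function injectInstanceOffset(vertexShader) {
  return vertexShader
    .replace(
      "#include <common>",
      "#include <common>\nattribute vec3 offset;"
    )
    .replace(
      "#include <begin_vertex>",
      "#include <begin_vertex>\ntransformed += offset;"
    );
}

// Hooking it up on, say, a MeshPhongMaterial:
// material.onBeforeCompile = function (shader) {
//   shader.vertexShader = injectInstanceOffset(shader.vertexShader);
// };
```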