I'm currently creating a 3D tiled hex board in Three.js. For artistic and functional reasons, the tiles are each their own mesh, composed of a basic (unchanging) geometry and a generated set of maps in its material: Displacement, Diffuse, Normal.
I started noticing a reduction in FPS as I added more texture maps, which prompted me to look into the cause. I have a 15x15 game board, meaning there are 225 individual meshes being rendered every frame. Each mesh, at the time, had 215 faces due to poor design, resulting in 48,375 faces in the scene.
Thinking it would cure the performance troubles, I redesigned the mesh to contain only 30 faces, totaling 6,750 faces across the scene, an astounding reduction. I was disappointed to find that an 86% reduction in faces had almost no effect on performance.
So, I resolved to find exactly what was causing the drop in performance. I set up an abstracted test environment and used a grid of planes, each segmented 3x10 (to give them 30 faces, just like my own model). I tried different grid sizes (mesh counts) and applied materials of differing complexity. Here's what I found (a rough sketch of the test setup follows the table):
// /---------------------------------------------------\
// |       Material      |  15x15  |  20x20  |  25x25  |
// |---------------------|---------|---------|---------|------\
// |  Flat Lambert Color |  60FPS  |  48FPS  |  30FPS  | -00% |
// |   Lambert Diffuse   |  57FPS  |  41FPS  |  27FPS  | -10% |
// |     Blank Shader    |  51FPS  |  37FPS  |  24FPS  | -20% |
// |   Full Shader (-H)  |  49FPS  |  32FPS  |  21FPS  | -30% |
// |   Full Shader (+H)  |  42FPS  |  28FPS  |  19FPS  | -37% |
// \----------------------------------------------------------/
//                       |  -00%   |  -33%   |  -55%   |
//                       \-----------------------------/
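A minimal sketch of that kind of test setup, assuming a recent three.js build; the module paths, constants, and the Stats.js FPS counter are illustrative rather than the exact code I used:

// Sketch of the test grid: one plane mesh per cell, 3x10 segments each.
// Stats.js is used here purely for the FPS readout.
import * as THREE from 'three';
import Stats from 'three/examples/jsm/libs/stats.module.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 40;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const stats = new Stats();
document.body.appendChild(stats.dom);

const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(0, 0, 1);
scene.add(light);

const GRID = 15; // 15x15, 20x20, 25x25, ...
const geometry = new THREE.PlaneGeometry(1, 1, 3, 10);                // 3x10 segments per plane
const material = new THREE.MeshLambertMaterial({ color: 0xff0000 });  // swapped out per test run

for (let x = 0; x < GRID; x++) {
  for (let y = 0; y < GRID; y++) {
    const mesh = new THREE.Mesh(geometry, material); // one mesh (one draw call) per tile
    mesh.position.set(x - GRID / 2, y - GRID / 2, 0);
    scene.add(mesh);
  }
}

renderer.setAnimationLoop(() => {
  renderer.render(scene, camera);
  stats.update(); // FPS readout
});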
- MeshLambertMaterial({color}) was my baseline
- MeshLambertMaterial({map}) suffered roughly a 10% performance hit
- ShaderMaterial() using default settings suffered roughly a 20% performance hit
- ShaderMaterial() using a Diffuse map suffered roughly a 30% performance hit
- ShaderMaterial() using Diffuse + Normal + Displacement maps suffered a 37% performance hit
So I learned that there was a significant hit coming from the shaders I'm using and the maps I'm applying. However, there's a much larger hit coming from the sheer number of "things". I wasn't sure whether that meant faces, meshes, or something else, so I ran another test. Using my baseline material (MeshLambertMaterial({ color: red })), I decided to test two variables: the number of faces and the number of meshes. Here's what I found:
// 15x15 (225) Meshes @ 30 Faces = 6,750 Faces = 60 FPS
// 20x20 (400) Meshes @ 30 Faces = 12,000 Faces = 48 FPS
// 25x25 (625) Meshes @ 30 Faces = 18,750 Faces = 30 FPS
// 30x30 (900) Meshes @ 30 Faces = 27,000 Faces = 25 FPS
// 40x40 (1600) Meshes @ 30 Faces = 48,000 Faces = 15 FPS
// 50x50 (2500) Meshes @ 30 Faces = 75,000 Faces = 10 FPS
// 15x15 (225) Meshes @ 100 Faces = 22,500 Faces = 60 FPS
// 15x15 (225) Meshes @ 400 Faces = 90,000 Faces = 60 FPS
// 15x15 (225) Meshes @ 900 Faces = 202,500 Faces = 60 FPS
This seems to show quite conclusively that the quantity of faces involved does not affect the frame rate much, if at all. Rather, the number of individual meshes being drawn to the scene creates virtually all of the performance drag. I'm not sure exactly what causes this lag; I would imagine there is a large amount of overhead per mesh. Perhaps there is a way to eliminate some of this overhead?
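As a side note, the per-mesh cost shows up directly in the counters three.js exposes on the renderer. A small sketch layered on top of an existing render loop (the property names vary slightly between versions, e.g. render.faces in older builds vs. render.triangles in newer ones):

// Log per-frame draw calls vs. triangles roughly once a second.
let lastLog = 0;
renderer.setAnimationLoop((time) => {
  renderer.render(scene, camera);
  if (time - lastLog > 1000) {
    const info = renderer.info.render;
    console.log('draw calls:', info.calls, 'triangles:', info.triangles ?? info.faces);
    lastLog = time;
  }
});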
I have already considered merging my geometries. This does almost completely eliminate the drop in frame rate. However, as I stated at the beginning of this post, I need each tile to be individually translatable, rotatable, scalable and otherwise modifiable. To my knowledge, this is not possible with merged geometries.
I have also considered defaulting to a merged geometry and recreating the geometries/scene whenever a function that alters a tile is called, but that approach has problems of its own. I hope to find a solution that eliminates this performance hit rather than one that merely avoids it.
Which brings me to my question: Is there a more efficient way to render high quantities of individual meshes?
I have already considered merging my geometries. This does almost completely eliminate the drop in frame rate. However, as I stated at the beginning of this post, I need each tile to be individually translatable, rotatable, scalable and otherwise modifiable. To my knowledge, this is not possible with merged geometries.
Sure it is. Add a vertex attribute which is an integer identifying the tile a vertex belongs to. Then, you can move tiles individually according to anything you can compute in your vertex shader.
If you need individual data for each tile, such as a transform, you can load it into a texture and use the tile index to lookup values from the texture — you could even arrange so that the texture looks like a (skewed) copy of your hex grid, for easy debugging!
For things like a “shake” effect, you don't even need a texture; just add a uniform variable giving the current time, and compute the shake in a way that's modified by the tile index.
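Here is a minimal sketch of that idea, assuming a reasonably recent three.js build (where BufferGeometryUtils exports mergeGeometries; older builds name it mergeBufferGeometries). baseTileGeometry, the texture layout, and setTileOffset are placeholder names, not part of any existing API:

// Merge all tiles into one mesh, tag each vertex with its tile index, and let the
// vertex shader fetch that tile's offset from a data texture (one RGBA texel per tile).
import * as THREE from 'three';
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

const TILES = 225; // 15x15 board
const tileGeometries = [];
for (let i = 0; i < TILES; i++) {
  const g = baseTileGeometry.clone(); // placeholder: your 30-face hex tile geometry
  // g.translate(...) to lay the tile out at its board position here
  const count = g.attributes.position.count;
  g.setAttribute('tileIndex', new THREE.BufferAttribute(new Float32Array(count).fill(i), 1));
  tileGeometries.push(g);
}
const merged = mergeGeometries(tileGeometries);

// Per-tile data (here just an xyz offset) packed into a float texture.
const tileData = new Float32Array(TILES * 4); // RGBA per tile
const tileTexture = new THREE.DataTexture(tileData, TILES, 1, THREE.RGBAFormat, THREE.FloatType);
tileTexture.needsUpdate = true;

const material = new THREE.ShaderMaterial({
  uniforms: { tileTexture: { value: tileTexture }, time: { value: 0 } },
  vertexShader: `
    attribute float tileIndex;
    uniform sampler2D tileTexture;
    uniform float time;
    void main() {
      // Look up this tile's offset by its index.
      vec3 offset = texture2D(tileTexture, vec2((tileIndex + 0.5) / ${TILES}.0, 0.5)).xyz;
      // A per-tile "shake" needs no texture at all: just the time uniform and the index.
      offset.y += 0.05 * sin(time * 10.0 + tileIndex);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
    }
  `,
  fragmentShader: 'void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }'
});

scene.add(new THREE.Mesh(merged, material)); // one draw call for the whole board

// Moving a single tile on the CPU side is then just a texture write:
function setTileOffset(i, x, y, z) {
  tileData.set([x, y, z, 0], i * 4);
  tileTexture.needsUpdate = true;
}

With everything in one mesh the whole board becomes a single draw call, and per-tile motion costs only a small texture upload (or nothing at all in the time-uniform case, where you just update material.uniforms.time.value each frame).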
I started noticing a reduction in FPS as I added more texture maps...
In general, you want to minimize the state changes for rendering. Changing things like textures or shaders requires sending new information to the GPU. That's not a cheap operation.
A "simple" thing you can try is to render your meshes sorted by material. I use "simple" because if you render your meshes in a tree traversal, you'll have to restructure your rendering code to render by material instead.
See this post from Christer Ericson for various other ways you can optimize your rendering to minimize state changes.