
Exporting Three.js scene to STL keeping animations intact

I have a Three.js scene rendered, and I would like to export it as it looks after the animation has played. For example, after the animation has advanced ~100 frames, the user hits export and the scene should be exported to STL just as it appears at that moment.

From what I've tried (using STLExporter.js, that is), it seems to export the model using the initial positions only.

If there's already a way to do this, or a straightforward workaround, I would appreciate a nudge in that direction.

Update: After a bit more digging into the internals, I've figured out (at least superficially) why STLExporter did not work. STLExporter finds all objects and asks them for the vertices and faces of their Geometry object. My model has a bunch of bones that are skinned. During the animation step, the bones get updated, but these updates do not propagate to the original Geometry object. I know the transformed vertices are being calculated and exist somewhere (they get displayed on the canvas).

The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?

asked Dec 31 '14 by KevinL

2 Answers

The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?

The answer to this, unfortunately, is nowhere. These are all computed on the GPU through calls to WebGL functions by passing in several large arrays.

To explain how to calculate this, let's first review how animation works, using this knight example for reference. The SkinnedMesh object contains, among other things, a skeleton (made of many Bones) and a bunch of vertices. They start out arranged in what's known as a bind pose. Each vertex is bound to 0-4 bones, and if those bones move, the vertices move with them, creating animation.
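To make the binding concrete, here is a minimal sketch (assuming a classic Geometry-based SkinnedMesh like the knight's; the helper name is made up) that prints which bones influence a given vertex and by how much:

// Sketch only: assumes `mesh` is a THREE.SkinnedMesh built on a classic THREE.Geometry,
// where skinIndices and skinWeights are per-vertex Vector4s stored on the geometry.
function logVertexBinding(mesh, vertexIndex) {
    var indices = mesh.geometry.skinIndices[vertexIndex];  // up to 4 bone indices
    var weights = mesh.geometry.skinWeights[vertexIndex];  // matching influence weights
    [indices.x, indices.y, indices.z, indices.w].forEach(function (boneIndex, k) {
        var weight = [weights.x, weights.y, weights.z, weights.w][k];
        if (weight > 0) {
            console.log('vertex', vertexIndex, 'follows bone',
                mesh.skeleton.bones[boneIndex].name, 'with weight', weight);
        }
    });
}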

[Image: the knight model in its bind pose]

If you were to take our knight example, pause the animation mid-swing, and try the standard STL exporter, the STL file generated would be exactly this pose, not the animated one. Why? Because it simply looks at mesh.geometry.vertices, which are not changed from the original bind pose during animation. Only the bones experience change and the GPU does some math to move the vertices corresponding to each bone.
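You can check this for yourself; a minimal sketch, assuming an animation loop driven by something like an AnimationMixer (the `mixer`, `renderer`, and `camera` names are placeholders for whatever your scene actually uses):

// Sketch: the bind-pose vertex data never changes while the animation plays.
var before = mesh.geometry.vertices[0].clone();
mixer.update(1 / 60);               // advance the animation one frame (placeholder update call)
renderer.render(scene, camera);
var after = mesh.geometry.vertices[0];
console.log(before.equals(after));  // logs true: only the bones moved, not geometry.vertices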

The math to move each vertex is straightforward: for each bone influencing the vertex, transform the bind-pose position into that bone's local space (using the bone's inverse bind matrix), transform it back out to world space (using the bone's current matrixWorld), weight the result by the bone's skin weight, and sum the contributions.
Adapting the code from here, we add this to the original exporter:

vector.copy( vertices[ vertexIndex ] );
var boneIndices = [];   // which bones influence this vertex (up to 4)
boneIndices[0] = mesh.geometry.skinIndices[vertexIndex].x;
boneIndices[1] = mesh.geometry.skinIndices[vertexIndex].y;
boneIndices[2] = mesh.geometry.skinIndices[vertexIndex].z;
boneIndices[3] = mesh.geometry.skinIndices[vertexIndex].w;

var weights = [];   // some bones influence the vertex more than others
weights[0] = mesh.geometry.skinWeights[vertexIndex].x;
weights[1] = mesh.geometry.skinWeights[vertexIndex].y;
weights[2] = mesh.geometry.skinWeights[vertexIndex].z;
weights[3] = mesh.geometry.skinWeights[vertexIndex].w;

var inverses = [];  // boneInverses transform from the bind pose into each bone's local space
inverses[0] = mesh.skeleton.boneInverses[ boneIndices[0] ];
inverses[1] = mesh.skeleton.boneInverses[ boneIndices[1] ];
inverses[2] = mesh.skeleton.boneInverses[ boneIndices[2] ];
inverses[3] = mesh.skeleton.boneInverses[ boneIndices[3] ];

var skinMatrices = [];  // each bone's matrixWorld transforms from bone space to global space
skinMatrices[0] = mesh.skeleton.bones[ boneIndices[0] ].matrixWorld;
skinMatrices[1] = mesh.skeleton.bones[ boneIndices[1] ].matrixWorld;
skinMatrices[2] = mesh.skeleton.bones[ boneIndices[2] ].matrixWorld;
skinMatrices[3] = mesh.skeleton.bones[ boneIndices[3] ].matrixWorld;

var finalVector = new THREE.Vector4();
for (var k = 0; k < 4; k++) {
    // w defaults to 1, so multiplying by the weight below also scales the
    // translation part of the matrices applied afterwards
    var tempVector = new THREE.Vector4(vector.x, vector.y, vector.z);
    // weight the transformation
    tempVector.multiplyScalar(weights[k]);
    // the inverse takes the vector from the bind pose into local bone space...
    tempVector.applyMatrix4(inverses[k])
    // ...which is then transformed to the appropriate world space
    .applyMatrix4(skinMatrices[k]);
    finalVector.add(tempVector);
}

output += '\t\t\tvertex ' + finalVector.x + ' ' + finalVector.y + ' ' + finalVector.z + '\n';

This yields STL files that look like:

[Image: the exported STL rendered in the animated, mid-swing pose]

The full code is available at https://gist.github.com/kjlubick/fb6ba9c51df63ba0951f
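For the original use case (hit export after the animation has run for ~100 frames), a usage sketch might look like the following; `generateSTL` stands in for the modified exporter's entry point in the gist (check the gist for the actual function name), and the button id is made up:

// Sketch: wire an "Export" button to snapshot the current animated pose.
document.getElementById('export-stl').addEventListener('click', function () {
    var stlString = generateSTL(scene);          // assumed entry point of the modified exporter
    var blob = new Blob([stlString], { type: 'text/plain' });
    var link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'snapshot.stl';
    link.click();                                // saves the pose exactly as it is right now
});

Because the exporter reads the bones' current matrixWorld values, whatever pose is on screen at the moment of the click is what ends up in the file.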

answered by KevinL

After a week of pulling my hair out, I managed to modify the code to include morphTarget data in the final STL file. You can find my modification of Kevin's code at https://gist.github.com/jcarletto27/e271bbb7639c4bed2427

As JS is not my favored language, it's not pretty, but it manages to work without much fuss. Hopefully someone gets some use out of this besides me!
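For reference, morph targets blend into the bind-pose position before the bone-skinning math above is applied; a rough sketch of that step (reusing the `mesh` and `vertices[vertexIndex]` names from Kevin's snippet and the classic Geometry morphTargets layout; this is not the exact code from the gist):

// Sketch: add each active morph target's offset, weighted by its current influence,
// and feed the result into the skinning loop in place of the raw bind-pose vertex.
var morphed = vertices[vertexIndex].clone();
for (var m = 0; m < mesh.geometry.morphTargets.length; m++) {
    var influence = mesh.morphTargetInfluences[m];
    if (influence === 0) continue;
    var target = mesh.geometry.morphTargets[m].vertices[vertexIndex];
    morphed.x += (target.x - vertices[vertexIndex].x) * influence;
    morphed.y += (target.y - vertices[vertexIndex].y) * influence;
    morphed.z += (target.z - vertices[vertexIndex].z) * influence;
}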

answered by Jcarletto27