I'm trying to apply a cube texture created with THREE.ImageUtils.loadTextureCube() from the real-time camera stream onto a spinning cube.
So far, I've managed to apply a simple texture from my video to a MeshLambertMaterial:
var geometry = new THREE.CubeGeometry(100, 100, 100, 10, 10, 10);
videoTexture = new THREE.Texture( Video ); // var "Video" is my <video> element
var material = new THREE.MeshLambertMaterial({ map: videoTexture });
Cube = new THREE.Mesh(geometry, material);
Scene.add( Cube );
That's OK and you can see the result at http://jmpp.fr/three-camera
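For context, the texture gets refreshed on every frame in my render loop, roughly like this (Renderer and Camera are my own globals):
function animate() {
    requestAnimationFrame(animate);
    if (Video.readyState === Video.HAVE_ENOUGH_DATA) {
        videoTexture.needsUpdate = true; // push the latest camera frame to the GPU
    }
    Cube.rotation.y += 0.01;
    Renderer.render(Scene, Camera);
}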
Now I'd like to use this Video stream to get a brushed-metal look, so I tried to create another kind of material:
var videoSource = decodeURIComponent(Video.src);
var environment = THREE.ImageUtils.loadTextureCube([videoSource, // left
videoSource, // right
videoSource, // top
videoSource, // bottom
videoSource, // front
videoSource]); // back
var material = new THREE.MeshPhongMaterial({ envMap: environment });
... but it throws the following error:
blob:http://localhost/dad58cd1-1557-41dd-beed-dbfea4c340db 404 (Not Found)
I guess loadTextureCube() is trying to load the six array entries as images, but it doesn't seem to accept a video source instead.
I'm just getting started with three.js and was wondering if there is a way to do this?
Thx, jmpp
There are two ways I could see. First, if you just want the same image but with some specular highlights/shininess, then just change
var material = new THREE.MeshLambertMaterial({ map: videoTexture });
to
var material = new THREE.MeshPhongMaterial({
    map: videoTexture,
    ambient: 0x030303,
    specular: 0xffffff,
    shininess: 90
});
and play with the ambient, specular, shininess settings to find what you like.
Second, if you really want to add effects to the video image itself, you could draw the image to a canvas, manipulate the pixels, and then set the texture image to that new image. This could also be done with custom shaders, avoiding the canvas step, but there are already libraries for applying image filters to canvas elements, so I'd stick with that approach. That would work something like this:
You would need a canvas to draw to: <canvas id='testCanvas' width='256' height='256'></canvas>. Then, with JavaScript:
var canvas = document.getElementById('testCanvas');
var ctx = canvas.getContext('2d');
texture = new THREE.Texture();

// in the render loop
ctx.drawImage(Video, 0, 0);
var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
// do something with the img.data pixels, see
// this article: http://www.html5rocks.com/en/tutorials/canvas/imagefilters/
// then write it back to the texture
texture.image = img;
texture.needsUpdate = true;
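For instance, a quick grayscale pass over the pixel data could look like this (just an illustration of the idea, not tied to any particular filter library):
var data = img.data; // RGBA, four bytes per pixel
for (var i = 0; i < data.length; i += 4) {
    var grey = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = data[i + 1] = data[i + 2] = grey; // leave alpha (data[i + 3]) alone
}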
Actually, you can do it as an envMap; you just need to force the video to a power-of-two size with equal width and height. Video streams come into Chrome at 640x480, so you still need to draw to a canvas, but only to crop/square the image. So I got this to work:
// In the camera-access setup
var canvas = document.createElement('canvas');
canvas.width = 512;
canvas.height = 512;
ctx = canvas.getContext('2d');

// In the render loop
ctx.drawImage(Video, 0, 0, 512, 512);
img = ctx.getImageData(0, 0, 512, 512);

// This part is a little different: env maps take an array
// of images instead of just one
cubeVideo.image = [img, img, img, img, img, img];

if (Video.readyState === Video.HAVE_ENOUGH_DATA)
    cubeVideo.needsUpdate = true;
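Here cubeVideo is assumed to be an ordinary THREE.Texture created once up front and wired in as the material's envMap, along these lines (just a sketch; the variable names are mine):
cubeVideo = new THREE.Texture(); // its .image gets filled with the six canvas frames above
var material = new THREE.MeshPhongMaterial({ envMap: cubeVideo });
Cube = new THREE.Mesh(new THREE.CubeGeometry(100, 100, 100), material);
Scene.add(Cube);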
Try this:
var environment = new THREE.Texture( [ Video, Video, Video, Video, Video, Video ] );
var material = new THREE.MeshPhongMaterial({ envMap: environment });
// in animate()
environment.needsUpdate = true;
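If the cube shows up black for the first few frames, it can also help to mirror the readyState check from the canvas approach above and only flag the update once the camera actually has data:
// in animate()
if (Video.readyState === Video.HAVE_ENOUGH_DATA) {
    environment.needsUpdate = true;
}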