GLSL truncated signed distance function (TSDF) implementation

I am looking to implement model reconstruction from RGB-D images, preferably on mobile phones. From what I have read, this is usually done with a TSDF representation. I have now read a lot of papers on hierarchical structures and other ideas to speed this up, but my problem is that I still have no clue how to actually implement this representation.

Suppose I have a volume grid of size n, i.e. n x n x n, and I want to store the signed distance, a weight and color information in each voxel. My only guess is that I have to build a discrete set of points, one for each voxel position, "paint" all these points with GLSL, and calculate the nearest distance for each. But it doesn't seem good or efficient to calculate this n^3 times.

How should I go about implementing such a TSDF representation?

The problem is that my only idea is to render the voxel grid in order to store the signed-distance data in it, but for each depth map I would have to render all the voxels again and recalculate all the distances. Is there any way to render it the other way around?

That is, can't I render the points of the depth map and store their information in the voxel grid?

What is the current state of the art for rendering such a signed distance representation efficiently?

Dominick asked Dec 14 '22
1 Answer

After I posted my other answer, I thought of another approach which seems to match the second part of your question, but it is definitely not reconstruction and doesn't involve using a TSDF. It's actually a visualisation, but it is a lot simpler :)

Each frame you get an RGB and a depth image. Assuming that these images are registered, that is, the pixel at (x,y) in the RGB image corresponds to the pixel at (x,y) in the depth image, you can create a dense point cloud coloured using the RGB data. To do this you would:

For every pixel in the depth map:

a) Use the camera's intrinsic matrix (K), the pixel coordinates and the depth value in the map at that point to project the pixel to a 3D point in camera coordinates (the back-projection is sketched below).

b) Associate the RGB value at the same pixel with that point in space.
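For step a), assuming a standard pinhole camera model (my notation, not the answerer's): if fx and fy are the focal lengths and (cx, cy) is the principal point taken from K, and d is the depth value at pixel (x, y), then the camera-space point is

    X = (x - cx) * d / fx
    Y = (y - cy) * d / fy
    Z = d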

So now you have an array of (probably 640x480) structures like {x,y,z,r,g,b}

You can render these on GLES just by creating a set of vertices and rendering them as points. There's a discussion on how to do this here
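As a concrete illustration, here is a minimal GLES 3.0 vertex-shader sketch that performs steps a) and b) on the GPU, drawing one point per depth pixel. It is not from the original post: all uniform names (uDepth, uColor, uImageSize, uIntrinsics, uMVP) are my own, and it assumes the depth texture stores metric depth in a float format.

    #version 300 es
    // Sketch: one point per depth pixel, back-projected through the pinhole
    // model and coloured from the registered RGB image. Uniform names and
    // texture formats are assumptions, not from the original answer.
    precision highp float;

    uniform highp sampler2D uDepth;  // depth in metres (e.g. R32F)
    uniform sampler2D uColor;        // registered RGB image
    uniform vec2 uImageSize;         // e.g. vec2(640.0, 480.0)
    uniform vec4 uIntrinsics;        // (fx, fy, cx, cy) taken from K
    uniform mat4 uMVP;               // camera space -> clip space

    out vec3 vColor;

    void main() {
        // Recover the pixel (x, y) from gl_VertexID: one vertex per pixel,
        // so no vertex attribute buffers are needed at all.
        float i = float(gl_VertexID);
        vec2 px = vec2(mod(i, uImageSize.x), floor(i / uImageSize.x));
        vec2 uv = (px + 0.5) / uImageSize;

        float d = texture(uDepth, uv).r;  // step a): depth at this pixel

        // Pinhole back-projection into camera coordinates
        // (watch the image-vs-GL y-axis convention for your data):
        vec3 p = vec3((px.x - uIntrinsics.z) * d / uIntrinsics.x,
                      (px.y - uIntrinsics.w) * d / uIntrinsics.y,
                      d);

        vColor = texture(uColor, uv).rgb; // step b): attach the RGB value
        gl_Position = uMVP * vec4(p, 1.0);
        gl_PointSize = 2.0;
    }

    // Matching trivial fragment shader:
    // #version 300 es
    // precision mediump float;
    // in vec3 vColor;
    // out vec4 fragColor;
    // void main() { fragColor = vec4(vColor, 1.0); }

You would then issue something like glDrawArrays(GL_POINTS, 0, width * height); since gl_VertexID supplies the pixel index, no vertex buffers are required.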

With this approach you throw away the data every frame and start again from scratch. Importantly, you don't get a reconstructed surface, and you don't use a TSDF. You can get pretty results, but it's not reconstruction.

Dave Durbin answered Jan 19 '23