
How to Display a 3D image when we have Depth and rgb Mat's in OpenCV (captured from Kinect)

We captured a 3D image using a Kinect with the OpenNI library, and obtained the RGB and depth images as OpenCV Mats using this code:

#include <OpenNI.h>
#include <opencv2/opencv.hpp>

using namespace openni;

int main()
{
    OpenNI::initialize();
    puts( "Kinect initialization..." );
    Device device;
    if ( device.open( openni::ANY_DEVICE ) != 0 )
    {
        puts( "Kinect not found !" ); 
        return -1;
    }
    puts( "Kinect opened" );
    VideoStream depth, color;
    color.create( device, SENSOR_COLOR );
    color.start();
    puts( "Camera ok" );
    depth.create( device, SENSOR_DEPTH );
    depth.start();
    puts( "Depth sensor ok" );
    VideoMode paramvideo;
    paramvideo.setResolution( 640, 480 );
    paramvideo.setFps( 30 );
    paramvideo.setPixelFormat( PIXEL_FORMAT_DEPTH_100_UM );
    depth.setVideoMode( paramvideo );
    paramvideo.setPixelFormat( PIXEL_FORMAT_RGB888 );
    color.setVideoMode( paramvideo );
    puts( "Video stream settings ok" );

    // If depth/color synchronisation is not needed, starting is faster:
    // device.setDepthColorSyncEnabled( false );

    // Otherwise, the streams can be synchronised and received in the order of our choice:
    device.setDepthColorSyncEnabled( true );
    device.setImageRegistrationMode( openni::IMAGE_REGISTRATION_DEPTH_TO_COLOR );

    VideoStream** stream = new VideoStream*[2];
    stream[0] = &depth;
    stream[1] = &color;
    puts( "Kinect initialization completed" );


    if ( device.getSensorInfo( SENSOR_DEPTH ) != NULL )
    {
        VideoFrameRef depthFrame, colorFrame;
        // Let OpenCV allocate the buffers; passing NULL as the data pointer is invalid.
        cv::Mat colorcv( cv::Size( 640, 480 ), CV_8UC3 );
        cv::Mat depthcv( cv::Size( 640, 480 ), CV_16UC1 );
        cv::namedWindow( "RGB", CV_WINDOW_AUTOSIZE );
        cv::namedWindow( "Depth", CV_WINDOW_AUTOSIZE );

        int changedIndex;
        while( device.isValid() )
        {
            OpenNI::waitForAnyStream( stream, 2, &changedIndex );
            switch ( changedIndex )
            {
                case 0:
                    depth.readFrame( &depthFrame );

                    if ( depthFrame.isValid() )
                    {
                        depthcv.data = (uchar*) depthFrame.getData();
                        cv::imshow( "Depth", depthcv );
                    }
                    break;

                case 1:
                    color.readFrame( &colorFrame );

                    if ( colorFrame.isValid() )
                    {
                        colorcv.data = (uchar*) colorFrame.getData();
                        cv::cvtColor( colorcv, colorcv, CV_BGR2RGB );
                        cv::imshow( "RGB", colorcv );
                    }
                    break;

                default:
                    puts( "Error retrieving a stream" );
            }
            cv::waitKey( 1 );
        }

        cv::destroyWindow( "RGB" );
        cv::destroyWindow( "Depth" );
    }
    delete[] stream;
    depth.stop();
    depth.destroy();
    color.stop();
    color.destroy();
    device.close();
    OpenNI::shutdown();
}

We added some code to the above, obtained the RGB and depth Mats from it, and then processed the RGB image using OpenCV.

Now we need to display that data in 3D.

We are using:

1) Windows 8 x64

2) Visual Studio 2012 x64

3) OpenCV 2.4.10

4) OpenNI 2.2.0.33

5) Kinect1

6) Kinect SDK 1.8.0

Questions:

1) Can we display this image directly with OpenCV, or do we need an external library?

2) If we need an external library, which is better suited for this simple task: OpenGL, PCL, or something else?

3) Does PCL support Visual Studio 2012 and OpenNI 2? Since PCL comes packed with a different version of OpenNI, will the two versions conflict?

Asked Mar 26 '15 by Ashok Varma


2 Answers

To improve antarctician's answer: to display the image in 3D, you need to create your point cloud first. The RGB and depth images give you the data needed to build an organized, colored point cloud. To do so, you must calculate the x, y, z values for each point: the z value comes from the depth pixel, but x and y must be calculated.

To do that, you can write something like this:

void Viewer::get_pcl(cv::Mat& color_mat, cv::Mat& depth_mat, pcl::PointCloud<pcl::PointXYZRGBA>& cloud ){
    // Note: `depth` (the openni::VideoStream), the limit*_min / limit*_max
    // clipping bounds and `bad_point` (typically
    // std::numeric_limits<float>::quiet_NaN()) are members of the Viewer class.
    float x, y, z;

    for (int j = 0; j < depth_mat.rows; j++){
        for (int i = 0; i < depth_mat.cols; i++){
            // pack the RGB data
            PCD_BGRA pcd_BGRA;
            pcd_BGRA.B = color_mat.at<cv::Vec3b>(j,i)[0];
            pcd_BGRA.G = color_mat.at<cv::Vec3b>(j,i)[1];
            pcd_BGRA.R = color_mat.at<cv::Vec3b>(j,i)[2];
            pcd_BGRA.A = 0;

            pcl::PointXYZRGBA vertex;
            int depth_value = (int) depth_mat.at<unsigned short>(j,i);
            // find the world coordinates
            openni::CoordinateConverter::convertDepthToWorld( depth, i, j, (openni::DepthPixel) depth_mat.at<unsigned short>(j,i), &x, &y, &z );

            // create the point with depth and color data
            if ( limitx_min <= i && limitx_max >= i && limity_min <= j && limity_max >= j && depth_value != 0 && depth_value <= limitz_max && depth_value >= limitz_min ){
                vertex.x = (float) x;
                vertex.y = (float) y;
                vertex.z = (float) depth_value;
            } else {
                // the data is outside the boundaries
                vertex.x = bad_point;
                vertex.y = bad_point;
                vertex.z = bad_point;
            }
            vertex.rgb = pcd_BGRA.RGB_float;

            // push the point into the cloud
            cloud.points.push_back( vertex );
        }
    }
}

where PCD_BGRA is:

union PCD_BGRA
{
    struct
    {
        uchar B; // LSB
        uchar G; // ---
        uchar R; // MSB
        uchar A; //
    };
    float RGB_float;
    uint  RGB_uint;
};

Of course, this is for the case where you want to use PCL, but the calculation of the x, y, z values stands either way. It relies on openni::CoordinateConverter::convertDepthToWorld to find the 3D position of each point. You may also do this manually:

const float invfocalLength = 1.f / 525.f;   // 1 / focal length (in pixels)
const float centerX = 319.5f;               // principal point, x
const float centerY = 239.5f;               // principal point, y
const float factor  = 1.f / 1000.f;         // millimetres -> metres

float dist = factor * (float)(*depthdata);
p.x = (x - centerX) * dist * invfocalLength;
p.y = (y - centerY) * dist * invfocalLength;
p.z = dist;

Here centerX, centerY, and the focal length come from the camera's intrinsic calibration (the values above are typical for the Kinect), and factor determines whether you get the distance in metres or millimetres; the right value depends on your program.

For the questions:

  1. Yes, you can display it using the latest OpenCV with the viz module, or with any other external library that suits your needs.
  2. OpenGL is nice, but PCL (or OpenCV) is easier to use if you don't already know either of them (I mean for displaying point clouds).
  3. I haven't used it on Windows, but in theory it can be used with Visual Studio 2012. As far as I know, the version PCL comes packed with is OpenNI 1, and it won't conflict with OpenNI 2.
Answered Sep 27 '22 by api55


I haven't done this with OpenNI and OpenCV, but I hope I can help. First, to answer your first two questions:

  1. You will probably need an external library: as far as I understood, you want to visualize a 3D point cloud, and OpenCV is only an image processing library, so you would need a 3D rendering library to do what you want.
  2. I have worked with OpenSceneGraph and would recommend it. However, you can also use OpenGL or DirectX.

If you only want to visualize a point cloud, like the "3D View" of Kinect Studio, you don't need PCL; it would be overkill for such a simple job.

The basic idea is to create one 3D quad per pixel of your images. For example, at 640x480 resolution you would need 640*480 quads. Each quad takes the color of its corresponding pixel in the color image. You then move each quad back and forth along the Z axis according to the value in the depth image. This can be done with modern OpenGL or, if you feel more comfortable with C++, with OpenSceneGraph (which is also based on OpenGL).

You would have to be careful about two things:

  1. Drawing that many quads can be slow even on a modern computer. You would need to read about "instanced rendering" to render a large number of instances of an object (in our case, a quad) in a single GPU draw call. This can be done using the vertex shader.
  2. Since the RGB and depth cameras of the Kinect are at different physical locations, you would need to calibrate them against each other. There are functions for this in the official Kinect SDK, but I don't know about OpenNI.

If you decide to do this with OpenGL, I would suggest reading about the GPU pipeline if you aren't familiar with it; that will save you a lot of time when working with vertex shaders.

Answered Sep 27 '22 by antarctician