I have been using the Kinect SDK (1.6) DepthBasicsD2D C++ example to grab depth frames from the Kinect, and I want to perform blob detection on that data in OpenCV.
I have built OpenCV alongside the example and understand how the example works.
But I can't find any documentation on how to take the pixel data from the Kinect and pass it to OpenCV's IplImage/cv::Mat structures.
Any thoughts on this problem?
This could help you convert Kinect color and depth frames to OpenCV representations:
// Map a Kinect depth frame (16-bit values) to a CV_8U matrix.
// kLower/kUpper bound the depth range of interest and c scales that
// range onto 0-255, e.g.:
//   const USHORT kLower = 800, kUpper = 4000;             // mm, Kinect v1 default range
//   const float  c      = 255.0f / (kUpper - kLower);
// Note: if the raw frame still packs the player index in the low bits,
// extract the depth first (e.g. with NuiDepthPixelToDepth).
cv::Mat * GetDepthImage(USHORT * depthData, int width, int height)
{
    const int imageSize = width * height;
    cv::Mat * out = new cv::Mat(height, width, CV_8U);

    // map each raw depth value into the 0-255 range
    for (int i = 0; i < imageSize; i++)
    {
        USHORT depth = depthData[i];
        if (depth >= kLower && depth <= kUpper)
        {
            float y = c * (depth - kLower);
            out->at<uchar>(i) = (uchar) y;
        }
        else
        {
            out->at<uchar>(i) = 0;   // out-of-range pixels become black
        }
    }
    return out;
}
// Get a CV_8UC4 matrix from a Kinect color frame. The Kinect SDK
// delivers color data as 32-bit BGRA, which matches OpenCV's default
// channel order, so a straight copy works. The caller must ensure
// `bytes` holds at least width * height * 4 bytes.
cv::Mat * GetColorImage(unsigned char * bytes, int width, int height)
{
    const unsigned int img_size = width * height * 4;
    cv::Mat * out = new cv::Mat(height, width, CV_8UC4);

    // copy the pixel data into the matrix
    memcpy(out->data, bytes, img_size);
    return out;
}