I am trying to find a centroid using OpenCV C++'s cv::moments. Whatever arguments I pass to it, all I get back are zeros. Clearly I am doing something very simple wrong. Output of the code:
23 of 500 elements in unit 3
point values 2.976444 18.248287
matrix size 23
moments 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
moments 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
And the code:
printf("%d of %d elements in unit %d\n",k,number_of_features,i);
cv::Mat x(k, 1, cv::DataType<cv::Point2f>::type);
k=0;
for(int j=0;j <number_of_features;j++) {
if(i == labels.at<int>(j)) {
x.at<cv::Point2f>(k++) = samples.at<cv::Point2f>(i);
}
}
printf("point values %f %f\n", x.at<cv::Point2f>(0).x,x.at<cv::Point2f>(0).y);
cv::Size s=x.size();
printf("matrix size %d\n",s.height);
cv::Moments m=cv::moments(x);
printf("moments %f %f %f %f %f %f %f %f\n",m.m00,m.m01,m.m20,m.m11,m.m02,m.m30,m.m21,m.m03);
double h[7];
cv::HuMoments(m,h);
printf("moments %f %f %f %f %f %f %f\n",h[0],h[1],h[2],h[3],h[4],h[5],h[6]);
Strangely, I cannot find any identical code sample on Google; all I can find are C-style approaches.
In OpenCV, image moments are weighted averages of the intensities of an image's pixels. Segmentation changes the representation of an image by dividing it into segments of pixels so that the image is easier to analyze. After segmentation, we can use OpenCV's moments to describe the individual objects in the image.
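For illustration, here is a minimal sketch of that workflow; the file name and threshold value are placeholders chosen for the example, not taken from the original post:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // "shape.png" is a placeholder; any single-channel image works here.
    cv::Mat gray = cv::imread("shape.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Segment the image into foreground/background with a fixed threshold.
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    // Raw image moments of the binary mask; m00 is the total "mass".
    cv::Moments m = cv::moments(binary, /*binaryImage=*/true);
    if (m.m00 != 0) {
        double cx = m.m10 / m.m00;   // centroid x
        double cy = m.m01 / m.m00;   // centroid y
        printf("centroid %f %f\n", cx, cy);
    }
    return 0;
}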
Hu Moments (or, rather, Hu moment invariants) are a set of 7 numbers calculated from central moments that are invariant to image transformations. The first 6 have been proved to be invariant to translation, scale, rotation, and reflection, while the sign of the 7th changes under reflection.
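As a rough illustration, assuming a cv::Moments value m has already been computed (for example with cv::moments, as in the sketch above), the seven invariants can be obtained like this:

// m is a cv::Moments value computed beforehand, e.g. from cv::moments()
double hu[7];
cv::HuMoments(m, hu);          // fills hu[0..6] with the 7 Hu invariants
for (int i = 0; i < 7; i++)
    printf("hu[%d] = %e\n", i, hu[i]);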
OpenCV's approxPolyDP() function approximates the shape of a contour with a simpler polygon. The image containing the polygon whose contour is to be approximated is read with the imread() function and then converted to a grayscale image.
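A minimal sketch of that approximation flow might look like the following; the file name, threshold, and the 2% epsilon are assumptions made for illustration:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // "polygon.png" is a placeholder input image.
    cv::Mat img = cv::imread("polygon.png");
    if (img.empty()) return 1;

    // Convert to grayscale and segment before looking for contours.
    cv::Mat gray, binary;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& c : contours) {
        std::vector<cv::Point> approx;
        // Tolerance of 2% of the contour perimeter; closed = true.
        cv::approxPolyDP(c, approx, 0.02 * cv::arcLength(c, true), true);
        printf("contour with %zu points approximated by %zu vertices\n",
               c.size(), approx.size());
    }
    return 0;
}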
Using moments to find a centroid is a bit overkill imho. You can use the following algorithm to do it:
// assuming array_points is a std::vector<cv::Point2f>
float sumX = 0.f, sumY = 0.f;
size_t size = array_points.size();
cv::Point2f centroid;
if (size > 0) {
    for (const cv::Point2f& point : array_points) {
        sumX += point.x;
        sumY += point.y;
    }
    centroid.x = sumX / size;
    centroid.y = sumY / size;
}
Or with the help of OpenCV's boundingRect:
// note: this gives the center of the bounding box, not the true centroid of the points
cv::Rect bRect = cv::boundingRect(array_points);
centroid.x = bRect.x + bRect.width / 2.0f;
centroid.y = bRect.y + bRect.height / 2.0f;
I would recommend that you visit the official tutorial on moments and learn and run that code first:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/moments/moments.html#moments
Once that works, try to implement whatever you want.
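For reference, the core of that tutorial boils down to roughly the following; the input file and Canny thresholds here are placeholders, so treat this as a sketch rather than the tutorial code itself:

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Placeholder input; the tutorial builds the same pipeline interactively.
    cv::Mat gray = cv::imread("shapes.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Edge map, then contours, then per-contour moments.
    cv::Mat edges;
    cv::Canny(gray, edges, 100, 200);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

    for (size_t i = 0; i < contours.size(); i++) {
        cv::Moments m = cv::moments(contours[i]);
        if (m.m00 == 0) continue;   // skip degenerate contours
        float cx = static_cast<float>(m.m10 / m.m00);   // centroid from moments
        float cy = static_cast<float>(m.m01 / m.m00);
        printf("contour %zu: centroid (%f, %f)\n", i, cx, cy);
    }
    return 0;
}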