So I have a 2-dimensional array representing a coordinate plane, an image. On that image, I am looking for "red" pixels and finding (hopefully) the location of a red LED target based on all of the red pixels found by my camera. Currently, I'm simply slapping my crosshairs onto the centroid of all of the red pixels:
// pseudo-code
vals = 0; cx = 0; cy = 0;
for(cycle_through_pixels)
{
if( is_red(pixel[x][y]) )
{
vals++; // total number of red pixels
cx+=x; // sum the x's
cy+=y; // sum the y's
}
}
if( vals > 0 )
{
cx/=vals; // divide by total to get average x
cy/=vals; // divide by total to get average y
draw_crosshairs_at(pixel[cx][cy]); // found the centroid
}
The problem with this method is that while it naturally pulls the centroid toward the largest blob (the area with the most red pixels), my crosshairs still jump off the target whenever a bit of red flickers off to the side due to glare or other minor interference.
My question is this:
How do I change this pattern to look for a more weighted centroid? Put simply, I want to make the larger blobs of red much more important than the smaller ones, possibly even ignoring far-out small blobs altogether.
You could find the connected components in the image and include only those components whose total size exceeds a certain threshold in your centroid calculation.