 

Calculating element position by computing transformation

This question is related to Transformation between two sets of points. However, this one is better specified, and some assumptions have been added.

I have an image of an element and a model of it.

I've detected contours in both:

contoursModel0, hierarchyModel = cv2.findContours(model.copy(), cv2.RETR_LIST,
                                                  cv2.CHAIN_APPROX_SIMPLE)
contoursModel = [cv2.approxPolyDP(cnt, 2, True) for cnt in contoursModel0]
contours0, hierarchy = cv2.findContours(canny.copy(), cv2.RETR_LIST,
                                        cv2.CHAIN_APPROX_SIMPLE)
contours = [cv2.approxPolyDP(cnt, 2, True) for cnt in contours0]

Then I've matched each image contour against each model contour:

modelMassCenters = []
imageMassCenters = []
for cnt in contours:
    for cntModel in contoursModel:
        result = cv2.matchShapes(cnt, cntModel, cv2.cv.CV_CONTOURS_MATCH_I1, 0)
        if 0 < result < 0.05:
            # matched contours: store the mass center of each
            momentsModel = cv2.moments(cntModel)
            momentsImage = cv2.moments(cnt)
            massCenterModel = (momentsModel['m10'] / momentsModel['m00'],
                               momentsModel['m01'] / momentsModel['m00'])
            massCenterImage = (momentsImage['m10'] / momentsImage['m00'],
                               momentsImage['m01'] / momentsImage['m00'])
            modelMassCenters.append(massCenterModel)
            imageMassCenters.append(massCenterImage)

The matched contours serve as something like features.

Now I want to detect the transformation between these two sets of points. Assumptions: the element is a rigid body, and only rotation, displacement, and scale can change.
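For concreteness (my own restatement, not part of the matching code above): under these assumptions the mapping is a 4-degree-of-freedom similarity transform,

    x' = s * R(theta) * x + t,    R(theta) = [[cos(theta), -sin(theta)],
                                              [sin(theta),  cos(theta)]]

so two exact correspondences already determine s, theta and t; the extra matched pairs are only needed to make outlier rejection and a quality estimate possible.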

Some features may be mismatched; how can I eliminate them? I've once used cv2.findHomography, which takes two point vectors and calculates a homography between them even when there are some mismatches.

cv2.getAffineTransform takes only three points (so it can't cope with mismatches), and here I have multiple features. The answer to my previous question says how to calculate this transformation, but it does not handle mismatches. I also think it should be possible to return some quality level from the algorithm (by checking how many points are mismatched after computing the transformation from the rest).
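A minimal sketch of the RANSAC route I'm considering, assuming the matched mass centers from above (at least 4 pairs) and an arbitrary 5-pixel reprojection threshold; the inlier mask doubles as a quality level:

import numpy as np

# cv2.findHomography with RANSAC rejects mismatched pairs automatically
src = np.float32(modelMassCenters).reshape(-1, 1, 2)
dst = np.float32(imageMassCenters).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # 5 px threshold
inlierRatio = mask.sum() / float(len(mask))               # quality level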

And the last question: should I use all contour points to compute the transformation, or treat only the mass centers of these shapes as features?

To illustrate, I've added a simple image. Features in green are good matches; in red, bad matches. Here the match should be computed from the 3 green features, and the red mismatches should lower the match quality.


I'm adding the fragment of a solution I've figured out so far (but I think it could be done much better):

import math

def length(v):
    return math.hypot(v[0], v[1])

def angle(v1, v2):
    # signed angle from v1 to v2, in radians
    return math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])

rotations = []
scales = []
for i in range(0, len(modelMassCenters) - 1):
    for j in range(i + 1, len(modelMassCenters)):
        x1, y1 = modelMassCenters[i]
        x2, y2 = modelMassCenters[j]
        modelVec = (x2 - x1, y2 - y1)
        x1, y1 = imageMassCenters[i]
        x2, y2 = imageMassCenters[j]
        imageVec = (x2 - x1, y2 - y1)
        rotations.append((i, j, angle(modelVec, imageVec)))
        scales.append((i, j, length(modelVec) / length(imageVec)))

After computing the scale and rotation given by each pair of corresponding segments, I'm going to find the median rotation and then average the rotation values that do not differ from the median by more than some delta. The same with scale. The points producing those accepted values will then be used to compute the displacement.
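A minimal numpy sketch of that filtering step (the delta tolerances are arbitrary placeholders to tune):

import numpy as np

rotDelta, scaleDelta = 0.05, 0.05
rotVals = np.array([r for (i, j, r) in rotations])
scaleVals = np.array([s for (i, j, s) in scales])
goodRot = np.abs(rotVals - np.median(rotVals)) < rotDelta
goodScale = np.abs(scaleVals - np.median(scaleVals)) < scaleDelta
rotation = rotVals[goodRot].mean()                # averaged near-median rotation
scale = scaleVals[goodScale].mean()               # averaged near-median scale
# pairs consistent with both estimates; their points then give the displacement
goodPairs = [rotations[k][:2] for k in np.nonzero(goodRot & goodScale)[0]]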

asked Nov 23 '25 by krzych

1 Answer

Your second step (match contours to each other by doing a pairwise shape comparison) sounds very vulnerable to errors if features have a similar shape, e.g., you have several similar-sized circular contours. Yet if you have a rigid body with 5 circular features in one quadrant only, you could get a very robust estimate of the affine transform if you consider the body and its features as a whole. So don't discard information like a feature's range and direction from the center of the whole body when matching features. Those are at least as important in correlating features as size and shape of the individual contour.

I'd try something like (untested pseudocode):

"""
Convert from rectangular (x,y) to polar (r,w)
    r = sqrt(x^2 + y^2)
    w = arctan(y/x) = [-\pi,\pi]
"""
def polar(x, y):        # w in radians
    from math import hypot, atan2, pi
    return hypot(x, y), atan2(y, x)

model_features = []
model = params(model_body_contour)    # params returns tuple (center_x, center_y, area)
for contour in model_feature_contours:
    f = params(contour)
    radius, angle = polar(f[0] - model[0], f[1] - model[1])
    model_features.append((angle, radius, f[2]))

image_features = []
image = params(image_body_contour)
for contour in image_feature_contours:
    f = params(contour)
    radius, angle = polar(f[0] - image[0], f[1] - image[1])
    image_features.append((angle, radius, f[2]))

# sort image_features and model_features by angle, then radius
#
# correlate image_features against model_features across angle offsets
#    rotation = angle offset of max correlation
#    scale = average(model areas and radii) / average(image areas and radii)

If you have very challenging images, such as a ring of 6 equally-spaced similar-sized features, 5 of which have the same shape and one is different (e.g. 5 circles and a star), you could add extra parameters such as eccentricity and sharpness to the list of feature parameters, and include them in the correlation when searching for the rotation angle.
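Eccentricity, for instance, can be read off the second-order central moments that cv2.moments already provides; a sketch (eccentricity() is a helper name I'm introducing here, not an OpenCV function):

import cv2
from math import sqrt

def eccentricity(contour):
    # eigenvalues of the shape's covariance matrix [[a, b], [b, c]]
    m = cv2.moments(contour)
    a = m['mu20'] / m['m00']
    b = m['mu11'] / m['m00']
    c = m['mu02'] / m['m00']
    root = sqrt(4 * b * b + (a - c) ** 2)
    lmax = (a + c + root) / 2
    lmin = (a + c - root) / 2
    return sqrt(1 - lmin / lmax)    # 0 for a circle, approaches 1 when elongated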

answered Nov 26 '25 by Dave


