
Hu moments comparison

I tried to compare two images by computing Hu moments of the contours extracted from them: https://docs.google.com/file/d/0ByS6Z5WRz-h2WHEzNnJucDlRR2s/edit and https://docs.google.com/file/d/0ByS6Z5WRz-h2VnZyVWRRWEFva0k/edit. The second image is the same as the first, only rotated, so I expected the same Hu moments as a result. Instead, they are a little bit different.

Hu moments of the sign on the right (first image):

[[  6.82589151e-01]
[  2.06816713e-01]
[  1.09088295e-01]
[  5.30020870e-03]
[ -5.85888607e-05]
[ -6.85171823e-04]
[ -1.13181280e-04]]

Hu moments of the sign on the right (second image):

[[  6.71793060e-01]
[  1.97521128e-01]
[  9.15619847e-02]
[  9.60179567e-03]
[ -2.44655863e-04]
[ -2.68791106e-03]
[ -1.45592441e-04]]

In this video: http://www.youtube.com/watch?v=O-hCEXi3ymU, around the 4th minute, he obtains exactly the same values for both images. Where am I wrong?

Here's my code:

nomeimg = "Sassatelli 1984 ruotato.jpg"
#nomeimg = "Sassatelli 1984 n. 165 mod1.jpg"
img = cv2.imread(nomeimg)

gray = cv2.imread(nomeimg,0)
ret,thresh = cv2.threshold(gray,127,255,cv2.THRESH_BINARY_INV) 
element = cv2.getStructuringElement(cv2.MORPH_CROSS,(4,4))
imgbnbin = thresh
imgbnbin = cv2.dilate(imgbnbin, element)

#find contour
contours,hierarchy=cv2.findContours(imgbnbin,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)

#Elimination small contours
Areacontours = list()
    area = cv2.contourArea(contours[i])
    if (area > 90 ):
        Areacontours.append(contours[i])
contours = Areacontours

print('found objects')
print(len(contours))

#contorus[3] for sing in first image
#contours[0] for sign in second image
print("humoments")
mom = cv2.moments(contours[0])
Humoments = cv2.HuMoments(mom)
print(Humoments)
1 Answer

I think your numbers are probably OK; the differences between them are moderately small. As the guy says in the video you link to (around the 3-minute mark):

To get some meaningful answers we take a log transform

so if we apply -np.sign(a)*np.log10(np.abs(a)) to the data you posted above, we get:

First image:

[[ 0.16584062]
 [ 0.68441437]
 [ 0.96222185]
 [ 2.27570703]
 [-4.23218495]
 [-3.16420051]
 [-3.9462254 ]]

Second image:

[[ 0.17276449]
 [ 0.70438644]
 [ 1.0382848 ]
 [ 2.01764754]
 [-3.61144437]
 [-2.57058511]
 [-3.83686117]]
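
A minimal standalone sketch of that transform (assuming the Hu moment values from your output are simply copied into flat NumPy arrays):

import numpy as np

# Hu moments copied from the question, flattened into 1-D arrays.
hu1 = np.array([6.82589151e-01, 2.06816713e-01, 1.09088295e-01,
                5.30020870e-03, -5.85888607e-05, -6.85171823e-04,
                -1.13181280e-04])
hu2 = np.array([6.71793060e-01, 1.97521128e-01, 9.15619847e-02,
                9.60179567e-03, -2.44655863e-04, -2.68791106e-03,
                -1.45592441e-04])

def log_transform(a):
    # Signed log10: keeps the sign of each invariant, compresses its magnitude.
    return -np.sign(a) * np.log10(np.abs(a))

print(log_transform(hu1))  # first image
print(log_transform(hu2))  # second image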

The fact that they are not identical is to be expected. You are starting out with rasterized images, which you then process quite a lot before extracting the contours you pass in.

From the OpenCV docs:

In case of raster images, the computed Hu invariants for the original and transformed images are a bit different.
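
If you want to convince yourself that the invariance does hold when no resampling is involved, here is a rough sketch with a made-up asymmetric shape rotated by exactly 90 degrees (the shape, image size, and the two-value findContours unpacking are assumptions, matching the OpenCV version your code implies):

import cv2
import numpy as np

# Made-up asymmetric shape on a blank raster (values are arbitrary).
img = np.zeros((200, 200), dtype=np.uint8)
pts = np.array([[30, 40], [160, 60], [120, 170], [50, 120]], np.int32)
cv2.fillPoly(img, [pts], 255)

# Exact 90-degree rotation: pixels are rearranged, not resampled.
rotated = np.ascontiguousarray(np.rot90(img))

def log_hu_of_largest_contour(binary):
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu))

print(log_hu_of_largest_contour(img))
print(log_hu_of_largest_contour(rotated))

With an exact quarter-turn the pixels are only rearranged, so the log-transformed values should come out essentially identical; rotating by an arbitrary angle resamples the raster and reintroduces small discrepancies like the ones you are seeing.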
