 

Precision Measurement with OpenCV and Python

I am working on a machine vision project using OpenCV and Python.

Objective: measure the dimensions of a component with high accuracy.

Main Hardware :

  • Basler 5 MP camera (acA2500-14gm)

  • A red backlight (100 mm x 100 mm); my component is around 60 mm across

Experiment

Since I am looking at very tight tolerance limits, I first did a precision study. I kept the component on the backlight source and took 100 images without moving the part (imagine a video with 100 frames). I measured the outer diameter (OD) in all 100 images. My mm/pixel ratio is 0.042. I computed the standard deviation of the measurements to estimate the precision, which turned out to be around 0.03 mm, which is bad. Neither the component nor the setup was touched, so I was expecting a repeatability of about 0.005 mm, but I am off by an order of magnitude. I am using OpenCV's Hough circle transform to calculate the OD of the component.

Code:

import cv2
import glob
import os
import numpy as np
import pandas as pd

def find_circles(image, dp=1.7, minDist=100, param1=50, param2=50, minRadius=0, maxRadius=0):
    """Finds circular objects in an image using the Hough circle transform.

    Keyword arguments
    image -- uint8: numpy ndarray of a single image (no default).
    dp -- Inverse ratio of the accumulator resolution to the image resolution (default 1.7).
    minDist -- Minimum distance in pixels between the centers of the detected circles (default 100).
    param1 -- First method-specific parameter (default 50).
    param2 -- Second method-specific parameter (default 50).
    minRadius -- Minimum circle radius in pixels (default 0).
    maxRadius -- Maximum circle radius in pixels (default 0).

    Output
    circles -- numpy ndarray of shape (n, 3): each row is (x, y, radius) in pixels.
    Raises ValueError if no circle is detected.
    """

    circles = cv2.HoughCircles(image,
                               cv2.HOUGH_GRADIENT,
                               dp=dp,
                               minDist=minDist,
                               param1=param1,
                               param2=param2,
                               minRadius=minRadius,
                               maxRadius=maxRadius)
    if circles is not None:
        circles = circles.reshape(circles.shape[1], circles.shape[2])
        return circles
    raise ValueError("ERROR: circle not detected; try tweaking the parameters or the min and max radius")

def find_od(image_path_list):
    image_path_list.sort()
    print(len(image_path_list))  # number of images found
    result_df = pd.DataFrame(columns=["component_name", "measured_dia_pixels", "center_in_pixels"])
    for i, name in enumerate(image_path_list):
        img = cv2.imread(name, 0)  # read the image in grayscale
        ret, thresh_img = cv2.threshold(img, 50, 255, cv2.THRESH_BINARY_INV)  # binarize the backlit silhouette
        thresh_img = cv2.bilateralFilter(thresh_img, 5, 91, 91)  # smooth the binary mask before edge detection
        edges = cv2.Canny(thresh_img, 100, 200)  # edge map fed to the Hough transform
        circles = find_circles(edges, dp=1.7, minDist=100, param1=50, param2=30, minRadius=685, maxRadius=700)
        circles = np.squeeze(circles)
        result_df.loc[i] = os.path.basename(name), circles[2] * 2, (circles[0], circles[1])
    result_df.sort_values("component_name", inplace=True)
    result_df.reset_index(drop=True, inplace=True)
    return result_df

df = find_od(glob.glob("./images/*"))
mean_d = df.measured_dia_pixels.mean()
std_deviation = np.sqrt(np.mean(np.square(df.measured_dia_pixels - mean_d)))  # population standard deviation

mm_per_pixel = 0.042
print(std_deviation * mm_per_pixel)

OUTPUT: 0.024

The image of the component:


Since the images are taken without disturbing the setup, I expect the measurement's repeatability to be around 0.005 mm (5 microns) over the 100 images, but this is not the case. Is this a limitation of the Hough circle transform, or am I missing something here?

Asked Mar 28 '18 by Abhijit Balaji


People also ask

How do we measure items on drawing with OpenCV?

The most common way is to perform a checkerboard camera calibration using OpenCV. Doing so will remove radial distortion and tangential distortion, both of which impact the output image, and therefore the output measurement of objects in the image.
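
A minimal sketch of that workflow, assuming a printed checkerboard with 9 x 6 inner corners and calibration images in a hypothetical ./calib/ folder (neither of which is part of the original question):

import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column of the printed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []  # 3D board points and their 2D image projections
for fname in glob.glob("./calib/*.png"):  # hypothetical calibration images
    gray = cv2.imread(fname, 0)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# The camera matrix and distortion coefficients fall out of the calibration
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

undistorted = cv2.undistort(gray, K, dist)  # remove lens distortion before measuring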

How does OpenCV calculate distance?

Steps for distance estimation: first capture a reference image, measuring the distance from the object (face) to the camera and noting it down. Also measure the object's (face's) real width, making sure the same measurement units are used for the reference distance and the object width. From the reference image you can compute an apparent focal length, which is then reused to estimate the distance in new images.
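
A hedged illustration of that triangle-similarity idea (the function names and numbers below are made up for the example, not taken from any library):

def focal_length_px(known_distance_mm, known_width_mm, width_in_pixels):
    # Focal length in pixels from the reference image: F = (P * D) / W
    return (width_in_pixels * known_distance_mm) / known_width_mm

def distance_mm(focal_px, known_width_mm, width_in_pixels):
    # Distance to the object in a new image: D = (W * F) / P
    return (known_width_mm * focal_px) / width_in_pixels

F = focal_length_px(500.0, 60.0, 1400.0)  # reference: 60 mm part, 500 mm away, 1400 px wide
print(distance_mm(F, 60.0, 1200.0))       # same part seen 1200 px wide -> roughly 583 mm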

How do you extract the height and width of an image using OpenCV?

When working with OpenCV in Python, images are stored as NumPy ndarrays. To get the image size, use ndarray.shape, then index into the result to get the height, width and number of channels.
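
For example (the file name is hypothetical):

import cv2

img = cv2.imread("part.png")      # colour image: shape is (height, width, channels)
height, width, channels = img.shape
print(height, width, channels)

gray = cv2.imread("part.png", 0)  # grayscale image: shape is just (height, width)
print(gray.shape)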


1 Answer

Hough is designed for detecting, not for quantifying. If you want precise measures, you'll have to use a library designed for that. OpenCV is not meant for quantification, and consequently has poor capabilities there.

A long time ago I wrote a paper about more precise estimates of size using the Radon transform (the Hough transform is one way of discretizing the Radon transform; it is fast for some cases, but not precise):

  • C.L. Luengo Hendriks, M. van Ginkel, P.W. Verbeek and L.J. van Vliet, The generalized Radon transform: sampling, accuracy and memory considerations, Pattern Recognition 38(12):2494–2505, 2005, doi:10.1016/j.patcog.2005.04.018. Here is a PDF.

But because your setup is so well controlled, you don't really need all that to get a precise measure. Here is a very straightforward Python script to quantify these holes:

import diplib as dip
import math

# Load image and set pixel size
img = dip.ImageRead('N6uqw.jpg')
img.SetPixelSize(0.042, "mm")

# Extract object
obj = ~dip.Threshold(dip.Gauss(img))[0]
obj = dip.EdgeObjectsRemove(obj)

# Remove noise
obj = dip.Opening(dip.Closing(obj,9),9)

# Measure object area
lab = dip.Label(obj)
msr = dip.MeasurementTool.Measure(lab,img,['Size'])
objectArea = msr[1]['Size'][0]

# Measure holes
obj = dip.EdgeObjectsRemove(~obj)
lab = dip.Label(obj)
msr = dip.MeasurementTool.Measure(lab,img,['Size'])
sz = msr['Size']
holeAreas = []
for ii in sz.Objects():
    holeAreas.append(sz[ii][0])

# Add hole areas to main object area
objectArea += sum(holeAreas)

print('Object diameter = %f mm' % (2 * math.sqrt(objectArea / math.pi)))
for a in holeAreas:
    print('Hole diameter = %f mm' % (2 * math.sqrt(a / math.pi)))

This gives me the output:

Object diameter = 57.947768 mm
Hole diameter = 6.540086 mm
Hole diameter = 6.695357 mm
Hole diameter = 15.961935 mm
Hole diameter = 6.511002 mm
Hole diameter = 6.623011 mm

Note that there are a lot of assumptions in the code above. There is also an issue with the camera not being centered exactly above the object: you can see the right side of the holes reflecting light. This will certainly add imprecision to these measurements. But note also that I didn't use the knowledge that the object is round when measuring it (only when converting area to diameter). It might be possible to use the roundness criterion to overcome some of the imaging imperfections.
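
As a rough sketch of that roundness idea (not part of the script above; the 'Roundness' and 'Radius' feature names are taken from the DIPlib documentation and may differ between versions), the measurement tool can report shape features alongside the area, so the area-based diameter can be cross-checked against a radius-based one:

import diplib as dip

img = dip.ImageRead('N6uqw.jpg')
img.SetPixelSize(0.042, "mm")

# Same segmentation as in the script above
obj = ~dip.Threshold(dip.Gauss(img))[0]
obj = dip.EdgeObjectsRemove(obj)
obj = dip.Opening(dip.Closing(obj, 9), 9)

lab = dip.Label(obj)
# 'Roundness' and 'Radius' are assumed feature names; adjust if your DIPlib version differs
msr = dip.MeasurementTool.Measure(lab, img, ['Size', 'Roundness', 'Radius'])
print(msr)  # 'Radius' reports max/mean/min/std dev of the centroid-to-boundary distance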

The code above uses DIPlib, a C++ library with a rather rough Python interface. The Python syntax is a direct translation of the C++ syntax, and some things are still quite awkward and non-Pythonic. But it is aimed specifically at quantification, and hence I recommend you try it out for your application.

Answered Oct 05 '22 by Cris Luengo