I'm working on a project that calculates the angle of an elbow joint from an image. The part I'm struggling with is the image processing.
I'm currently doing this in Python using an Intel RealSense R200 (though you can assume I'm simply working from an image input).
I'm attempting to detect the edges of the left image, such that I can get the center image, aiming to extract the outer contour (right image):
Knowing that the sides of the two pipes coming out of the elbow will be parallel (the two orange sides are parallel to each other, as are the two green sides)...
... I'm trying to construct 2 loci of points equidistant from the two pairs of colours and then 'extrapolate to the middle' in order to calculate the angle:
I've got as far as the second image and, unreliably, as far as the third. I'm very open to suggestions and would be hugely grateful for any assistance.
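For what it's worth, the 'extrapolate to the middle' geometry can be sketched independently of the image processing: average the unit direction vectors of each parallel pair to get that pair's midline direction, then take the angle between the two midlines. A minimal sketch, where the endpoints and function names are made up for illustration:

```python
import numpy as np

def midline_direction(p1, p2, q1, q2):
    # Direction of the locus equidistant from two parallel segments:
    # average the normalized direction vectors of the two sides.
    d1 = np.subtract(p2, p1).astype(float)
    d2 = np.subtract(q2, q1).astype(float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    if np.dot(d1, d2) < 0:  # make the two directions point the same way
        d2 = -d2
    return (d1 + d2) / 2.0

def elbow_angle(pair_a, pair_b):
    # Acute angle (degrees) between the two midline directions.
    u = midline_direction(*pair_a)
    v = midline_direction(*pair_b)
    cos_t = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# made-up endpoints: one parallel pair horizontal, one pair at 45 degrees
pair_a = ((0, 0), (10, 0), (0, 2), (10, 2))
pair_b = ((0, 0), (10, 10), (2, 0), (12, 10))
print(elbow_angle(pair_a, pair_b))  # close to 45.0
```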
I would use the following approach to try to find the four lines shown in the question.
1. Read the image, and convert it into grayscale
import cv2
import numpy as np
rgb_img = cv2.imread('pipe.jpg')
gray_img = cv2.cvtColor(rgb_img, cv2.COLOR_BGR2GRAY)
height, width = gray_img.shape
2. Add some white padding to the top of the image (just to have some extra background) -
white_padding = np.zeros((50, width, 3), dtype=np.uint8)  # match the image dtype
white_padding[:, :] = [255, 255, 255]
rgb_img = np.row_stack((white_padding, rgb_img))
Resultant image -
3. Invert the grayscale image, binarize it, and add black padding to the top -
gray_img = 255 - gray_img
gray_img[gray_img > 100] = 255
gray_img[gray_img <= 100] = 0
black_padding = np.zeros((50, width), dtype=np.uint8)
gray_img = np.row_stack((black_padding, gray_img))
4. Use morphological closing to fill the holes in the image -
kernel = np.ones((30, 30), np.uint8)
closing = cv2.morphologyEx(gray_img, cv2.MORPH_CLOSE, kernel)
5. Find edges in the image using Canny edge detection -
edges = cv2.Canny(closing, 100, 200)
6. Now, we can use OpenCV's HoughLinesP function to find lines in the image -
minLineLength = 50
maxLineGap = 100
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                        minLineLength=minLineLength, maxLineGap=maxLineGap)
all_lines = lines[:, 0]  # HoughLinesP returns shape (N, 1, 4)
for x1, y1, x2, y2 in all_lines:
    cv2.line(rgb_img, (x1, y1), (x2, y2), (0, 0, 255), 2)
7. Now, we have to find the two rightmost horizontal lines and the two bottommost vertical lines. For the horizontal lines, sort the lines by (x2, x1) in descending order. The first line in this sorted list will be the rightmost vertical line; skipping that, the next two lines will be the rightmost horizontal lines.
all_lines_x_sorted = sorted(all_lines, key=lambda k: (-k[2], -k[0]))
for x1, y1, x2, y2 in all_lines_x_sorted[1:3]:
    cv2.line(rgb_img, (x1, y1), (x2, y2), (0, 0, 255), 2)
8. Similarly, sort the lines by the y1 coordinate in descending order; the first two lines in the sorted list will be the bottommost vertical lines.
all_lines_y_sorted = sorted(all_lines, key=lambda k: -k[1])
for x1, y1, x2, y2 in all_lines_y_sorted[:2]:
    cv2.line(rgb_img, (x1, y1), (x2, y2), (0, 0, 255), 2)
9. Image with both sets of lines -
final_lines = all_lines_x_sorted[1:3] + all_lines_y_sorted[:2]
Thus, obtaining these four lines should let you finish the rest of your task.
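As a sketch of that remaining step (assuming, as constructed above, that the first two entries of final_lines belong to one pipe and the last two to the other; the function names and demo segments below are my own), the elbow angle can be estimated from the mean orientation of each pair:

```python
import numpy as np

def line_angle_deg(line):
    # Orientation of a segment (x1, y1, x2, y2) in degrees, folded into [0, 180)
    x1, y1, x2, y2 = line
    return np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0

def angle_from_lines(final_lines):
    # final_lines: the four segments from steps 7-8; the first two come from
    # one pipe, the last two from the other. Naive averaging is fine here as
    # long as each pair's orientations don't straddle the 0/180 wrap-around.
    a = np.mean([line_angle_deg(l) for l in final_lines[:2]])
    b = np.mean([line_angle_deg(l) for l in final_lines[2:]])
    diff = abs(a - b) % 180.0
    return min(diff, 180.0 - diff)

# hypothetical segments: two near-horizontal, two near-vertical
demo = [(0, 0, 100, 0), (0, 10, 100, 12), (50, 0, 52, 100), (60, 0, 60, 100)]
print(angle_from_lines(demo))  # a bit under 90 degrees
```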
This question already has many good answers, though none is accepted. I tried something a bit different, so I thought I'd post it even though the question is old; at least someone else might find it useful. Note that this works only if there is a nice uniform background, as in the sample image, and it will only give you a rough estimate.
For the sample image, the code gives
90.868604
42.180990
46.950407
The code is in C++; you can easily port it if you find it useful.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

// helper function:
// finds the cosine of the angle between the vectors
// from pt0->pt1 and from pt0->pt2
static double angle( Point2f pt1, Point2f pt2, Point2f pt0 )
{
double dx1 = pt1.x - pt0.x;
double dy1 = pt1.y - pt0.y;
double dx2 = pt2.x - pt0.x;
double dy2 = pt2.y - pt0.y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
int main(int argc, char* argv[])
{
Mat rgb = imread("GmHqQ.jpg");
Mat im;
cvtColor(rgb, im, COLOR_BGR2GRAY);
Ptr<FeatureDetector> detector = FastFeatureDetector::create();
vector<KeyPoint> keypoints;
detector->detect(im, keypoints);
drawKeypoints(im, keypoints, rgb, Scalar(0, 0, 255));
vector<Point2f> points;
for (KeyPoint& kp: keypoints)
{
points.push_back(kp.pt);
}
vector<Point2f> triangle(3);
minEnclosingTriangle(points, triangle);
for (size_t i = 0; i < triangle.size(); i++)
{
line(rgb, triangle[i], triangle[(i + 1) % triangle.size()], Scalar(255, 0, 0), 2);
printf("%f\n", acosf( angle(triangle[i],
triangle[(i + 1) % triangle.size()],
triangle[(i + 2) % triangle.size()]) ) * 180 / CV_PI);
}
return 0;
}
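If it helps, the same idea translates almost line-for-line to OpenCV's Python bindings (which expose FastFeatureDetector_create and minEnclosingTriangle as well). A rough sketch of such a port, with the triangle-angle math split out into pure NumPy; the function names are my own:

```python
import numpy as np

def interior_angles(tri):
    # Interior angles (degrees) of a triangle given as a 3x2 array of
    # vertices, mirroring the cosine-based helper in the C++ above.
    tri = np.asarray(tri, dtype=np.float64)
    angles = []
    for i in range(3):
        p0, p1, p2 = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        v1, v2 = p1 - p0, p2 - p0
        cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-10)
        angles.append(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
    return angles

def pipe_triangle_angles(gray_img):
    # FAST keypoints -> minimum enclosing triangle -> its interior angles.
    import cv2  # imported here so the geometry above is testable without OpenCV
    detector = cv2.FastFeatureDetector_create()
    pts = np.array([kp.pt for kp in detector.detect(gray_img, None)],
                   dtype=np.float32).reshape(-1, 1, 2)
    _, tri = cv2.minEnclosingTriangle(pts)  # returns (area, 3x1x2 vertices)
    return interior_angles(tri.reshape(3, 2))
```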