I would like to develop a Python OpenCV script to duplicate/improve on a Gimp procedure I have developed. The goal of the procedure is to provide an x,y point array that follows the dividing line between grass and hard surfaces. This array will allow me to finish my 500 lb 54" wide pressure washing robot, which has a Raspberry Pi Zero (and camera), so that it can follow that edge at a speed of a couple inches per second. I will be monitoring and/or controlling the bot via its wifi video stream and an iPhone app while I watch TV on my couch.
Here is a sample original image (60x80 pixels):
The Gimp procedure is:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg"
width="0.833333in" height="1.11111in"
viewBox="0 0 60 80">
<path id="Selection"
fill="none" stroke="black" stroke-width="1"
d="M 60.00,0.00
C 60.00,0.00 60.00,80.00 60.00,80.00
60.00,80.00 29.04,80.00 29.04,80.00
29.04,80.00 29.04,73.00 29.04,73.00
29.04,73.00 30.00,61.00 30.00,61.00
30.00,61.00 30.00,41.00 30.00,41.00
30.00,41.00 29.00,30.85 29.00,30.85
29.00,30.85 24.00,30.85 24.00,30.85
24.00,30.85 0.00,39.00 0.00,39.00
0.00,39.00 0.00,0.00 0.00,0.00
0.00,0.00 60.00,0.00 60.00,0.00 Z" />
</svg>
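Since the end goal is an x,y point array, the anchor points can be pulled straight out of the exported path above. This is just a quick sanity-check sketch, not part of the pipeline; it assumes Gimp's degenerate-cubic export shown here, where each segment's two control points duplicate its anchors, so every third coordinate pair after the initial M is a real vertex:

```python
import re

# The path data from the SVG above
d = """M 60.00,0.00
C 60.00,0.00 60.00,80.00 60.00,80.00
60.00,80.00 29.04,80.00 29.04,80.00
29.04,80.00 29.04,73.00 29.04,73.00
29.04,73.00 30.00,61.00 30.00,61.00
30.00,61.00 30.00,41.00 30.00,41.00
30.00,41.00 29.00,30.85 29.00,30.85
29.00,30.85 24.00,30.85 24.00,30.85
24.00,30.85 0.00,39.00 0.00,39.00
0.00,39.00 0.00,0.00 0.00,0.00
0.00,0.00 60.00,0.00 60.00,0.00 Z"""

# All "x,y" coordinate pairs in order of appearance
pairs = [tuple(map(float, m.groups()))
         for m in re.finditer(r'(-?[\d.]+),(-?[\d.]+)', d)]
# Pair 0 is the M anchor; each cubic contributes 3 pairs whose last
# one is the segment endpoint, so take every 3rd pair from index 3.
anchors = [pairs[0]] + pairs[3::3]
print(anchors)
```

The result is the closed polygon Gimp traced (the last point repeats the first).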
My goal for execution time for this OpenCV procedure on my Pi Zero is about 1-2 seconds or less (currently taking ~0.18 secs).
I have cobbled together something that sort of produces roughly the same points as the Gimp XML file. I am not at all sure it matches what Gimp does with regard to the hue range of the mask. I have also not yet figured out how to apply the minimum radius to the mask; I am fairly sure I will need that when the mask picks up a 'grass' clump on the edge of the hard surface. Here are all the contour points so far (ptscanvas.bmp):
As of 7/6/2018 5:08 pm EST, here is the 'still messy' script that sort of works and found those points:
import numpy as np
import time, sys, cv2
img = cv2.imread('2-60.JPG')
cv2.imshow('Original',img)
# get a blank pntscanvas for drawing points on
pntscanvas = np.zeros(img.shape, np.uint8)
print(sys.version)
if sys.version_info[0] < 3:
    raise Exception("Python 3 or a more recent version is required.")

def doredo():
    start_time = time.time()
    # Use kmeans to convert to 2 color image
    hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    Z = hsv_img.reshape((-1, 3))
    Z = np.float32(Z)
    # define criteria, number of clusters (K)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    K = 2
    ret, label, center = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
    # Create a mask by selecting a hue range around the lowest hue of the 2 colors
    if center[0, 0] < center[1, 0]:
        hueofinterest = center[0, 0]
    else:
        hueofinterest = center[1, 0]
    hsvdelta = 8
    lowv = np.array([hueofinterest - hsvdelta, 0, 0])
    higv = np.array([hueofinterest + hsvdelta, 255, 255])
    mask = cv2.inRange(hsv_img, lowv, higv)
    # Extract contours from the mask
    ret, thresh = cv2.threshold(mask, 250, 255, cv2.THRESH_BINARY_INV)
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    # Find the biggest area contour
    cnt = contours[0]
    max_area = cv2.contourArea(cnt)
    for cont in contours:
        if cv2.contourArea(cont) > max_area:
            cnt = cont
            max_area = cv2.contourArea(cont)
    # Make array of all edge points of the largest contour, named allpnts
    perimeter = cv2.arcLength(cnt, True)
    epsilon = 0.01 * cv2.arcLength(cnt, True)  # 0.0125*cv2.arcLength(cnt,True) seems to work better
    allpnts = cv2.approxPolyDP(cnt, epsilon, True)
    end_time = time.time()
    print("Elapsed cv2 time was %g seconds" % (end_time - start_time))
    # Convert back into uint8, and make 2 color image for saving and showing
    center = np.uint8(center)
    res = center[label.flatten()]
    res2 = res.reshape((hsv_img.shape))
    # Save, show and print stuff
    cv2.drawContours(pntscanvas, allpnts, -1, (0, 0, 255), 2)
    cv2.imwrite("pntscanvas.bmp", pntscanvas)
    cv2.imshow("pntscanvas.bmp", pntscanvas)
    print('allpnts')
    print(allpnts)
    print("center")
    print(center)
    print('lowv', lowv)
    print('higv', higv)
    cv2.imwrite('mask.bmp', mask)
    cv2.imshow('mask.bmp', mask)
    cv2.imwrite('CvKmeans2Color.bmp', res2)
    cv2.imshow('CvKmeans2Color.bmp', res2)

print("Waiting for 'Spacebar' to Do/Redo OR 'Esc' to Exit")
while (1):
    ch = cv2.waitKey(50)
    if ch == 27:
        break
    if ch == ord(' '):
        doredo()
cv2.destroyAllWindows()
Left to do:
1a. EDIT: As of July 9, 2018, I have been concentrating on this issue as it seems to be my biggest problem. I am unable to have cv2.findContours smooth out the 'edge grass' as well as Gimp does with its magic wand radius feature. On the left is a two-color 'problem' mask with the resulting 'Red' points overlaid, found directly with cv2.findContours; on the right, the Gimp radiused mask has been applied to the left image's 'problem' mask before cv2.findContours is run, producing the right image and points:
I have tried looking at Gimp's source code, but it is well beyond my comprehension, and I cannot find any OpenCV routines that can do this. Is there a way to apply a minimum-radius smoothing to the 'non-edge' pixels of an edge mask in OpenCV? By 'non-edge' I mean that, as you can see, Gimp does not radius these 'corners' (inside the yellow highlight) but only seems to apply the radius smoothing to edges 'inside' the image. (Note: Gimp's radiusing algorithm eliminates all the small islands in the mask, which means you don't have to find the largest-area contour after cv2.findContours is applied to get the points of interest.)
EDIT: As of 5 pm EST, July 12, 2018: I have resorted to the language I can most easily create code with: VB6 (ugh, I know). Anyway, I have been able to make a line/edge-smoothing routine that works at the pixel level to do the minimum-radius mask I want. It works like a Pac-Man roaming along the right side of an edge, as close as it can, leaving a breadcrumb trail on the Pac's left side. Not sure I can make a Python script from that code, but at least I have a place to start, as nobody has confirmed there is an OpenCV alternative way to do it. If anyone is interested, here is a compiled .exe file that should run on most Windows systems without an install (I think). Here is a screenshot from it (Blue/GreenyBlue pixels are the unsmoothed edge and Green/GreenyBlue pixels are the radiused edge):
You can get the gist of my process logic from this VB6 routine:
Sub BeginFollowingEdgePixel()
    Dim lastwasend As Integer
    wasinside = False
    While (1)
        If HitFrontBumper Then
            GoTo Hit
        Else
            Call MoveForward
        End If
        If circr = orgpos(0) And circc = orgpos(1) Then
            orgpixr = -1 'resets Start/Next button to begin at first found blue edge pixel
            GoTo outnow 'this condition indicates that you have followed all blue edge pixels
        End If
        Call PaintUnderFrontBumperWhite
        Call PaintGreenOutsideLeftBumper
nomove:
        If NoLeftBumperContact Then
            Call MoveLeft
            Call PaintUnderLeftBumperWhite
            Call PaintGreenOutsideLeftBumper
            If NoLeftBumperContact Then
                If BackBumperContact Then
                    Call MakeLeftTheNewForward
                End If
            End If
        ElseIf HitFrontBumper Then
Hit:
            Call PaintAheadOfForwardBumperGreen
            Call PaintGreenOutsideLeftSide
            Call MakeRightTheNewForward
            GoTo nomove
        Else
            Call PaintAheadOfForwardBumperGreen
            Call PaintGreenOutsideLeftSide
            Call PaintUnderFrontBumperWhite
        End If
        If (circr = 19 + circrad Or circr = -circrad Or circc = 19 + circrad Or circc = -circrad) Then
            If lastwasend = 0 And wasinside = True Then
                'finished following one edge pixel
                lastwasend = 1
                GoTo outnow
                Call redrawit
            End If
        Else
            If IsCircleInsideImage Then
                wasinside = True
            End If
            lastwasend = 0
        End If
        Pause (pausev) 'seconds between moves - Pressing Esc advances early
    Wend
outnow:
End Sub
Okay, I finally had time to look at this. I will address each point of yours and then show the changes in the code. Let me know if you have any questions, or suggestions.
Looks like you were able to do this yourself well enough.
1.a. This can be taken care of by blurring the image before doing any other processing to it. The following changes to the code were made to accomplish this:
...
    start_time = time.time()
    blur_img = cv2.GaussianBlur(img, (5, 5), 0)  # here
    # Use kmeans to convert to 2 color image
    hsv_img = cv2.cvtColor(blur_img, cv2.COLOR_BGR2HSV)
...
I have changed the code to remove points that lie on a line that perfectly follows the side of the image, since it should be basically impossible for a real grass edge to coincide with the image border.
...
    allpnts = cv2.approxPolyDP(cnt, epsilon, True)
    new_allpnts = []
    for i in range(len(allpnts)):
        a = (i - 1) % len(allpnts)
        b = (i + 1) % len(allpnts)
        if ((allpnts[i, 0, 0] == 0 or allpnts[i, 0, 0] == (img.shape[1] - 1)) and
                (allpnts[i, 0, 1] == 0 or allpnts[i, 0, 1] == (img.shape[0] - 1))):
            tmp1 = allpnts[a, 0] - allpnts[i, 0]
            tmp2 = allpnts[b, 0] - allpnts[i, 0]
            if not (0 in tmp1 and 0 in tmp2):
                new_allpnts.append(allpnts[i])
        else:
            new_allpnts.append(allpnts[i])
...
    cv2.drawContours(pntscanvas, new_allpnts, -1, (0, 0, 255), 2)
...
Due to how the contours are found in the image, we can simply flip the thresholding function and find the contour around the other part of the image. Changes are below:
...
    # Extract contours from the mask
    ret, thresh = cv2.threshold(mask, 250, 255, cv2.THRESH_BINARY)  # here
    im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
...
As for the color differences: you have converted your image into HSV format, and before saving you are not switching it back to BGR. The change to HSV does give you better results, so I would keep it, but it is a different palette. Changes are below:
...
    cv2.imshow('mask.bmp', mask)
    res2 = cv2.cvtColor(res2, cv2.COLOR_HSV2BGR)
    cv2.imwrite('CvKmeans2Color.bmp', res2)
    cv2.imshow('CvKmeans2Color.bmp', res2)
...
Disclaimer: These changes are based on the Python code from above. Any changes to the Python code that are not in the provided code may render my changes ineffective.