I'm new to image processing and OpenCV, but so far the easy-to-understand functions and good documentation have let me try out and understand, to some extent, code such as face detection.
When I detect faces in the webcam video stream, the program draws a square around each face. I want the area of the image inside that square to be extracted as a separate image. With what I've been doing so far, I get a rectangular region of the image in which the face isn't even present.
I've used cv.GetSubRect() and understand how it works. For example:
img = cv.LoadImage(r"C:\opencv\me.jpg")
sub = cv.GetSubRect(img, (700, 525, 200, 119))
cv.NamedWindow("result", 1)
cv.ShowImage("result", sub)
But I can't get the face region in my face and eye detection program. Here's what I've done:
min_size = (17,17)
#max_size = (30,30)
image_scale = 2
haar_scale = 2
min_neighbors = 2
haar_flags = 0
# Allocate the temporary images
gray = cv.CreateImage((image.width, image.height), 8, 1)
smallImage = cv.CreateImage((cv.Round(image.width / image_scale), cv.Round(image.height / image_scale)), 8, 1)
#eyeregion = cv.CreateImage((cv.Round(image.width / image_scale),cv.Round (image.height / image_scale)), 8 ,1)
#cv.ShowImage("smallImage",smallImage)
# Convert color input image to grayscale
cv.CvtColor(image, gray, cv.CV_BGR2GRAY)
# Scale input image for faster processing
cv.Resize(gray, smallImage, cv.CV_INTER_LINEAR)
# Equalize the histogram
cv.EqualizeHist(smallImage, smallImage)
# Detect the faces
faces = cv.HaarDetectObjects(smallImage, faceCascade, cv.CreateMemStorage(0),
                             haar_scale, min_neighbors, haar_flags, min_size)
                             #, max_size)
# If faces are found
if faces:
    for ((x, y, w, h), n) in faces:
        # the input to cv.HaarDetectObjects was resized, so scale the
        # bounding box of each face and convert it to two CvPoints
        pt1 = (int(x * image_scale), int(y * image_scale))
        pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
        cv.Rectangle(image, pt1, pt2, cv.RGB(255, 0, 0), 3, 4, 0)
        face_region = cv.GetSubRect(image, (x, int(y + (h/4)), w, int(h/2)))
        cv.ShowImage("face", face_region)
        cv.SetImageROI(image, (pt1[0],
                               pt1[1],
                               pt2[0] - pt1[0],
                               int((pt2[1] - pt1[1]) * 0.7)))
        eyes = cv.HaarDetectObjects(image, eyeCascade,
                                    cv.CreateMemStorage(0),
                                    eyes_haar_scale, eyes_min_neighbors,
                                    eyes_haar_flags, eyes_min_size)
        if eyes:
            # For each eye found
            for eye in eyes:
                # eye[0][0], eye[0][1] are the x, y co-ordinates of the top-left
                # corner of the detected eye; eye[0][2], eye[0][3] are the width
                # and height of the CvRect of the detected eye region
                # Draw a rectangle around the eye
                ept1 = (eye[0][0], eye[0][1])
                ept2 = ((eye[0][0] + eye[0][2]), (eye[0][1] + eye[0][3]))
                cv.Rectangle(image, ept1, ept2, cv.RGB(0, 0, 255), 1, 8, 0)  # This is working..
                ea = ept1[0]
                eb = ept1[1]
                ec = ept2[0] - ept1[0]
                ed = ept2[1] - ept1[1]
                # I've tried multiplying by image_scale to get the eye region within
                # the "eye" window, but I still just get a top-left area of the
                # image, above my head. It does make sense to multiply by image_scale, right?
                eyeregion = cv.GetSubRect(image, (ea, eb, ec, ed))
                cv.ShowImage("eye", eyeregion)
I believe this code is based on OpenCV/samples/python. There is a small mistake in the co-ordinates you pass to cv.GetSubRect: (x, y, w, h) are co-ordinates in the resized (small) image, while pt1 and pt2 have already been scaled back to the full image. Please replace the two face_region lines of the above program with the following:
a=pt1[0]
b=pt1[1]
c=pt2[0]-pt1[0]
d=pt2[1]-pt1[1]
face_region = cv.GetSubRect(image,(a,b,c,d))
cv.ShowImage("face",face_region)
Make sure you have no false detections or multiple detections.
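If you do get false or multiple detections, the parameters you are already passing to cv.HaarDetectObjects are the place to tighten things. The values below are only a suggestion (untested on your stream), trading a few missed small faces for fewer false positives:
# stricter detection settings (suggested values, tune for your camera)
min_size = (30, 30)                       # ignore very small candidate faces
min_neighbors = 4                         # require more overlapping detections
haar_flags = cv.CV_HAAR_DO_CANNY_PRUNING  # skip low-edge regions, also faster
faces = cv.HaarDetectObjects(smallImage, faceCascade, cv.CreateMemStorage(0),
                             haar_scale, min_neighbors, haar_flags, min_size)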