I am working on a project aimed at tracking the eye pupil. For this I have built a head-mounted system that captures images of the eye. With the hardware portion complete, I am stuck on the software part. I am using OpenCV. Please let me know what would be the most efficient way to track the pupil; HoughCircles did not perform well.
After that I also tried an HSV filter. Here are the code and a link to screenshots of the raw image and the processed one. Please help me resolve this issue. The link also contains a video of the eye pupil that I am using in this code.
https://picasaweb.google.com/118169326982637604860/16November2011?authuser=0&authkey=Gv1sRgCPKwwrGTyvX1Aw&feat=directlink
Code:
include "cv.h"
include"highgui.h"
// Convert a BGR frame to HSV and keep only the very dark pixels (the pupil).
IplImage* GetThresholdedImage(IplImage* img)
{
    IplImage *imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);
    IplImage *imgThresh = cvCreateImage(cvGetSize(img), 8, 1);
    // The low V (value) range selects the dark pupil region.
    cvInRangeS(imgHSV, cvScalar(0, 84, 0, 0), cvScalar(179, 256, 11, 0), imgThresh);
    cvReleaseImage(&imgHSV);
    return imgThresh;
}
int main(int argc, char **argv)
{
    IplImage *imgScribble = NULL;
    char c = 0;
    CvCapture *capture;
    capture = cvCreateFileCapture("main.avi");
    if(!capture)
    {
        printf("Capture could not be initialized\n");
        exit(0);
    }
    cvNamedWindow("Simple");
    cvNamedWindow("Thresholded");
    while(c != 32)  // loop until the space bar is pressed
    {
        IplImage *img = cvQueryFrame(capture);
        if(!img)
            break;
        if(imgScribble == NULL)
            imgScribble = cvCreateImage(cvGetSize(img), 8, 3);
        IplImage *timg = GetThresholdedImage(img);
        CvMoments *moments = (CvMoments*)malloc(sizeof(CvMoments));
        cvMoments(timg, moments, 1);
        double moment10 = cvGetSpatialMoment(moments, 1, 0);
        double moment01 = cvGetSpatialMoment(moments, 0, 1);
        double area = cvGetCentralMoment(moments, 0, 0);
        static int posX = 0;
        static int posY = 0;
        int lastX = posX;
        int lastY = posY;
        posX = moment10/area;
        posY = moment01/area;
        // Print it out for debugging purposes
        printf("position (%d,%d)\n", posX, posY);
        // We want to draw a line only if it is a valid position
        if(lastX > 0 && lastY > 0 && posX > 0 && posY > 0)
        {
            // Draw a yellow line from the previous point to the current point
            cvLine(imgScribble, cvPoint(posX, posY), cvPoint(lastX, lastY), cvScalar(0, 255, 255), 5);
        }
        // Add the scribbling image and the frame...
        cvAdd(img, imgScribble, img);
        cvShowImage("Simple", img);
        cvShowImage("Thresholded", timg);
        c = cvWaitKey(3);
        cvReleaseImage(&timg);
        free(moments);  // allocated with malloc, so free it (not delete)
    }
    cvReleaseCapture(&capture);
    cvDestroyWindow("Simple");
    cvDestroyWindow("Thresholded");
    return 0;
}
I am now able to track the eye and find the center coordinates of the pupil precisely.
First I threshold the image taken by the head-mounted camera, then run a contour-finding algorithm and compute the centroid of the resulting contours. This gives me the center coordinates of the eye pupil. The method works fine in real time and also detects eye blinks with very good accuracy.
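For reference, here is a minimal sketch of that threshold, contours, centroid pipeline, written with OpenCV's Python bindings rather than the C API above; the threshold value and the file name "eye_frame.png" are illustrative placeholders, not my exact settings:
import cv2

frame = cv2.imread("eye_frame.png")              # placeholder: one frame from the head-mounted camera
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# The pupil is the darkest blob, so a low fixed threshold isolates it (value is illustrative).
_, binary = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY_INV)
# [-2] keeps this working whether findContours returns 2 or 3 values.
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
if contours:
    pupil = max(contours, key=cv2.contourArea)   # take the largest dark blob as the pupil
    m = cv2.moments(pupil)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        print("pupil center:", (cx, cy))
else:
    print("no contour found - treat as a blink")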
Now my aim is to embed this feature into a game (a racing game): if I look left/right the car moves left/right, and if I blink the car slows down. How should I proceed? Would I need a game engine to do that?
I have heard of some open-source game engines compatible with Visual Studio 2010 (Unity etc.). Is that feasible? If yes, how should I proceed?
I am one of the developers of SimpleCV. We maintain an open-source Python library for computer vision, which you can download at SimpleCV.org. SimpleCV is great for solving these types of problems by hacking on the command line. I was able to extract the pupil in only a couple of lines of code. Here you go:
img = Image("eye4.jpg") # load the image
bm = BlobMaker() # create the blob extractor
# invert the image so the pupil is white, threshold the image, and invert again
# and then extract the information from the image
blobs = bm.extractFromBinary(img.invert().binarize(thresh=240).invert(),img)
if(len(blobs)>0): # if we got a blob
blobs[0].draw() # the zeroth blob is the largest blob - draw it
locationStr = "("+str(blobs[0].x)+","+str(blobs[0].y)+")"
# write the blob's centroid to the image
img.dl().text(locationStr,(0,0),color=Color.RED)
# save the image
img.save("eye4pupil.png")
# and show us the result.
img.show()
Here are the results.
So your next steps are to use some sort of tracker, like a Kalman filter, to track the pupil robustly. You may want to model the eye as a sphere and track the pupil's centroid in spherical coordinates (i.e. theta and phi). You will also want to write a bit of code to detect blink events so the system doesn't go all wonky when the user blinks. I suggest using a Canny edge detector to find the largest horizontal lines in the image and assuming those are the eyelids. I hope this helps, and please let us know how your work progresses.
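As a concrete starting point for the tracker, here is a rough sketch of a constant-velocity Kalman filter over the pupil centroid using OpenCV's cv2.KalmanFilter; the noise covariances are placeholder values you would tune for your camera, and track() / pupil_xy are just illustrative names:
import numpy as np
import cv2

# State is (x, y, vx, vy); the measurement is the raw centroid (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # placeholder, tune for your camera
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # placeholder, tune for your camera
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track(pupil_xy):
    # Call once per frame; pass None during a blink and the filter coasts on its prediction.
    prediction = kf.predict()
    if pupil_xy is not None:
        measurement = np.array([[pupil_xy[0]], [pupil_xy[1]]], np.float32)
        kf.correct(measurement)
    return float(prediction[0, 0]), float(prediction[1, 0])
Feeding the smoothed output into your left/right decision, instead of the raw centroid, should remove most of the jitter.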