Track Eye Pupil Position with Webcam, OpenCV, and Python

I am trying to build a robot that I can control with basic eye movements. A webcam points at my face, and depending on the position of my pupil, the robot will move a certain way. If the pupil is near the top, bottom, left, or right of the eye, the robot will move forward, backward, left, or right, respectively.

My original plan was to use an eye Haar cascade to find my left eye, then run HoughCircles on the eye region to find the center of the pupil. I would determine where the pupil sits in the eye from the distance between the Hough circle's center and the borders of the detected eye region.
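As a sketch of that last step, this is roughly how the pupil position inside the detected eye box could be mapped to a movement command. It assumes the pupil center (cx, cy) in eye-ROI coordinates and the eye box size (ew, eh) are already available; the function name and the 0.35/0.65 thresholds are placeholders I made up, not tuned values.

def pupil_to_command(cx, cy, ew, eh):
    # normalize the pupil center to the [0, 1] range within the eye box
    nx, ny = cx / float(ew), cy / float(eh)
    if ny < 0.35:
        return "forward"   # pupil near the top of the eye
    if ny > 0.65:
        return "backward"  # pupil near the bottom
    if nx < 0.35:
        return "left"      # pupil near the left border
    if nx > 0.65:
        return "right"     # pupil near the right border
    return "stop"          # pupil roughly centered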

So for the first part of my code, I'm hoping to track the center of the eye pupil, as seen in this video: https://youtu.be/aGmGyFLQAFM?t=38

But when I run my code, it cannot consistently find the center of the pupil. The Hough circle is often drawn in the wrong area. How can I make my program consistently find the center of the pupil, even when the eye moves?

Is it possible/better/easier for me to tell my program where the pupil is at the beginning? I've looked at some other eye-tracking methods (e.g. this paper: https://arxiv.org/ftp/arxiv/papers/1202/1202.6517.pdf), but I cannot form a general algorithm. If anyone could help me form one, that would be much appreciated!

import numpy as np
import cv2

face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_righteye_2splits.xml')

#number signifies camera
cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    #faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    eyes = eye_cascade.detectMultiScale(gray)
    for (ex,ey,ew,eh) in eyes:
        cv2.rectangle(img,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
        roi_gray2 = gray[ey:ey+eh, ex:ex+ew]
        roi_color2 = img[ey:ey+eh, ex:ex+ew]
        circles = cv2.HoughCircles(roi_gray2, cv2.HOUGH_GRADIENT, 1, 20,
                                   param1=50, param2=30, minRadius=0, maxRadius=0)
        try:
            # round the detected circles to integer pixel coordinates before drawing
            circles = np.uint16(np.around(circles))
            for i in circles[0, :]:
                # draw the outer circle
                cv2.circle(roi_color2, (i[0], i[1]), i[2], (255, 255, 255), 2)
                print("drawing circle")
                # draw the center of the circle
                cv2.circle(roi_color2, (i[0], i[1]), 2, (255, 255, 255), 3)
        except Exception as e:
            print(e)
    cv2.imshow('img',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
asked Aug 21 '17 by user3502541

2 Answers

From some work I did before, I can see two alternatives:

  1. Train a Haar detector to detect the eyeball, using training images with the pupil at the center and the eyeball's width as the sample width. I found this better than using Hough circles or the original eye detector in OpenCV (the one used in your code).

  2. Use dlib's face landmark points to estimate the eye region. Then use the contrast between the white and dark regions of the eyeball, together with contours, to estimate the center of the pupil. This produced much better results; a rough sketch of this approach follows below.
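
For option 2, here is a minimal sketch of the idea, assuming dlib's standard 68-point landmark model (shape_predictor_68_face_landmarks.dat, downloaded separately) and OpenCV 4.x; the threshold value of 40 is a placeholder that has to be tuned for your lighting:

import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def pupil_center(gray, face_rect):
    # landmark indices 36-41 outline one eye in the 68-point model
    shape = predictor(gray, face_rect)
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(36, 42)],
                   dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    eye = gray[y:y + h, x:x + w]
    # the dark pupil/iris stands out against the white of the eye;
    # the threshold (40) is a placeholder and needs tuning
    _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)
    # OpenCV 4.x return signature for findContours
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    # centroid of the largest dark blob, in full-image coordinates
    return (x + int(m["m10"] / m["m00"]), y + int(m["m01"] / m["m00"]))

Here face_rect would come from dlib's detector, e.g. the first element of detector(gray).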

answered Oct 27 '22 by Totoro


Just replace the line where you call HoughCircles with this:

circles = cv2.HoughCircles(roi_gray2, cv2.HOUGH_GRADIENT, 1, 200, param1=200, param2=1, minRadius=0, maxRadius=0)

I just changed a couple of parameters and it gives me more accuracy.
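
For reference, here is the same call with each argument annotated; the comments describe the generic cv2.HoughCircles parameters and are my notes, not part of the original answer:

circles = cv2.HoughCircles(
    roi_gray2,
    cv2.HOUGH_GRADIENT,  # detection method
    1,                   # dp: inverse ratio of accumulator to image resolution
    200,                 # minDist: minimum distance between detected circle centers
    param1=200,          # higher Canny edge threshold used internally
    param2=1,            # accumulator threshold; lower values accept weaker circles
    minRadius=0,         # no lower bound on the circle radius
    maxRadius=0)         # 0 leaves the upper bound up to OpenCV

With minDist=200 the small eye ROI can effectively hold only one detected circle, and the very low param2 ensures something is always found, which is probably why it looks more stable.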

Detailed information about the parameters is in the OpenCV documentation for HoughCircles.

answered Oct 26 '22 by Петро Здебський