 

Real-time camera self-calibration OpenCV

I'm developing an augmented reality application (a virtual try-on) using OpenCV + OpenGL + QtCreator, and I'm stuck at calibrating the camera. I found a lot of resources about the calibration process in OpenCV using the chessboard pattern, but I need to implement some sort of self-calibration, so those aren't helpful. I know it can be done, but I didn't really find anything useful. I found this thesis (http://www.eidelen.ch/thesis/MscThesisV1.0.pdf) in which a self-calibration process is described (chapter 4), but I'm not sure if that's the way to go. What I want to achieve can be seen at http://www.ray-ban.com/usa/virtual-mirror. I just want to know how they calibrate.

joanna asked Nov 10 '22 08:11

1 Answer

For camera calibration you need to know a set of real coordinates in the world. The chessboard gives you that since you know the size and shape of the squares, so you can correlate pixel locations with measurements in the real world.

You'll see that in Schneider's thesis he uses a 3D tracking unit (Figure 3.1) to give him the real-world coordinates of the points. Once he has those, it's a similar problem to the chessboard.

As for the virtual mirror example, I don't know for sure, but I'd guess they are using a face detection system and thus don't need a calibrated camera. Something like the Viola-Jones detector: http://www.vision.caltech.edu/html-files/EE148-2005-Spring/pprs/viola04ijcv.pdf

For your system that might make more sense. Lots of people do face detection in OpenCV, so there's plenty around on that. You might start here: http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html

abarry answered Dec 08 '22 21:12