I have recently been working on a solution to an object-tracking problem. I need to identify and track 3D objects that may move on a 2D plane, i.e. translation in x and y, and rotation around z. The object to be tracked is known beforehand, and any desired information may be extracted from it. It is also assumed that lighting conditions will not change severely and that the background will remain relatively stationary. The objects to be tracked will typically not be of a single colour, so tracking by colour is not an option.
I have successfully implemented a prototype for tracking multiple 2D objects using background subtraction and dynamic template matching. I now want to expand to tracking 3D objects, but so far I have been disappointed in what I have found and achieved. I will list some of the attempts I have made, in the hope that someone may shed some light.
1.) Dynamic template matching: I would have the user select the object in a video frame, after which a search area is defined around the object. The object is then searched for inside this area. This clip gave me the idea originally. Unfortunately, this did not really work for me, as the object is lost when it rotates (turns its back to the camera). I also tried to continuously update the template whenever the object was found, but this led to the template drifting onto another (foreign) object whenever the intended object became occluded. (A rough sketch of this setup is included after the list.)
2.) Lucas-Kanade optical flow: I used OpenCV's goodFeaturesToTrack to find good points to track, and attempted to follow these points through multiple frames using calcOpticalFlowPyrLK. However, the performance of this algorithm was a little disappointing: when I applied it to the Oxford Corridor data set, the points I had originally detected were soon lost. (A sketch of how I call these two functions also follows below.)
3.) SURF: I tried to detect features with SURF, but the problem here was that it is very difficult to apply this to 3D objects that may look considerably different from different viewing angles. I was hoping to find documentation on cv2's SURF, as it seemed to provide functionality to supply the SURF feature extractor with keypoints (perhaps from goodFeaturesToTrack). Unfortunately, I have not yet found a way to do this. My question on S.O.: OpenCV: Extract SURF Features from user-defined keypoints. (A sketch of what I am trying to do is also below.)
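For completeness, here is a rough sketch of the search-window matching from 1.). The function name, the 50-pixel search radius and the 0.7 acceptance threshold are just values I have been playing with, nothing canonical:

```python
import cv2

def track_template(frame, template, last_pos, search_radius=50, min_score=0.7):
    """Search for `template` inside a window around `last_pos` (x, y)."""
    th, tw = template.shape[:2]
    x, y = last_pos

    # Clamp the search window to the frame boundaries.
    x0, y0 = max(x - search_radius, 0), max(y - search_radius, 0)
    x1 = min(x + tw + search_radius, frame.shape[1])
    y1 = min(y + th + search_radius, frame.shape[0])
    roi = frame[y0:y1, x0:x1]

    result = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    # Only accept the match (and refresh the template) above a confidence
    # threshold; otherwise keep the old position and template. This is my
    # attempt at limiting template drift during occlusion.
    if max_val < min_score:
        return last_pos, template, False
    nx, ny = x0 + max_loc[0], y0 + max_loc[1]
    new_template = frame[ny:ny + th, nx:nx + tw].copy()
    return (nx, ny), new_template, True
```

In my prototype the initial template comes from the user's selection on the first frame, and this is called once per frame with the previous position.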
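And this is roughly how I am calling the two functions from 2.). The window size, pyramid level and the re-seeding threshold of 50 points are just values I have been experimenting with:

```python
import cv2

cap = cv2.VideoCapture(0)  # single stationary webcam

ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the previous points into the current frame.
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                     prev_pts, None, **lk_params)
    good = next_pts[status.ravel() == 1].reshape(-1, 1, 2)

    # Re-seed the tracker when too many points have been lost, rather than
    # letting the point set shrink away.
    if len(good) < 50:
        redetected = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                             qualityLevel=0.01, minDistance=7)
        if redetected is not None:
            good = redetected

    prev_gray, prev_pts = gray, good
```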
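What I am trying to achieve in 3.) is roughly the following. I am not sure this is the right way to use the API: SURF_create comes from the opencv-contrib xfeatures2d module in current builds (older 2.x builds exposed cv2.SURF instead), and the keypoint size of 20 pixels is a guess:

```python
import cv2

img = cv2.imread('object.png', cv2.IMREAD_GRAYSCALE)

# Pick the point locations myself, e.g. with goodFeaturesToTrack ...
corners = cv2.goodFeaturesToTrack(img, maxCorners=100,
                                  qualityLevel=0.01, minDistance=7)

# ... wrap them as cv2.KeyPoint objects (the size argument sets the support
# region described around each point) ...
keypoints = [cv2.KeyPoint(float(x), float(y), 20)
             for x, y in corners.reshape(-1, 2)]

# ... and have SURF compute descriptors only at those locations instead of
# running its own detector.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints, descriptors = surf.compute(img, keypoints)

print(len(keypoints), descriptors.shape)  # one 64-dim descriptor per keypoint
```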
Background: I have a single, stationary webcam, with all my processing done on a desktop computer. I am using the Python wrapper of OpenCV, and the PyDev plugin for Eclipse, on Windows 7.
If anyone could suggest any additional techniques to try, or even some pointers to improve performance of already-mentioned techniques, I would greatly appreciate it.
Without seeing the object you want to track, it's difficult to come up with useful suggestions. If you post a few pictures showing the problem, that would definitely help the discussion. People here also sometimes implement their suggestions/ideas and share them with you.