 

Proper usage of CIDetectorTracking

Apple recently added a new constant to the CIDetector class called CIDetectorTracking, which appears to be able to track faces between frames in a video. This would be very beneficial for me if I could manage to figure out how it works.

I've tried adding this key to the detector's options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, etc.

NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:
    CIDetectorAccuracyHigh, CIDetectorAccuracy,
    myAVCaptureStillImageOutput, CIDetectorTracking,
    nil];

But no matter what parameter I try to pass, it either crashes (obviously I'm guessing at it here) or the debugger outputs:

Unknown CIDetectorTracking specified. Ignoring.

Normally, I wouldn't be guessing at this, but resources on this topic are virtually nonexistent. Apple's class reference states:

A key used to enable or disable face tracking for the detector. Use this option when you want to track faces across frames in a video.

Other than availability being iOS 6+ and OS X 10.8+, that's it.

Comments inside CIDetector.h:

/*The key in the options dictionary used to specify that feature tracking should be used. */

If that wasn't bad enough, a Google search provides 7 results (8 when they find this post), all of which are either Apple class references, API diffs, an SO post asking how to achieve this in iOS 5, or third-party copies of the above.

All that being said, any hints or tips for the proper usage of CIDetectorTracking would be greatly appreciated!

Asked Nov 20 '12 by Mick MacCallum


1 Answer

You're right, this key is not very well documented. Besides the API docs, it is also not explained in:

  • the CIDetector.h header file
  • the Core Image Programming Guide
  • the WWDC 2012 Session "520 - What's New in Camera Capture"
  • the sample code to this session (StacheCam 2)

I tried different values for CIDetectorTracking and the only accepted values seem to be @(YES) and @(NO). With other values it prints this message in the console:

Unknown CIDetectorTracking specified. Ignoring.

When you set the value to @(YES) you should get tracking IDs with the detected face features.
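
For illustration, a minimal sketch of how that might look (not taken from Apple's documentation; `image` is assumed to be a CIImage of the current video frame):

NSDictionary *options = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                           CIDetectorTracking : @(YES) };
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:options];

for (CIFaceFeature *face in [detector featuresInImage:image]) {
    if (face.hasTrackingID) {
        // The same physical face should keep the same trackingID across consecutive frames
        NSLog(@"face %d at %@", face.trackingID, NSStringFromCGRect(face.bounds));
    }
}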


However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, and the detection happens in the background on the GPU, so it is much faster than Core Image face detection. It requires iOS 6 and at least an iPhone 4S or iPad 2.

The faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.

You can use this code (taken from StacheCam 2 and the slides of the WWDC session mentioned above) to set up face detection and get face metadata objects:

- (void) setupAVFoundationFaceDetection
{       
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI which should be done on the main thread
    // So just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive CoreImage face detection, done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to CoreImage
        return;
    }

    // We only want faces, if we don't set this we would detect everything available
    // (some objects may be expensive to detect, so best form is to select only what you need)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];

}
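
For context, here is a rough idea of how the method above might be called during session setup (a sketch, not from StacheCam 2; it assumes the same self.session property used in the snippet above and omits error handling):

- (void)setupCaptureSession
{
    self.session = [AVCaptureSession new];

    // Use the default camera as the session input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if ( input && [self.session canAddInput:input] ) {
        [self.session addInput:input];
    }

    [self setupAVFoundationFaceDetection];

    [self.session startRunning];
}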

// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            NSInteger faceID = [face faceID]; // use this id for tracking
            CGFloat rollAngle = [face rollAngle];
            CGFloat yawAngle = [face yawAngle];
            // Do interesting things with this face
        }
    }
}

If you want to display the face frames in the preview layer, you need to get the transformed face object:

AVMetadataFaceObject *adjusted = (AVMetadataFaceObject *)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
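
The adjusted bounds are already in the preview layer's coordinate space, so as a sketch (assuming self.faceLayer is a CALayer you have added as a sublayer of self.previewLayer) you could move a highlight rectangle like this:

// self.faceLayer is a hypothetical overlay layer added to self.previewLayer
self.faceLayer.frame = adjusted.bounds;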

For details check out the sample code from WWDC 2012.

Answered Sep 22 '22 by Felix