I am trying to use the AVFoundation framework to capture a 'series' of still images from AVCaptureStillImageOutput QUICKLY, like the burst mode on some cameras. I want to use the completion handler,
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
and pass the imageSampleBuffer to an NSOperation object for later processing. However, I can't find a way to retain the buffer in the NSOperation class.
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
//Add to queue
SaveImageDataOperation *saveOperation = [[SaveImageDataOperation alloc] initWithImageBuffer:imageSampleBuffer];
[_saveDataQueue addOperation:saveOperation];
[saveOperation release];
//Continue
[self captureCompleted];
}];
Does anyone know what I may be doing wrong here? Is there a better approach to this?
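For reference, my operation class currently looks roughly like this (a trimmed-down sketch; SaveImageDataOperation is my own class and the names are placeholders):
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

// Trimmed-down sketch of my NSOperation subclass (manual reference counting).
@interface SaveImageDataOperation : NSOperation {
    CMSampleBufferRef _imageBuffer;
}
- (id)initWithImageBuffer:(CMSampleBufferRef)buffer;
@end

@implementation SaveImageDataOperation

- (id)initWithImageBuffer:(CMSampleBufferRef)buffer {
    if ((self = [super init])) {
        // Just storing the pointer; by the time -main runs on the queue,
        // the buffer no longer seems to be valid. Should I be CFRetain-ing it here?
        _imageBuffer = buffer;
    }
    return self;
}

- (void)main {
    // Process _imageBuffer here (write the image data to disk, etc.)
}

@end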
I've been doing a lot of work with CMSampleBuffer objects recently and I've learned that most of the media buffers sourced by the OS during real-time operations are allocated from pools. If AVFoundation (or CoreVideo/CoreMedia) runs out of buffers in a pool (i.e. you CFRetain a buffer for a 'long' time), the real-time aspect of the process is going to suffer or block until you CFRelease the buffer back into the pool.
So, in addition to manipulating the CFRetain/CFRelease count on the CMSampleBuffer, you should only keep the buffer retained long enough to unpack (deep copy) the bits in the CMBlockBuffer/CMFormatDescription and create a new CMSampleBuffer to pass to your NSOperationQueue or dispatch_queue_t for later processing.
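As a rough illustration of that pattern (a sketch only, with error handling trimmed; the function name is mine), copying the payload bytes out of a compressed sample buffer into an NSData your application owns is enough to let the original buffer go straight back to its pool:
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

// Sketch: copy the payload bytes of a CMSampleBuffer into memory the app owns,
// so the source buffer can be released back to its pool immediately.
static NSData *DeepCopySampleBufferBytes(CMSampleBufferRef sampleBuffer)
{
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    if (blockBuffer == NULL) {
        return nil;
    }

    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    NSMutableData *data = [NSMutableData dataWithLength:length];

    // Handles non-contiguous block buffers as well; this is the actual deep copy.
    OSStatus status = CMBlockBufferCopyDataBytes(blockBuffer, 0, length, [data mutableBytes]);
    return (status == kCMBlockBufferNoErr) ? data : nil;
}
(For an uncompressed still image the payload is a CVPixelBuffer, reached via CMSampleBufferGetImageBuffer, rather than a CMBlockBuffer, so the copy would go through the pixel buffer's base address instead.)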
In my situation I wanted to pass compressed CMSampleBuffers from the VideoToolbox over a network. I essentially created a deep copy of the CMSampleBuffer, with my application having full control over the memory allocation/lifetime. From there, I put the copied CMSampleBuffer on a queue for the network I/O to consume.
If the sample data is compressed, deep copying should be relatively fast. In my application, I used NSKeyedArchiver to create an NSData object from the relevant parts of the source CMSampleBuffer. For H.264 video data, that meant the CMBlockBuffer contents, the SPS/PPS header bytes and the CMSampleTimingInfo. By serializing those elements I could reconstruct a CMSampleBuffer on the other end of the network that behaved identically to the one VideoToolbox had given me. In particular, AVSampleBufferDisplayLayer was able to display them as if they were natively sourced on the machine.
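Pulling those pieces out of a compressed H.264 sample buffer looks roughly like the sketch below (error checks omitted; the dictionary keys and the helper from the previous sketch are my own, and how you ultimately pack the result, NSKeyedArchiver in my case, is up to you):
#import <CoreMedia/CoreMedia.h>

// Sketch: gather the parts of an H.264 CMSampleBuffer that are worth serializing.
static NSDictionary *PackH264Sample(CMSampleBufferRef sampleBuffer)
{
    CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);

    // SPS/PPS parameter sets live in the format description, not in the block buffer.
    const uint8_t *sps = NULL, *pps = NULL;
    size_t spsSize = 0, ppsSize = 0;
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize, NULL, NULL);
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, NULL, NULL);

    // Timing info, so the reconstructed CMSampleBuffer can be timed identically.
    CMSampleTimingInfo timing;
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);

    // The encoded NAL units themselves, deep copied as in the earlier sketch.
    NSData *nalData = DeepCopySampleBufferBytes(sampleBuffer);

    return @{ @"sps"    : [NSData dataWithBytes:sps length:spsSize],
              @"pps"    : [NSData dataWithBytes:pps length:ppsSize],
              @"nal"    : nalData,
              @"timing" : [NSData dataWithBytes:&timing length:sizeof(timing)] };
}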
For your application I would recommend the following:
1. In the capture completion handler, deep copy the data you need out of the CMSampleBuffer (the sample bytes, format information and timing info) into memory your application owns.
2. Release the original CMSampleBuffer (don't keep it retained beyond the completion block) so it returns to the capture pool as quickly as possible.
3. Enqueue only the copied data on your NSOperationQueue or dispatch_queue_t and do the heavy processing there.
Since the AVFoundation movie recorder can do steps #1 and #2 in real time without running out of buffers, you should be able to deep copy and enqueue your data on a dispatch_queue without exhausting the buffer pools used by the video capture component and VideoToolbox components.
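Concretely, for AVCaptureStillImageOutput the completion handler can do the copy up front and hand only application-owned data to the queue. Here is a sketch, assuming SaveImageDataOperation is changed to take an NSData via a hypothetical initWithImageData: initializer; for JPEG output, +jpegStillImageNSDataRepresentation: already performs the deep copy for you:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer == NULL) {
            [self captureCompleted];
            return;
        }

        // Step 1: deep copy while the buffer is still valid. For JPEG capture this
        // single call copies the encoded bytes into an NSData the app owns.
        NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

        // Step 2: don't retain imageSampleBuffer beyond this block; letting it go
        // here returns it to the capture pool right away.

        // Step 3: enqueue only the copied data for later processing.
        SaveImageDataOperation *saveOperation = [[SaveImageDataOperation alloc] initWithImageData:jpegData];
        [_saveDataQueue addOperation:saveOperation];
        [saveOperation release];

        [self captureCompleted];
    }];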