I am new to iOS programming and multimedia, and I was going through a sample project named RosyWriter provided by Apple at this link. There I saw that the code contains a delegate method named captureOutput:didOutputSampleBuffer:fromConnection, shown below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

    if ( connection == videoConnection ) {

        // Get framerate
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
        [self calculateFramerateAtTimestamp:timestamp];

        // Get frame dimensions (for onscreen display)
        if (self.videoDimensions.width == 0 && self.videoDimensions.height == 0)
            self.videoDimensions = CMVideoFormatDescriptionGetDimensions( formatDescription );

        // Get buffer type
        if ( self.videoType == 0 )
            self.videoType = CMFormatDescriptionGetMediaSubType( formatDescription );

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Synchronously process the pixel buffer to de-green it.
        [self processPixelBuffer:pixelBuffer];

        // Enqueue it for preview. This is a shallow queue, so if image processing is taking too long,
        // we'll drop this frame for preview (this keeps preview latency low).
        OSStatus err = CMBufferQueueEnqueue(previewBufferQueue, sampleBuffer);
        if ( !err ) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CMSampleBufferRef sbuf = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(previewBufferQueue);
                if (sbuf) {
                    CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(sbuf);
                    [self.delegate pixelBufferReadyForDisplay:pixBuf];
                    CFRelease(sbuf);
                }
            });
        }
    }

    CFRetain(sampleBuffer);
    CFRetain(formatDescription);
    dispatch_async(movieWritingQueue, ^{

        if ( assetWriter ) {

            BOOL wasReadyToRecord = (readyToRecordAudio && readyToRecordVideo);

            if (connection == videoConnection) {

                // Initialize the video input if this is not done yet
                if (!readyToRecordVideo)
                    readyToRecordVideo = [self setupAssetWriterVideoInput:formatDescription];

                // Write video data to file
                if (readyToRecordVideo && readyToRecordAudio)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
            }
            else if (connection == audioConnection) {

                // Initialize the audio input if this is not done yet
                if (!readyToRecordAudio)
                    readyToRecordAudio = [self setupAssetWriterAudioInput:formatDescription];

                // Write audio data to file
                if (readyToRecordAudio && readyToRecordVideo)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
            }

            BOOL isReadyToRecord = (readyToRecordAudio && readyToRecordVideo);
            if ( !wasReadyToRecord && isReadyToRecord ) {
                recordingWillBeStarted = NO;
                self.recording = YES;
                [self.delegate recordingDidStart];
            }
        }

        CFRelease(sampleBuffer);
        CFRelease(formatDescription);
    });
}
Here a method named pixelBufferReadyForDisplay is called, which expects a parameter of type CVPixelBufferRef.

Prototype of pixelBufferReadyForDisplay:

- (void)pixelBufferReadyForDisplay:(CVPixelBufferRef)pixelBuffer;

But in the code above, when this method is called, it is passed the variable pixBuf, which is of type CVImageBufferRef.

So my question is: isn't it required to use some function or a typecast to convert a CVImageBufferRef to a CVPixelBufferRef, or is this done implicitly by the compiler?
Thanks.
If you do a search on CVPixelBufferRef in the Xcode docs, you'll find the following:
typedef CVImageBufferRef CVPixelBufferRef;
So a CVImageBufferRef is a synonym for a CVPixelBufferRef. They are interchangeable.
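To see this concretely, here is a minimal sketch (the helper functions HandleSampleBuffer and DisplayPixelBuffer are hypothetical names for illustration, not part of RosyWriter) showing that the CVImageBufferRef returned by CMSampleBufferGetImageBuffer can be passed straight to a parameter declared as CVPixelBufferRef, with an optional CFGetTypeID() check if you want to verify the buffer really is a pixel buffer:

#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreVideo/CoreVideo.h>

// Hypothetical helper that, like -pixelBufferReadyForDisplay:, declares
// its parameter as CVPixelBufferRef.
static void DisplayPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    // For illustration: log the dimensions of the buffer we were handed.
    NSLog(@"pixel buffer is %zu x %zu",
          CVPixelBufferGetWidth(pixelBuffer),
          CVPixelBufferGetHeight(pixelBuffer));
}

// Hypothetical helper standing in for the capture callback.
static void HandleSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    // CMSampleBufferGetImageBuffer() is declared to return a CVImageBufferRef...
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // ...but because CVPixelBufferRef is a typedef of CVImageBufferRef,
    // it can be passed through directly -- no cast or conversion is needed.
    // The CFGetTypeID() check is optional defensiveness, confirming the
    // image buffer is actually backed by a CVPixelBuffer.
    if (imageBuffer != NULL && CFGetTypeID(imageBuffer) == CVPixelBufferGetTypeID()) {
        DisplayPixelBuffer(imageBuffer);
    }
}

The compiler accepts this without a cast because the typedef makes the two names refer to the same underlying pointer type.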
You are looking at some pretty gnarly code. RosyWriter and another sample app called "Chromakey" do some pretty low-level processing on pixel buffers. If you're new to iOS development AND new to multimedia, you might not want to dig so deep, so fast. It's a bit like a first-year medical student trying to perform a heart-lung transplant.