
Why CVPixelBufferLockBaseAddress? Capturing still images using AVFoundation


I'm writing an iPhone app that captures still images from the camera using AVFoundation. Reading the programming guide, I found code that does almost exactly what I need, so I'm trying to "reverse engineer" it and understand it.
I'm having some difficulty understanding the part that converts a CMSampleBuffer into an image.
Here is what I've understood so far, followed by the code.
The CMSampleBuffer represents a buffer in memory where the image is stored together with additional data. Then I call the function CMSampleBufferGetImageBuffer() to get back a CVImageBuffer containing just the image data.
Now there is a function I don't understand, and I can only guess at its purpose: CVPixelBufferLockBaseAddress(imageBuffer, 0). I can't tell whether it is a "thread lock" to avoid multiple operations on the buffer, or a lock on the buffer's address to prevent changes during the operation (and why should it change? Another frame? Isn't the data copied to another location?). The rest of the code is clear to me.
I've tried searching on Google but still haven't found anything helpful.
Can someone shed some light on this?

- (UIImage *)getUIImageFromBuffer:(CMSampleBufferRef)sampleBuffer {

    // Get the CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    // (these byte-order/alpha flags assume the buffer is kCVPixelFormatType_32BGRA)
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
        bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
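
For reference, the bitmap flags above (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst) assume the pixel buffer arrives as kCVPixelFormatType_32BGRA. A minimal sketch of configuring the capture output that way and calling the helper from the delegate callback (the method name addVideoOutputToSession: and the queue label are placeholders, not from the original code):

#import <AVFoundation/AVFoundation.h>

// Sketch: request 32BGRA frames so the byte order matches the flags
// passed to CGBitmapContextCreate in getUIImageFromBuffer: above.
- (void)addVideoOutputToSession:(AVCaptureSession *)session {
    AVCaptureVideoDataOutput *videoOutput = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    videoOutput.videoSettings = [NSDictionary dictionaryWithObject:
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
        forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Frames must be delivered on a serial queue
    dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
    [videoOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    if ([session canAddOutput:videoOutput])
        [session addOutput:videoOutput];
}

// Delegate callback (AVCaptureVideoDataOutputSampleBufferDelegate):
// convert the incoming frame using the helper above.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = [self getUIImageFromBuffer:sampleBuffer];
    // ... hand image off to the main thread for any UI work ...
}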

Thanks, Andrea

Asked Jun 24 '11 by Andrea
1 Answer

The header file says that CVPixelBufferLockBaseAddress makes the memory "accessible". I'm not sure what that means exactly, but if you don't do it, CVPixelBufferGetBaseAddress fails (it returns NULL), so you'd better do it.

EDIT

The short answer is: just do it. As for why, consider that the image may not live in main memory; it may live in a texture on some GPU somewhere (CoreVideo works on the Mac too), or even be in a different format from what you expect, so the pixels you get are actually a copy. Without a Lock/Unlock or some kind of Begin/End pair, the implementation has no way to know when you've finished with the duplicate pixels, so they would effectively be leaked. CVPixelBufferLockBaseAddress simply gives CoreVideo scope information; I wouldn't get too hung up on it.

Yes, they could have simply returned the pixels from CVPixelBufferGetBaseAddress and eliminated CVPixelBufferLockBaseAddress altogether. I don't know why they didn't do that.
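
To make the scope idea concrete, here's a minimal sketch of the pairing, using the kCVPixelBufferLock_ReadOnly flag for read-only access (pixelBuffer is a placeholder variable):

#import <CoreVideo/CoreVideo.h>

// Sketch: Lock/Unlock bracket every access to the base address.
// kCVPixelBufferLock_ReadOnly tells CoreVideo the pixels won't be
// modified, so it can skip writing them back on unlock.
CVReturn err = CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
if (err == kCVReturnSuccess) {
    void *base = CVPixelBufferGetBaseAddress(pixelBuffer); // valid only while locked
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    // ... read pixels via base/bytesPerRow here ...

    // Pass the same flags you locked with
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}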

Answered by Rhythmic Fistman