I'm trying to follow the tutorial found here for iOS video processing with the OpenCV framework. I've successfully added the iOS OpenCV framework to my project, but there seems to be a mismatch between my framework and the one presented in the tutorial, and I am hoping someone can help me.
OpenCV uses the cv::Mat type for representing images. When using AVFoundation delegation to process frames from the camera, I will need to convert each CMSampleBufferRef to that type.
It seems that the OpenCV framework presented in the tutorial provides a camera interface via

#import <opencv2/highgui/cap_ios.h>

with a new delegate method that delivers each frame, which as far as I can tell looks like this:
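@protocol CvVideoCameraDelegate <NSObject>
// CvVideoCamera calls this with each camera frame already wrapped as a cv::Mat,
// so no manual CMSampleBufferRef conversion is needed.
- (void)processImage:(cv::Mat &)image;
@end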
Can anyone point me to where I can find this framework, or alternatively to a fast conversion between CMSampleBufferRef and cv::Mat?
EDIT
There is a lot of fragmentation in the OpenCV framework (at least for iOS). I've downloaded it through various "official" sites and also using tools such as Fink and Homebrew, following their own instructions. I even compared the header files that were installed to /usr/local/include/opencv/; they were different each time. When downloading an OpenCV project, there are various CMake files and conflicting README files in the same project. I think I was successful in building a good version for iOS with AVCapture functionality built into the framework (with the header <opencv2/highgui/cap_ios.h>) through this link, and then building the library using the Python script in the ios directory, with the command

python opencv/ios/build_framework.py ios

I will try to update this.
Here is the conversion that I use: lock the pixel buffer, create a cv::Mat that wraps it, process the cv::Mat, then unlock the pixel buffer.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // CVPixelBuffer getters return size_t, so cast when handing them to OpenCV.
    size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Wrap the buffer in a cv::Mat; no memory is copied.
    // CV_8UC4 assumes the capture output delivers kCVPixelFormatType_32BGRA frames.
    cv::Mat image = cv::Mat((int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow);

    // Processing here

    // End processing
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
The above method does not copy any memory, so you do not own the backing store; the pixel buffer will free it for you. If you want your own copy of the buffer, just do
cv::Mat copied_image = image.clone();
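As an illustration of the processing step (a hypothetical example, not part of the original answer), assuming the frames are 32BGRA you could convert each one to grayscale in place of the // Processing here comment:

cv::Mat gray;
// cvtColor allocates gray's own storage, so gray stays valid
// even after the pixel buffer is unlocked.
cv::cvtColor(image, gray, cv::COLOR_BGRA2GRAY);

Because gray owns its pixels, it is safe to keep it around after CVPixelBufferUnlockBaseAddress, unlike image itself.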
This is the updated version of the code in the previous accepted answer, which should work with any iOS device. Since bytesPerRow can be larger than bufferWidth * 4 (rows may be padded for alignment), at least on the iPhone 6 and iPhone 6+, we need to pass the number of bytes in each row as the last argument to the cv::Mat constructor.
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer);

// Pass bytesPerRow as the step so padded rows are handled correctly.
cv::Mat image = cv::Mat((int)bufferHeight, (int)bufferWidth, CV_8UC4, pixel, bytesPerRow);

// Process your cv::Mat here

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
The code has been tested on my iPhone 5, iPhone 6, and iPhone 6+ running iOS 10.
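Both snippets assume the capture output delivers 32BGRA frames, which is what makes CV_8UC4 the right Mat type; that is not guaranteed by default. A sketch of the capture setup under that assumption (the names videoOutput and videoQueue are my own, not from the answers above) might look like:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Request BGRA frames so the CV_8UC4 wrapper above matches the buffer layout.
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Deliver frames to the delegate on a dedicated serial queue.
dispatch_queue_t videoQueue = dispatch_queue_create("video.capture.queue", DISPATCH_QUEUE_SERIAL);
[videoOutput setSampleBufferDelegate:self queue:videoQueue];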