How to use AVCaptureSession to read a video from a file?

I am writing an app that does some real-time video processing using an AVCaptureSession with an AVCaptureVideoDataOutput as its output, and I would like to use a video file (the processing no longer needs to be real-time) in place of the AVCaptureDeviceInput.

Is it possible to use a video file as an input to the AVCaptureSession instead of the camera? If it is not possible, what is the best way to process a video file using OpenCV's video capture on iOS (either simultaneously or sequentially)?

asked Oct 07 '14 by kunal


2 Answers

Since you have access to the raw video frames (from your AVCaptureVideoDataOutput), you can convert each frame to a cv::Mat object (an OpenCV matrix representing an image) and then do your image processing on each individual frame.

Check out https://developer.apple.com/library/ios/qa/qa1702/_index.html for a real-time example using the camera; you can convert a UIImage to a cv::Mat using cvMatFromUIImage.
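A minimal sketch of that per-frame conversion, assuming the AVCaptureVideoDataOutput is configured for kCVPixelFormatType_32BGRA and the file is compiled as Objective-C++ with OpenCV available (the processing call itself is left as a placeholder):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Wrap the BGRA pixel data in a cv::Mat header (no copy is made)
    cv::Mat mat((int)CVPixelBufferGetHeight(pixelBuffer),
                (int)CVPixelBufferGetWidth(pixelBuffer),
                CV_8UC4,
                CVPixelBufferGetBaseAddress(pixelBuffer),
                CVPixelBufferGetBytesPerRow(pixelBuffer));

    // ... per-frame image processing on `mat` goes here ...

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

Because the Mat only wraps the pixel buffer, it is valid only while the buffer stays locked; clone it (mat.clone()) if the data must outlive the callback.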

answered Nov 16 '22 by Kevin Le

So it turns out it's not too difficult to do. The basic outline is:

  1. Create a cv::VideoCapture to read from a file
  2. Create a CALayer to receive and display each frame.
  3. Run a method at a given rate that reads and processes each frame.
  4. Once done processing, convert each cv::Mat to a CGImageRef and display it on the CALayer.

The actual implementation is as follows:

Step 1: Create cv::VideoCapture

std::string filename = "/Path/To/Video/File";
capture = cv::VideoCapture(filename);
if (!capture.isOpened()) NSLog(@"Could not open video file");
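The path above is a placeholder; a sketch of one way to fill it in, assuming the movie ships in the app bundle under the hypothetical name sample.mov:

// "sample" / "mov" are hypothetical; pathForResource: returns nil if the
// resource is missing from the bundle
NSString *path = [[NSBundle mainBundle] pathForResource:@"sample" ofType:@"mov"];
if (path != nil) {
    capture = cv::VideoCapture(std::string([path UTF8String]));
}
if (!capture.isOpened()) NSLog(@"Could not open sample.mov");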

Step 2: Create the Output CALayer

self.previewLayer = [CALayer layer];
self.previewLayer.frame = CGRectMake(0, 0, width, height);
[self.view.layer addSublayer:self.previewLayer];
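Note that width and height are not defined in the snippet above; a sketch of deriving them from the opened capture (CV_CAP_PROP_* are the OpenCV 2.x property names), plus an optional contentsGravity so frames scale to fit the layer:

// Ask OpenCV for the video's native dimensions
int width  = (int)capture.get(CV_CAP_PROP_FRAME_WIDTH);
int height = (int)capture.get(CV_CAP_PROP_FRAME_HEIGHT);

// Optional: letterbox frames whose aspect ratio differs from the layer's
self.previewLayer.contentsGravity = kCAGravityResizeAspect;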

Step 3: Create Processing Loop w/ GCD

int kFPS = 30;

dispatch_queue_t queue = dispatch_queue_create("timer", 0);
self.timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
dispatch_source_set_timer(self.timer, dispatch_walltime(NULL, 0), (1.0 / kFPS) * NSEC_PER_SEC, (0.5 / kFPS) * NSEC_PER_SEC);

dispatch_source_set_event_handler(self.timer, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        [self processNextFrame];
    });
});

dispatch_resume(self.timer);
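Hardcoding 30 fps works, but the file's own frame rate can be queried instead; a sketch, with the caveat that CV_CAP_PROP_FPS returns 0 for some containers:

// Fall back to 30 fps when the container doesn't report a rate
double fileFPS = capture.get(CV_CAP_PROP_FPS);
int kFPS = (fileFPS > 0) ? (int)lround(fileFPS) : 30;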

Step 4: Processing Method

-(void)processNextFrame {
    /* Read */
    cv::Mat frame;
    if (!capture.read(frame)) {
        // End of file (or a read failure): stop the timer
        dispatch_source_cancel(self.timer);
        return;
    }

    /* Process */
    ...

    /* Convert and Output to CALayer*/
    cvtColor(frame, frame, CV_BGR2RGB);
    NSData *data = [NSData dataWithBytes:frame.data
                                  length:frame.elemSize() * frame.total()];

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = (frame.elemSize() == 3) ? kCGImageAlphaNone : kCGImageAlphaNoneSkipFirst;
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef) data);

    CGImageRef imageRef = CGImageCreate(frame.cols,
                                        frame.rows,
                                        8,
                                        8 * frame.elemSize(),
                                        frame.step[0],
                                        colorSpace,
                                        bitmapInfo,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);

    self.previewLayer.contents = (__bridge id)imageRef;

    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
}
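
As a concrete illustration of what could stand in for the /* Process */ placeholder, a hypothetical per-frame step running a Canny edge detector (the 50/150 thresholds are arbitrary):

cv::Mat gray, edges;
cv::cvtColor(frame, gray, CV_BGR2GRAY);   // Canny wants a single channel
cv::Canny(gray, edges, 50, 150);          // hypothetical threshold values
cv::cvtColor(edges, frame, CV_GRAY2BGR);  // back to 3 channels for display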
answered Nov 16 '22 by pasawaya