I currently have a video camera set up with an AVCaptureVideoDataOutput whose sample buffer delegate is implemented as follows:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSArray *detectedFaces = [self detectFacesFromSampleBuffer:sampleBuffer];
    [self animateViewsForFaces:detectedFaces];
}
The sample buffer is processed and, if any faces are detected, their bounds are shown as views over an AVCaptureVideoPreviewLayer that's displaying the live video output (rectangles over the faces). The views are animated so that they move smoothly between face detections. Is it possible to somehow record what's shown in the preview layer and merge it with the animated UIViews that are overlaying it, the end result being a video file?
Generally, you can use a low-level approach: build a video stream frame by frame and then write it to a file. I'm not an expert on video formats and codecs, but the approach is:
— Set up a CADisplayLink to get a callback every time the screen redraws. Setting its frameInterval to 2 is probably a good idea, since it brings the target video frame rate down to ~30 fps.
— Each time the screen redraws, take a snapshot of the preview layer and the overlay (see the first sketch after this list).
— Process the collected images: composite the two images belonging to each frame, then build a video stream from the sequence of merged frames. iOS has built-in tools (AVAssetWriter) that make this more or less straightforward (see the second sketch below).
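A minimal sketch of the first two steps, assuming the preview layer and the overlay views share a common container view; previewContainerView, recordedFrames and the method names here are placeholders, not part of the original code. Snapshotting the container captures the preview and the overlay in a single image, which also covers the compositing step. Be aware that snapshot APIs don't always pick up AVCaptureVideoPreviewLayer content; if the camera image comes out black, you'd need the raw-frame approach described at the end.

- (void)startRecordingOverlay
{
    self.recordedFrames = [NSMutableArray array];
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(captureFrame:)];
    self.displayLink.frameInterval = 2; // ~30 fps on a 60 Hz display
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop]
                           forMode:NSRunLoopCommonModes];
}

- (void)captureFrame:(CADisplayLink *)link
{
    CGSize size = self.previewContainerView.bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, YES, 0);
    // Snapshot the container, i.e. the preview layer plus the animated overlay views.
    [self.previewContainerView drawViewHierarchyInRect:self.previewContainerView.bounds
                                    afterScreenUpdates:NO];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (frame) {
        [self.recordedFrames addObject:frame];
    }
}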
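And a rough sketch of the writing step, using AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor. The output URL, frame size and 30 fps timing are assumptions, and a real implementation would write frames as they arrive instead of buffering every UIImage in memory.

#import <AVFoundation/AVFoundation.h>

- (void)writeFrames:(NSArray *)frames toURL:(NSURL *)outputURL size:(CGSize)size
{
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeQuickTimeMovie
                                                        error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(size.width),
                                AVVideoHeightKey : @(size.height) };
    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    for (NSUInteger i = 0; i < frames.count; i++) {
        while (!input.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.01]; // crude back-pressure, fine for a sketch
        }
        CVPixelBufferRef buffer = [self pixelBufferFromImage:frames[i] size:size];
        [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake((int64_t)i, 30)]; // 30 fps
        CVPixelBufferRelease(buffer);
    }

    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        NSLog(@"Finished writing %@", outputURL);
    }];
}

// Converts a UIImage into a CVPixelBuffer the adaptor can append.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image size:(CGSize)size
{
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef buffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, (size_t)size.width, (size_t)size.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &buffer);
    CVPixelBufferLockBaseAddress(buffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                 (size_t)size.width, (size_t)size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(buffer),
                                                 colorSpace, kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;
}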
Of course, the resolution and quality are constrained by the layers' parameters. If you need the raw video stream from the camera, you should capture that stream and then draw your overlay data directly into the video frames you captured (a sketch of that follows below).
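Here's what that per-frame drawing could look like, as a sketch only: it assumes the AVCaptureVideoDataOutput is configured (via its videoSettings) to deliver BGRA frames, and overlayView is a hypothetical view containing the face rectangles. The modified pixel buffers would then be appended with an AVAssetWriterInputPixelBufferAdaptor as above.

- (void)drawOverlayIntoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Wrap the camera frame's memory in a bitmap context so we can draw on top of it.
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    // Render the overlay on top of the camera pixels already in the buffer.
    // A real implementation would also scale from view points to buffer pixels here.
    [self.overlayView.layer renderInContext:context];

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}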