I'm developing an app that requires capturing the framebuffer at as high a frame rate as possible. I've already figured out how to force the iPhone to capture at 60 fps, but the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
method is being called only 15 times a second, which means the iPhone is downgrading the capture output to 15 fps.
Has anybody faced this problem? Is there any way to increase the capture frame rate?
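For reference, this is roughly how I measure the rate actually delivered to the delegate (just a sketch, using the buffers' presentation timestamps; the logging is only for illustration):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    static CMTime lastTimestamp; // zero-initialized, so invalid on the first callback
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (CMTIME_IS_VALID(lastTimestamp)) {
        Float64 delta = CMTimeGetSeconds(CMTimeSubtract(timestamp, lastTimestamp));
        NSLog(@"instantaneous fps: %.1f", 1.0 / delta); // prints roughly 15 for me
    }
    lastTimestamp = timestamp;
}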
Update: here is my code:
camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([camera isTorchModeSupported:AVCaptureTorchModeOn]) {
    [camera lockForConfiguration:nil];
    camera.torchMode = AVCaptureTorchModeOn;
    [camera unlockForConfiguration];
}
[self configureCameraForHighestFrameRate:camera];
// Create an AVCaptureDeviceInput with the camera device
NSError *error=nil;
AVCaptureInput* cameraInput = [[AVCaptureDeviceInput alloc] initWithDevice:camera error:&error];
if (cameraInput == nil) {
    NSLog(@"Error creating camera capture input: %@", error);
}
// Set the output
AVCaptureVideoDataOutput* videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// create a queue to run the capture on
dispatch_queue_t captureQueue=dispatch_queue_create("captureQueue", NULL);
// setup our delegate
[videoOutput setSampleBufferDelegate:self queue:captureQueue];
// configure the pixel format
videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Add the input and output
[captureSession addInput:cameraInput];
[captureSession addOutput:videoOutput];
I took the configureCameraForHighestFrameRate method from Apple's AVCaptureDevice class reference: https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVCaptureDevice_Class/Reference/Reference.html
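For reference, Apple's version of that method looks roughly like this: it walks every format/range pair, remembers the fastest one, then locks the device onto that format with both frame durations pinned to the fastest range's minFrameDuration:
- (void)configureCameraForHighestFrameRate:(AVCaptureDevice *)device
{
    AVCaptureDeviceFormat *bestFormat = nil;
    AVFrameRateRange *bestFrameRateRange = nil;
    for (AVCaptureDeviceFormat *format in [device formats]) {
        for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
            if (range.maxFrameRate > bestFrameRateRange.maxFrameRate) {
                bestFormat = format;
                bestFrameRateRange = range;
            }
        }
    }
    if (bestFormat) {
        if ([device lockForConfiguration:NULL]) {
            device.activeFormat = bestFormat;
            // Pin min and max duration to the same value to lock in the highest rate
            device.activeVideoMinFrameDuration = bestFrameRateRange.minFrameDuration;
            device.activeVideoMaxFrameDuration = bestFrameRateRange.minFrameDuration;
            [device unlockForConfiguration];
        }
    }
}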
I am getting samples at 60 fps on the iPhone 5 and 120 fps on the iPhone 5s, both when doing real-time motion detection in captureOutput and when saving the frames to a video using AVAssetWriter.
You have to set the capture device to a format that supports 60 fps:
AVsession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *capInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (capInput) [AVsession addInput:capInput];
// Pick a format whose top frame-rate range reaches 60 fps
for (AVCaptureDeviceFormat *vFormat in [videoDevice formats])
{
    CMFormatDescriptionRef description = vFormat.formatDescription;
    float maxrate = ((AVFrameRateRange *)[vFormat.videoSupportedFrameRateRanges objectAtIndex:0]).maxFrameRate;

    if (maxrate > 59 && CMFormatDescriptionGetMediaSubType(description) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
    {
        if (YES == [videoDevice lockForConfiguration:NULL])
        {
            videoDevice.activeFormat = vFormat;
            // CMTimeMake(10, 600) is 1/60 of a second, i.e. 60 fps
            [videoDevice setActiveVideoMinFrameDuration:CMTimeMake(10, 600)];
            [videoDevice setActiveVideoMaxFrameDuration:CMTimeMake(10, 600)];
            [videoDevice unlockForConfiguration];
            NSLog(@"formats %@ %@ %@", vFormat.mediaType, vFormat.formatDescription, vFormat.videoSupportedFrameRateRanges);
        }
    }
}
prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:AVsession];
prevLayer.frame = self.view.bounds; // give the preview layer a frame so it is visible
prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:prevLayer];
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t videoQueue = dispatch_queue_create("videoQueue", NULL);
[videoOut setSampleBufferDelegate:self queue:videoQueue];
videoOut.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)};
videoOut.alwaysDiscardsLateVideoFrames = YES;
if (videoOut)
{
    [AVsession addOutput:videoOut];
    videoConnection = [videoOut connectionWithMediaType:AVMediaTypeVideo];
}
Two other comments if you want to write to a file using AVAssetWriter. Don't use the pixel buffer adaptor; just append the samples directly with
[videoWriterInput appendSampleBuffer:sampleBuffer];
Second, when setting up the asset writer, use
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                   outputSettings:videoSettings
                                 sourceFormatHint:formatDescription];
The sourceFormatHint makes a difference in writing speed.
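For context, a minimal sketch of that writer setup (outputURL, the H.264 videoSettings, and pulling formatDescription from an incoming sample buffer are my assumptions for illustration):
NSError *error = nil;
AVAssetWriter *videoWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&error];
// The hint can be taken from the first sample buffer the camera delivers
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @1920,
                                 AVVideoHeightKey : @1080 };
AVAssetWriterInput *videoWriterInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings
                                     sourceFormatHint:formatDescription];
videoWriterInput.expectsMediaDataInRealTime = YES; // required for live capture
[videoWriter addInput:videoWriterInput];
// Then, in the capture callback:
// if (videoWriterInput.readyForMoreMediaData)
//     [videoWriterInput appendSampleBuffer:sampleBuffer];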
I have implemented the same functionality in Swift 2.0. I'm posting the code here for anyone who might need it:
// Set your desired frame rate
func setupCamera(maxFpsDesired: Double = 120) {
    let captureSession = AVCaptureSession()
    captureSession.sessionPreset = AVCaptureSessionPreset1920x1080

    let backCamera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    do {
        let input = try AVCaptureDeviceInput(device: backCamera)
        captureSession.addInput(input)
    } catch {
        print("Error: can't access camera")
        return
    }

    do {
        var finalFormat = AVCaptureDeviceFormat()
        var maxFps: Double = 0
        for vFormat in backCamera!.formats {
            let ranges = vFormat.videoSupportedFrameRateRanges as! [AVFrameRateRange]
            let frameRates = ranges[0]
            /*
             "frameRates.maxFrameRate >= maxFps" selects the format with the
             desired fps and the highest resolution available, because the
             camera formats are ordered; "frameRates.maxFrameRate > maxFps"
             would instead select the first format that offers the desired fps.
             */
            if frameRates.maxFrameRate >= maxFps && frameRates.maxFrameRate <= maxFpsDesired {
                maxFps = frameRates.maxFrameRate
                finalFormat = vFormat as! AVCaptureDeviceFormat
            }
        }
        if maxFps != 0 {
            let timeValue = Int64(1200.0 / maxFps)
            let timeScale: Int32 = 1200 // CMTimeMake expects an Int32 timescale
            try backCamera!.lockForConfiguration()
            backCamera!.activeFormat = finalFormat
            backCamera!.activeVideoMinFrameDuration = CMTimeMake(timeValue, timeScale)
            backCamera!.activeVideoMaxFrameDuration = CMTimeMake(timeValue, timeScale)
            backCamera!.focusMode = AVCaptureFocusMode.AutoFocus
            backCamera!.unlockForConfiguration()
        }
    } catch {
        print("Something went wrong")
    }

    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.alwaysDiscardsLateVideoFrames = true
    videoOutput.videoSettings = NSDictionary(object: Int(kCVPixelFormatType_32BGRA),
        forKey: kCVPixelBufferPixelFormatTypeKey as String) as [NSObject: AnyObject]
    videoOutput.setSampleBufferDelegate(self, queue: dispatch_queue_create("sample buffer delegate", DISPATCH_QUEUE_SERIAL))
    if captureSession.canAddOutput(videoOutput) {
        captureSession.addOutput(videoOutput)
    }

    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.transform = CATransform3DMakeRotation(-1.5708, 0, 0, 1)
    previewLayer.frame = self.view.bounds
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.view.layer.addSublayer(previewLayer)

    captureSession.startRunning()
}