Switching AVCaptureSession preset when capturing a photo

My current setup is as follows (based on the ColorTrackingCamera project from Brad Larson):

I'm using an AVCaptureSession set to AVCaptureSessionPreset640x480, whose output I run through an OpenGL scene as a texture. This texture is then manipulated by a fragment shader.

I'm in need of this "lower quality" preset because I want to preserve a high framerate when the user is previewing. I then want to switch to a higher quality output when the user captures a still photo.

First I thought I could change the sessionPreset on the AVCaptureSession, but this forces the camera to refocus, which breaks usability.

[captureSession beginConfiguration];
captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
[captureSession commitConfiguration];
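One possible workaround (untested here) would be to lock the focus mode on the underlying AVCaptureDevice around the preset switch, so the session doesn't refocus. This is only a sketch; `device` is assumed to be the AVCaptureDevice backing the session's video input:

```objective-c
// Sketch: freeze focus before switching presets so the preset change
// doesn't trigger a visible refocus. `device` is the AVCaptureDevice
// that was used to create the session's video input.
NSError *error = nil;
if ([device lockForConfiguration:&error])
{
    if ([device isFocusModeSupported:AVCaptureFocusModeLocked])
    {
        device.focusMode = AVCaptureFocusModeLocked;
    }

    [captureSession beginConfiguration];
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
    [captureSession commitConfiguration];

    [device unlockForConfiguration];
}
else
{
    NSLog(@"Couldn't lock device for configuration: %@", error);
}
```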

Currently I'm trying to add a second output, an AVCaptureStillImageOutput, to the AVCaptureSession, but I'm getting an empty pixel buffer, so I think I'm kinda stuck.

Here's my session setup code:

...

// Add the video frame output
[captureSession beginConfiguration];

videoOutput = [[AVCaptureVideoDataOutput alloc] init];
[videoOutput setAlwaysDiscardsLateVideoFrames:YES];
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
[videoOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

if ([captureSession canAddOutput:videoOutput])
{
    [captureSession addOutput:videoOutput];
}
else
{
    NSLog(@"Couldn't add video output");
}

[captureSession commitConfiguration];



// Add still output
[captureSession beginConfiguration];
stillOutput = [[AVCaptureStillImageOutput alloc] init];

if([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}

[captureSession commitConfiguration];



// Start capturing
[captureSession setSessionPreset:AVCaptureSessionPreset640x480];
if(![captureSession isRunning])
{
    [captureSession startRunning];
}

...

And here is my capture method:

- (void)prepareForHighResolutionOutput
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    [stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
     ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
         CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
         CVPixelBufferLockBaseAddress(pixelBuffer, 0);
         size_t width = CVPixelBufferGetWidth(pixelBuffer);
         size_t height = CVPixelBufferGetHeight(pixelBuffer);

         NSLog(@"%zu x %zu", width, height);
         CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
     }];
}

(width and height turn out to be 0)
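In hindsight, the completion handler should probably also guard against an error or a missing image buffer before locking, so a failure surfaces instead of silently reading zeros. A sketch of that check (my assumption: with AVCaptureStillImageOutput's default output settings the sample buffer is JPEG-compressed, so CMSampleBufferGetImageBuffer has no uncompressed buffer to return):

```objective-c
// Sketch: validate the capture result before touching the pixel buffer.
^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    if (error != nil || imageSampleBuffer == NULL)
    {
        NSLog(@"Still image capture failed: %@", error);
        return;
    }

    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
    if (pixelBuffer == NULL)
    {
        // With the default (JPEG) output settings there is no
        // uncompressed image buffer attached to the sample buffer.
        NSLog(@"No pixel buffer in sample buffer");
        return;
    }

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    NSLog(@"%zu x %zu", CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```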

I've read through the AVFoundation documentation, but it seems I'm missing something essential.

asked Aug 21 '12 by polyclick

1 Answer

I found the solution to my specific problem. I hope it can serve as a guide for anyone who stumbles upon the same issue.

The framerate dropped significantly because of an internal conversion between pixel formats. Setting the pixel format explicitly brought the framerate back up.

In my situation, I was creating a BGRA texture with the following method:

// Let Core Video create the OpenGL texture from pixelbuffer
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache, pixelBuffer, NULL,
                                                            GL_TEXTURE_2D, GL_RGBA, width, height, GL_BGRA,
                                                            GL_UNSIGNED_BYTE, 0, &videoTexture);

So when I set up the AVCaptureStillImageOutput instance, I changed my code to:

// Add still output
stillOutput = [[AVCaptureStillImageOutput alloc] init];
[stillOutput setOutputSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];

if([captureSession canAddOutput:stillOutput])
{
    [captureSession addOutput:stillOutput];
}
else
{
    NSLog(@"Couldn't add still output");
}
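With the output settings forced to BGRA, the capture code from the question then yields a real pixel buffer. A minimal sketch of reading its dimensions and raw bytes in the completion handler (same capture call as above, only standard Core Video accessors):

```objective-c
[stillOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
 ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
     CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageSampleBuffer);
     CVPixelBufferLockBaseAddress(pixelBuffer, 0);

     // The buffer now reports the real photo dimensions instead of 0 x 0,
     // and the BGRA bytes can be handed straight to the texture upload.
     size_t width = CVPixelBufferGetWidth(pixelBuffer);
     size_t height = CVPixelBufferGetHeight(pixelBuffer);
     size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
     void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
     NSLog(@"%zu x %zu, %zu bytes per row, data at %p", width, height, bytesPerRow, baseAddress);

     CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
 }];
```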

I hope this helps someone someday ;)

answered Sep 18 '22 by polyclick