I had a starter project that used AVFoundation to set up the camera, and it worked perfectly. Now I need to convert the camera mechanism to GPUImage. I'm using the same focus and exposure method in both projects (which worked perfectly in the AVFoundation project), but in the GPUImage project it doesn't focus properly and the point of interest is always wrong.
Don't mind the applied filter; it's the same across all of them.
Sample: at the top right of the screen you can see the lamp. This is how it gets focused and exposed.
Setting up the GPUImage camera:
stillCamera = GPUImageStillCamera(sessionPreset: AVCaptureSessionPreset640x480, cameraPosition: .Front)
CorrectPosition = AVCaptureDevicePosition.Front
stillCamera!.outputImageOrientation = .Portrait
stillCamera?.horizontallyMirrorFrontFacingCamera = true
filter = GPUImageFilter()
stillCamera?.addTarget(filter)
filter?.addTarget(self.view as! GPUImageView)
// Raw value 2 is kGPUImageFillModePreserveAspectRatioAndFill
(self.view as! GPUImageView).fillMode = GPUImageFillModeType.init(2)
The touchesBegan method:
override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    var tap: CGPoint!
    if let touch = touches.first {
        // Note: this point is in the view's coordinate system.
        tap = touch.locationInView(self.view)
    }
    let device: AVCaptureDevice! = self.stillCamera?.inputCamera
    do {
        try device.lockForConfiguration()
        if device.focusPointOfInterestSupported && device.isFocusModeSupported(.AutoFocus) {
            device.focusMode = .AutoFocus
            device.focusPointOfInterest = tap
        }
        if device.exposurePointOfInterestSupported && device.isExposureModeSupported(.AutoExpose) {
            device.exposurePointOfInterest = tap
            device.exposureMode = .AutoExpose
        }
        // monitorSubjectAreaChange is a Bool property defined elsewhere in the class
        device.subjectAreaChangeMonitoringEnabled = monitorSubjectAreaChange
        device.unlockForConfiguration()
    } catch let error as NSError {
        print(error)
    }
}
Any ideas?
The issue you are probably encountering is that device.focusPointOfInterest's x and y need to be in the [0, 1] range, where the point (0, 0) is the bottom-left corner of the camera and (1, 1) is the top-right, while you are passing the tap's coordinates in the view's frame coordinate system.
The only thing you need to do is convert the tap's coordinates into the camera's point-of-interest space. Note, however, that the camera view can have different fill modes, so the conversion has to account for the active one.
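For intuition, here is a minimal Swift sketch of the simplest case only (portrait orientation, stretch fill, no mirroring); the helper name is mine, and it mirrors the stretch branch of the full conversion below:

// Hypothetical helper: maps a tap in view coordinates to the camera's
// [0, 1] point-of-interest space, assuming portrait orientation,
// stretch fill mode, and no mirroring.
func pointOfInterestForTap(tap: CGPoint, viewSize: CGSize) -> CGPoint {
    // The camera's native space is landscape, so the view's y axis maps to
    // the camera's x axis, and the view's x axis flips into the camera's y axis.
    return CGPoint(x: tap.y / viewSize.height, y: 1.0 - tap.x / viewSize.width)
}

For example, a tap at (100, 200) in a 320 × 480 view maps to roughly (0.42, 0.69), nowhere near the raw (100, 200) your code is passing today.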
Here is how I do the full conversion (sorry for the Objective-C code, but it is mostly simple math):
CGPoint tapPoint = [gestureRecognizer locationInView:cameraView];
CGPoint pointOfInterest = [HBFocusUtils convertToPointOfInterestFromViewCoordinates:tapPoint inFrame:cameraView.bounds withOrientation:self.currentOrientation andFillMode:cameraView.fillMode mirrored:currentVideoCamera == frontVideoCamera];
[HBFocusUtils setFocus:pointOfInterest forDevice:currentVideoCamera.inputCamera];
and the methods' implementation:
@implementation HBFocusUtils

+ (CGPoint)convertToPointOfInterestFromViewCoordinates:(CGPoint)viewCoordinates inFrame:(CGRect)frame withOrientation:(UIDeviceOrientation)orientation andFillMode:(GPUImageFillModeType)fillMode mirrored:(BOOL)mirrored
{
    CGSize frameSize = frame.size;
    CGPoint pointOfInterest = CGPointMake(0.5, 0.5);

    if (mirrored)
    {
        // The front camera's preview is flipped horizontally.
        viewCoordinates.x = frameSize.width - viewCoordinates.x;
    }

    if (fillMode == kGPUImageFillModeStretch)
    {
        // Simple case: swap axes and flip into the camera's landscape space.
        pointOfInterest = CGPointMake(viewCoordinates.y / frameSize.height, 1.f - (viewCoordinates.x / frameSize.width));
    }
    else
    {
        // The swapped view dimensions stand in for the (landscape) camera aperture.
        CGSize apertureSize = CGSizeMake(CGRectGetHeight(frame), CGRectGetWidth(frame));
        if (!CGSizeEqualToSize(apertureSize, CGSizeZero))
        {
            CGPoint point = viewCoordinates;
            CGFloat apertureRatio = apertureSize.height / apertureSize.width;
            CGFloat viewRatio = frameSize.width / frameSize.height;
            CGFloat xc = .5f;
            CGFloat yc = .5f;

            if (fillMode == kGPUImageFillModePreserveAspectRatio)
            {
                if (viewRatio > apertureRatio)
                {
                    // Letterboxed with bars on the left and right.
                    CGFloat y2 = frameSize.height;
                    CGFloat x2 = frameSize.height * apertureRatio;
                    CGFloat x1 = frameSize.width;
                    CGFloat blackBar = (x1 - x2) / 2;
                    // Only taps inside the visible video area count.
                    if (point.x >= blackBar && point.x <= blackBar + x2)
                    {
                        xc = point.y / y2;
                        yc = 1.f - ((point.x - blackBar) / x2);
                    }
                }
                else
                {
                    // Letterboxed with bars above and below.
                    CGFloat y2 = frameSize.width / apertureRatio;
                    CGFloat y1 = frameSize.height;
                    CGFloat x2 = frameSize.width;
                    CGFloat blackBar = (y1 - y2) / 2;
                    if (point.y >= blackBar && point.y <= blackBar + y2)
                    {
                        xc = (point.y - blackBar) / y2;
                        yc = 1.f - (point.x / x2);
                    }
                }
            }
            else if (fillMode == kGPUImageFillModePreserveAspectRatioAndFill)
            {
                // The video overflows the view, so offset by the cropped margin.
                if (viewRatio > apertureRatio)
                {
                    CGFloat y2 = apertureSize.width * (frameSize.width / apertureSize.height);
                    xc = (point.y + ((y2 - frameSize.height) / 2.f)) / y2;
                    yc = (frameSize.width - point.x) / frameSize.width;
                }
                else
                {
                    CGFloat x2 = apertureSize.height * (frameSize.height / apertureSize.width);
                    yc = 1.f - ((point.x + ((x2 - frameSize.width) / 2)) / x2);
                    xc = point.y / frameSize.height;
                }
            }

            pointOfInterest = CGPointMake(xc, yc);
        }
    }

    return pointOfInterest;
}

+ (void)setFocus:(CGPoint)focus forDevice:(AVCaptureDevice *)device
{
    if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus])
    {
        NSError *error;
        if ([device lockForConfiguration:&error])
        {
            [device setFocusPointOfInterest:focus];
            [device setFocusMode:AVCaptureFocusModeAutoFocus];
            [device unlockForConfiguration];
        }
    }

    if ([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:AVCaptureExposureModeAutoExpose])
    {
        NSError *error;
        if ([device lockForConfiguration:&error])
        {
            [device setExposurePointOfInterest:focus];
            [device setExposureMode:AVCaptureExposureModeAutoExpose];
            [device unlockForConfiguration];
        }
    }
}

@end
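Since your project is in Swift, this is roughly how your touchesBegan could use these utilities once HBFocusUtils is visible through your bridging header (a sketch in the same Swift 2-era syntax as your question; the bridged method names are what the compiler should generate from the selectors above):

override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    let tap = touch.locationInView(self.view)
    let gpuView = self.view as! GPUImageView
    // Convert the tap from view coordinates into the camera's [0, 1]
    // space, accounting for orientation, fill mode, and mirroring.
    let pointOfInterest = HBFocusUtils.convertToPointOfInterestFromViewCoordinates(tap,
        inFrame: gpuView.bounds,
        withOrientation: UIDevice.currentDevice().orientation,
        andFillMode: gpuView.fillMode,
        mirrored: CorrectPosition == .Front)
    HBFocusUtils.setFocus(pointOfInterest, forDevice: stillCamera!.inputCamera)
}

Routing everything through setFocus:forDevice: also sets the exposure point from the same converted coordinate and keeps the lock/unlock boilerplate out of your touch handler.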