Applying filter to real time camera preview - Swift

I'm trying to follow the answer given here: https://stackoverflow.com/a/32381052/8422218 to create an app that uses the back-facing camera, applies a filter, and displays the result on screen in real time.

Here is my code:

//
//  ViewController.swift
//  CameraFilter
//

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    var captureSession = AVCaptureSession()
    var backCamera: AVCaptureDevice?
    var frontCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?

    var photoOutput: AVCapturePhotoOutput?

    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    @IBOutlet weak var filteredImage: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        setupCaptureSession()
        setupDevice()
        setupInputOutput()
        setupCorrectFramerate(currentCamera: currentCamera!) // will default to 30fps unless stated otherwise
        setupPreviewLayer()
        startRunningCaptureSession()
    }

    func setupCaptureSession() {
        // should support anything up to 1920x1080 res, incl. 240fps @ 720p
        captureSession.sessionPreset = AVCaptureSession.Preset.high
    }

    func setupDevice() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        let devices = deviceDiscoverySession.devices

        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            }
            else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }
        }

        currentCamera = backCamera
    }

    func setupInputOutput() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.addInput(captureDeviceInput)
            photoOutput?.setPreparedPhotoSettingsArray([AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])], completionHandler: nil)
        } catch {
            print(error)
        }
    }

    func setupCorrectFramerate(currentCamera: AVCaptureDevice) {
        for vFormat in currentCamera.formats {
            //see available types
            //print("\(vFormat) \n")

            var ranges = vFormat.videoSupportedFrameRateRanges as [AVFrameRateRange]
            let frameRates = ranges[0]

            do {
                //set to 240fps - available types are: 30, 60, 120 and 240 and custom
                // lower framerates cause major stuttering
                if frameRates.maxFrameRate == 240 {
                    try currentCamera.lockForConfiguration()
                    currentCamera.activeFormat = vFormat as AVCaptureDevice.Format
                    //for custom framerate set min max activeVideoFrameDuration to whatever you like, e.g. 1 and 180
                    currentCamera.activeVideoMinFrameDuration = frameRates.minFrameDuration
                    currentCamera.activeVideoMaxFrameDuration = frameRates.maxFrameDuration
                }
            }
            catch {
                print("Could not set active format")
                print(error)
            }
        }
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = self.view.frame

        //set preview in background, allows for elements to be placed in the foreground
        self.view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

        let comicEffect = CIFilter(name: "CIComicEffect")

        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        let cameraImage = CIImage(cvImageBuffer: pixelBuffer!)

        comicEffect!.setValue(cameraImage, forKey: kCIInputImageKey)

        //let filteredImage = UIImage(CIImage: comicEffect!.valueForKey(kCIOutputImageKey) as! CIImage!)
        let filteredImage = UIImage(ciImage: comicEffect!.value(forKey: kCIOutputImageKey) as! CIImage!)

        print("made it here")


        DispatchQueue.main.async {
            self.filteredImage.image = filteredImage
        }
    }

    func startRunningCaptureSession() {
        captureSession.startRunning()
        backCamera?.unlockForConfiguration()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }


}

My storyboard contains a UIImageView that's the size of the entire screen. When I run my application, I can see the camera preview, but the filter is never applied to it. Where am I going wrong?

I also found the following repo, which contains all of the relevant code I need to create the application: https://github.com/altitudelabs/iOSRealTimeFilterTutorial

It's written in Objective-C and is quite outdated, but I had a go at converting it to Swift, with no success:

//
//  ViewController.swift
//  CameraFilter
//

import UIKit
import AVFoundation
import GLKit

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    var videoPreviewView: GLKView?
    var ciContext: CIContext?
    var eaglContext: EAGLContext?
    var videoPreviewViewBounds = CGRect.zero
    var videoDevice: AVCaptureDevice?

    var captureSession = AVCaptureSession()

    var backCamera: AVCaptureDevice?
    var frontCamera: AVCaptureDevice?
    var currentCamera: AVCaptureDevice?
    var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.view.backgroundColor = UIColor.clear

        let window: UIView? = (UIApplication.shared.delegate as? AppDelegate)?.window
        eaglContext = EAGLContext(api: .openGLES2)
        videoPreviewView = GLKView(frame: (window?.bounds)!, context: eaglContext!)
        videoPreviewView?.enableSetNeedsDisplay = false

        videoPreviewView?.transform = CGAffineTransform(rotationAngle: CGFloat.pi * 2)
        videoPreviewView?.frame = (window?.bounds)!

        videoPreviewView?.bindDrawable()

        videoPreviewViewBounds = CGRect.zero

        videoPreviewViewBounds.size.width = CGFloat(videoPreviewView!.drawableWidth)
        videoPreviewViewBounds.size.height = CGFloat(videoPreviewView!.drawableHeight)

        ciContext = CIContext(eaglContext: eaglContext!, options: [kCIContextWorkingColorSpace: NSNull()])

        setupDevice()

        setupCaptureSession()
        setupInputOutput()
        setupCorrectFramerate(currentCamera: currentCamera!)
        setupPreviewLayer()



    }

    func setupCaptureSession() {
        // should support anything up to 1920x1080 res, incl. 240fps @ 720p
        captureSession.sessionPreset = AVCaptureSession.Preset.high
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        cameraPreviewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
        cameraPreviewLayer?.connection?.videoOrientation = AVCaptureVideoOrientation.portrait
        cameraPreviewLayer?.frame = self.view.frame

        //set preview in background, allows for elements to be placed in the foreground
        self.view.layer.insertSublayer(cameraPreviewLayer!, at: 0)
    }

    func setupInputOutput() {
        do {
            let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
            captureSession.addInput(captureDeviceInput)

            let videoDataOutput = AVCaptureVideoDataOutput()
            videoDataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String): kCVPixelFormatType_32BGRA]

            let captureSessionQueue = DispatchQueue(label: "capture_session_queue")
            videoDataOutput.setSampleBufferDelegate(self, queue: captureSessionQueue)

            videoDataOutput.alwaysDiscardsLateVideoFrames = true

            captureSession.addOutput(videoDataOutput)
            captureSession.beginConfiguration()
            captureSession.commitConfiguration()
            captureSession.startRunning()
                    print("here")

        } catch {
            print(error)
        }
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        let imageBuffer: CVImageBuffer? = CMSampleBufferGetImageBuffer(sampleBuffer)
        let sourceImage = CIImage(cvPixelBuffer: imageBuffer!, options: nil)
        let sourceExtent: CGRect = sourceImage.extent

        let comicEffect = CIFilter(name: "CIComicEffect")

        let filteredImage: CIImage? = comicEffect?.outputImage

        let sourceAspect: CGFloat = sourceExtent.size.width / sourceExtent.size.height
        let previewAspect: CGFloat = videoPreviewViewBounds.size.width / videoPreviewViewBounds.size.height
        // we want to maintain the aspect radio of the screen size, so we clip the video image
        var drawRect: CGRect = sourceExtent
        if sourceAspect > previewAspect {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
            drawRect.size.width = drawRect.size.height * previewAspect
        }
        else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
            drawRect.size.height = drawRect.size.width / previewAspect
        }

        videoPreviewView?.bindDrawable()

        if eaglContext != EAGLContext.current() {
            EAGLContext.setCurrent(eaglContext)
        }

        glClearColor(0.5, 0.5, 0.5, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        // set the blend mode to "source over" so that CI will use that
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
        if (filteredImage != nil) {
            ciContext?.draw(filteredImage!, in: videoPreviewViewBounds, from: drawRect)
        }

        videoPreviewView?.display()
    }

    func setupDevice() {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
        let devices = deviceDiscoverySession.devices

        for device in devices {
            if device.position == AVCaptureDevice.Position.back {
                backCamera = device
            }
            else if device.position == AVCaptureDevice.Position.front {
                frontCamera = device
            }
        }

        currentCamera = backCamera
    }

    func setupCorrectFramerate(currentCamera: AVCaptureDevice) {
        for vFormat in currentCamera.formats {
            //see available types
            //print("\(vFormat) \n")

            var ranges = vFormat.videoSupportedFrameRateRanges as [AVFrameRateRange]
            let frameRates = ranges[0]

            do {
                //set to 240fps - available types are: 30, 60, 120 and 240 and custom
                // lower framerates cause major stuttering
                if frameRates.maxFrameRate == 240 {
                    try currentCamera.lockForConfiguration()
                    currentCamera.activeFormat = vFormat as AVCaptureDevice.Format
                    //for custom framerate set min max activeVideoFrameDuration to whatever you like, e.g. 1 and 180
                    currentCamera.activeVideoMinFrameDuration = frameRates.minFrameDuration
                    currentCamera.activeVideoMaxFrameDuration = frameRates.maxFrameDuration
                }
            }
            catch {
                print("Could not set active format")
                print(error)
            }
        }
    }


}

I just get a blank screen.

Asked by Ellen on Mar 08 '23


1 Answer

There are a few things wrong with your first code snippet:

You are using an AVCaptureVideoPreviewLayer, but that layer transports the pixels captured by the camera straight to the screen, bypassing your image processing and CIFilter entirely. It isn't necessary here.
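Since you'll be drawing the frames yourself, here is a minimal sketch of what setupInputOutput() could look like without the preview layer, reusing the property names from your question (the queue label is arbitrary). The key point is that the AVCaptureVideoDataOutput has to be created and attached to the session during setup; in your first snippet it is only created inside the callback, so the delegate is never registered and the method never fires:

func setupInputOutput() {
    do {
        let captureDeviceInput = try AVCaptureDeviceInput(device: currentCamera!)
        captureSession.addInput(captureDeviceInput)

        // Create and attach the data output up front so the delegate
        // actually receives frames.
        let videoOutput = AVCaptureVideoDataOutput()
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        videoOutput.alwaysDiscardsLateVideoFrames = true
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample_buffer_queue"))
        captureSession.addOutput(videoOutput)

        captureSession.startRunning()
    } catch {
        print(error)
    }
}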

Your conformance to AVCaptureVideoDataOutputSampleBufferDelegate is out of date: func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) is now called func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection). With the old name, the framework never calls your method.
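For reference, a minimal sketch of the updated method, reusing the CIComicEffect and the filteredImage outlet from your first snippet (the main-queue dispatch is needed because the delegate fires on the sample-buffer queue):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    // Called once per frame on the sample-buffer queue.
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    let cameraImage = CIImage(cvImageBuffer: pixelBuffer)
    let comicEffect = CIFilter(name: "CIComicEffect")
    comicEffect?.setValue(cameraImage, forKey: kCIInputImageKey)
    guard let filtered = comicEffect?.outputImage else { return }

    // UIKit must only be touched on the main queue.
    DispatchQueue.main.async {
        self.filteredImage.image = UIImage(ciImage: filtered)
    }
}

Be aware that rendering UIImage(ciImage:) through a UIImageView forces a CPU round trip per frame; the GLKView approach in your second snippet is the more efficient path if you need high frame rates.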

Because you won't be using an AVCaptureVideoPreviewLayer, you'll need to ask for permission before you can start receiving pixels from the camera. This is typically done in viewDidAppear(_:), like so:

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    if AVCaptureDevice.authorizationStatus(for: AVMediaType.video) != .authorized
    {
        AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler:
        { (authorized) in
            DispatchQueue.main.async
            {
                if authorized
                {
                    self.setupInputOutput()
                }
            }
        })
    }
}
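Note that since iOS 10 you also need an NSCameraUsageDescription entry in your app's Info.plist; without it, the app is terminated as soon as it requests camera access.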

If you are supporting rotation, you will also need to update the AVCaptureConnection's video orientation in your didOutput callback; a rough sketch follows.
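One hedged way to do that, assuming you only need the four standard orientations (the mapping from UIDeviceOrientation is an assumption here; adapt it to however your app tracks orientation):

// Keep the capture connection's rotation in step with the device.
// Call this at the top of didOutput before converting the frame.
func updateOrientation(for connection: AVCaptureConnection) {
    guard connection.isVideoOrientationSupported else { return }
    switch UIDevice.current.orientation {
    case .landscapeLeft:      connection.videoOrientation = .landscapeRight
    case .landscapeRight:     connection.videoOrientation = .landscapeLeft
    case .portraitUpsideDown: connection.videoOrientation = .portraitUpsideDown
    default:                  connection.videoOrientation = .portrait
    }
}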

After making these changes (full source code), your code worked, producing an image like this:

[Screenshot: the comic-filtered camera output]

Answered by beyowulf on Mar 16 '23