Why is an iPhone XS getting worse CPU performance when using the camera live than an iPhone 6S Plus?

I'm using the live camera output to update a CIImage on an MTKView. My main issue is a large, negative performance difference: the older iPhone gets better CPU performance than the newer one, even though every setting I've come across is the same on both devices.

This is a lengthy post, but I've included these details because they may be relevant to the cause of the problem. Please let me know what else I can include.

Below is my captureOutput function with two debug bools that I can toggle while the app is running. I used them to try to narrow down the cause of the issue.

applyLiveFilter - Bool controlling whether the CIImage is manipulated with a CIFilter.

updateMetalView - Bool controlling whether the MTKView's CIImage is updated.

// live output from camera
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection){

    /*              
     Create CIImage from camera.
     Here I save a few percent of CPU by using a function 
     to convert a sampleBuffer to a Metal texture, but
     whether I use this or the commented out code 
     (without captureOutputMTLOptions) does not have 
     significant impact. 
    */

    guard let texture:MTLTexture = convertToMTLTexture(sampleBuffer: sampleBuffer) else{
        return
    }

    var cameraImage:CIImage = CIImage(mtlTexture: texture, options: captureOutputMTLOptions)!

    var transform: CGAffineTransform = .identity

    transform = transform.scaledBy(x: 1, y: -1)

    transform = transform.translatedBy(x: 0, y: -cameraImage.extent.height)

    cameraImage = cameraImage.transformed(by: transform)

    /*
    // old non-Metal way of getting the ciimage from the cvPixelBuffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else
    {
        return
    }

    var cameraImage:CIImage = CIImage(cvPixelBuffer: pixelBuffer)
    */

    var orientation = UIImage.Orientation.right

    if(isFrontCamera){
        orientation = UIImage.Orientation.leftMirrored
    }

    // apply filter to camera image
    if debug_applyLiveFilter {
        cameraImage = self.applyFilterAndReturnImage(ciImage: cameraImage, orientation: orientation, currentCameraRes:currentCameraRes!)
    }

    DispatchQueue.main.async(){
        if debug_updateMetalView {
            self.MTLCaptureView!.image = cameraImage
        }
    }

}

Below is a chart of the results on both phones for the different combinations of the bools discussed above:

[Chart: CPU usage results for each combination of the two bools on both iPhones]

Even with the Metal view's CIImage not updating and no filters being applied, the iPhone XS's CPU usage is about 2% higher than the iPhone 6S Plus's. That isn't significant overhead on its own, but it makes me suspect that the camera captures differently on the two devices.

  • My AVCaptureSession's preset is set identically on both phones (AVCaptureSession.Preset.hd1280x720).

  • The CIImage created in captureOutput has the same size (extent) on both phones.

Are there any AVCaptureDevice settings, including activeFormat properties, that I need to set manually so they are the same on both devices?

The settings I have now are:

if let captureDevice = AVCaptureDevice.default(for: AVMediaType.video) {
    do {
        try captureDevice.lockForConfiguration()
        captureDevice.isSubjectAreaChangeMonitoringEnabled = true
        captureDevice.focusMode = AVCaptureDevice.FocusMode.continuousAutoFocus
        captureDevice.exposureMode = AVCaptureDevice.ExposureMode.continuousAutoExposure
        captureDevice.unlockForConfiguration()
    } catch {
        // Handle errors here
        print("There was an error focusing the device's camera")
    }
}
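
To compare the two devices directly, something like the following could log the activeFormat properties that most often differ between cameras. This is a minimal, illustrative sketch (the helper isn't part of my project), using only standard AVFoundation properties:

import AVFoundation

// Illustrative helper: print the activeFormat properties that most often differ
// between camera hardware, so the XS and 6S Plus can be compared side by side.
func logActiveFormat(of device: AVCaptureDevice) {
    let format = device.activeFormat
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    let frameRateRanges = format.videoSupportedFrameRateRanges.map { "\($0.minFrameRate)-\($0.maxFrameRate) fps" }

    print("Device:", device.localizedName)
    print("  dimensions: \(dims.width)x\(dims.height)")
    print("  frame rate ranges:", frameRateRanges)
    print("  video HDR supported:", format.isVideoHDRSupported)
    print("  video binned:", format.isVideoBinned)
    print("  field of view:", format.videoFieldOfView)
    print("  ISO range: \(format.minISO)-\(format.maxISO)")
}

Calling this for the capture device on each phone after the session is configured would show whether the two formats actually match.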

My MTKView is based on code written by Simon Gladman, with some edits for performance and to scale the render before it is scaled up to the width of the screen using Core Animation, as suggested by Apple.

class MetalImageView: MTKView
{
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    var textureCache: CVMetalTextureCache?

    var sourceTexture: MTLTexture!

    lazy var commandQueue: MTLCommandQueue =
    {
        [unowned self] in
        return self.device!.makeCommandQueue()
    }()!

    lazy var ciContext: CIContext =
    {
        [unowned self] in
        return CIContext(mtlDevice: self.device!)
    }()


    override init(frame frameRect: CGRect, device: MTLDevice?)
    {
        super.init(frame: frameRect,
                   device: device ?? MTLCreateSystemDefaultDevice())



        if super.device == nil
        {
            fatalError("Device doesn't support Metal")
        }

        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, self.device!, nil, &textureCache)

        framebufferOnly = false

        enableSetNeedsDisplay = true

        isPaused = true

        preferredFramesPerSecond = 30
    }

    required init(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }

    // The image to display
    var image: CIImage?
    {
        didSet
        {
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect)
    {
        guard var
            image = image,
            let targetTexture:MTLTexture = currentDrawable?.texture else
        {
            return
        }

        let commandBuffer = commandQueue.makeCommandBuffer()

        let customDrawableSize: CGSize = drawableSize

        let bounds = CGRect(origin: CGPoint.zero, size: customDrawableSize)

        let originX = image.extent.origin.x
        let originY = image.extent.origin.y

        let scaleX = customDrawableSize.width / image.extent.width
        let scaleY = customDrawableSize.height / image.extent.height

        // Fit the image into the drawable while keeping its aspect ratio
        // (IVScaleFactor is a scale property defined elsewhere in the class).
        let scale = min(scaleX * IVScaleFactor, scaleY * IVScaleFactor)

        // Move the image's origin to zero, then scale it to the drawable size.
        image = image
            .transformed(by: CGAffineTransform(translationX: -originX, y: -originY))
            .transformed(by: CGAffineTransform(scaleX: scale, y: scale))


        ciContext.render(image,
                         to: targetTexture,
                         commandBuffer: commandBuffer,
                         bounds: bounds,
                         colorSpace: colorSpace)


        commandBuffer?.present(currentDrawable!)

        commandBuffer?.commit()

    }

}

My AVCaptureSession (captureSession) and AVCaptureVideoDataOutput (videoOutput) are set up below:

func setupCameraAndMic(){
    let backCamera = AVCaptureDevice.default(for:AVMediaType.video)

    var error: NSError?
    var videoInput: AVCaptureDeviceInput!
    do {
        videoInput = try AVCaptureDeviceInput(device: backCamera!)
    } catch let error1 as NSError {
        error = error1
        videoInput = nil
        print(error!.localizedDescription)
    }

    if error == nil &&
        captureSession!.canAddInput(videoInput) {

        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, MetalDevice, nil, &textureCache) == kCVReturnSuccess else {
            print("Error: could not create a texture cache")
            return
        }

        captureSession!.addInput(videoInput)            

        setDeviceFrameRateForCurrentFilter(device:backCamera)

        stillImageOutput = AVCapturePhotoOutput()

        if captureSession!.canAddOutput(stillImageOutput!) {
            captureSession!.addOutput(stillImageOutput!)

            let q = DispatchQueue(label: "sample buffer delegate", qos: .default)
            videoOutput.setSampleBufferDelegate(self, queue: q)
            videoOutput.videoSettings = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA),
                kCVPixelBufferMetalCompatibilityKey as String: true
            ]
            videoOutput.alwaysDiscardsLateVideoFrames = true

            if captureSession!.canAddOutput(videoOutput){
                captureSession!.addOutput(videoOutput)
            }

            captureSession!.startRunning()

        }

    }

    setDefaultFocusAndExposure()
}

The video and mic are recorded as two separate streams. Details on the microphone and on recording video have been left out, since my focus is the performance of the live camera output.


UPDATE - I have a simplified test project on GitHub that makes it a lot easier to test the problem I'm having: https://github.com/PunchyBass/Live-Filter-test-project

Asked Jan 29 '19 by Chewie The Chorkie

1 Answer

Off the top of my head: you are not comparing apples with apples. Beyond the 2.49 GHz A12 versus the 1.85 GHz A9, the differences between the cameras are also huge. Even if you use them with the same parameters, the XS's camera has several features that require more CPU resources (dual camera, stabilization, Smart HDR, etc.).
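
One way to test that hypothesis would be to pin both phones to the plain wide-angle camera and switch off the extra processing that the 6S Plus doesn't do. A rough sketch, assuming your existing AVCaptureVideoDataOutput (the helper name is illustrative, not from your project):

import AVFoundation

// Rough sketch: select the single wide-angle module and disable video HDR and
// stabilization so the XS does as little extra camera processing as possible.
func configureForComparison(videoOutput: AVCaptureVideoDataOutput) {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back) else { return }
    do {
        try device.lockForConfiguration()
        if device.activeFormat.isVideoHDRSupported {
            device.automaticallyAdjustsVideoHDREnabled = false
            device.isVideoHDREnabled = false
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the camera for configuration: \(error)")
    }

    // Stabilization is configured on the output's video connection, not the device.
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoStabilizationSupported {
        connection.preferredVideoStabilizationMode = .off
    }
}

If the XS's CPU usage drops noticeably with this in place, that points at the camera pipeline rather than the Metal/Core Image side.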

Sorry for the lack of sources; I tried to find metrics for the CPU cost of those features, but I couldn't find any. Unfortunately for your needs, that information isn't relevant for marketing when they are selling it as the best camera ever in a smartphone.

They are selling it as the best processor as well. We don't know what would happen if you used the XS camera with an A9 processor; it would probably crash, but we will never know...

P.S. Are your metrics for the whole processor or for the cores in use? For the whole processor you also need to consider other tasks the devices may be executing; per core, it is 21% of 200% (two A9 cores, about 10.5% of total capacity) against 39% of 600% (six A12 cores, about 6.5%).

Answered Sep 21 '22 by Daniel Brughera