
Reading a barcode image without using CocoaPods or other external APIs

I'm trying to use the new Apple Vision API to detect a barcode from an image and return its details. I've successfully detected a QR code and returned a message using CIDetector; however, I can't make this work for one-dimensional barcodes. Here's my current implementation:

import UIKit
import Vision

class BarcodeDetector {

    func recognizeBarcode(for source: UIImage,
                            complete: @escaping (UIImage) -> Void) {
        var resultImage = source
        let detectBarcodeRequest = VNDetectBarcodesRequest { (request, error) in
            if error == nil {
                if let results = request.results as? [VNBarcodeObservation] {
                    print("Number of Barcodes found: \(results.count)")
                    if results.count == 0 { print("\r") }

                    var barcodeBoundingRects = [CGRect]()
                    for barcode in results {
                        barcodeBoundingRects.append(barcode.boundingBox)
                        // symbology.rawValue is already a String, so no failable init or force unwrap is needed
                        let barcodeType = barcode.symbology.rawValue.replacingOccurrences(of: "VNBarcodeSymbology", with: "")
                        print("-Barcode Type: \(barcodeType)")

                        if barcodeType == "QR", let ciImage = CIImage(image: source) {
                            // boundingBox is normalized (0...1); convert to image coordinates first.
                            // cropped(to:) returns a new image, so its result must be kept.
                            let rect = VNImageRectForNormalizedRect(barcode.boundingBox, Int(ciImage.extent.width), Int(ciImage.extent.height))
                            self.qrCodeDescriptor(qrCode: barcode, qrCodeImage: ciImage.cropped(to: rect))
                        }
                    }
                    resultImage = self.drawOnImage(source: resultImage, barcodeBoundingRects: barcodeBoundingRects)
                }
            } else {
                print(error!.localizedDescription)
            }
            complete(resultImage)
        }
        guard let cgImage = source.cgImage else { return }
        let vnImage = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? vnImage.perform([detectBarcodeRequest])
    }

    private func qrCodeDescriptor(qrCode: VNBarcodeObservation, qrCodeImage: CIImage) {
        if let description = qrCode.barcodeDescriptor as? CIQRCodeDescriptor {
            readQRCode(qrCodeImage: qrCodeImage)
            print(" -Payload: \(description.errorCorrectedPayload)")
            print(" -Mask Pattern: \(description.maskPattern)")
            print(" -Symbol Version: \(description.symbolVersion)\n")
        }
    }

    private func readQRCode(qrCodeImage: CIImage) {
        let detector: CIDetector = CIDetector(ofType: CIDetectorTypeQRCode, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!
        var qrCodeLink = ""

        let features = detector.features(in: qrCodeImage)
        for feature in features as! [CIQRCodeFeature] {
            if let messageString = feature.messageString {
                qrCodeLink += messageString
            }
        }

        if qrCodeLink == "" {
            print(" -No Code Message")
        } else {
            print(" -Code Message: \(qrCodeLink)")
        }
    }
}

How can I convert the image into an AVMetadataObject and then read it from there? Or is there a better approach?

asked Jul 03 '17 by Wazza
1 Answer

Swift 4.1, using the Vision framework (no third-party libraries or Pods)

Try this. It works for QR codes and for other types (Code39 in this example):

func startDetection() {
   let request = VNDetectBarcodesRequest(completionHandler: self.detectHandler)
   request.symbologies = [VNBarcodeSymbology.code39] // or use .QR, etc
   self.requests = [request]
}

func detectHandler(request: VNRequest, error: Error?) {
    guard let observations = request.results, !observations.isEmpty else {
        // print("no result")
        return
    }
    // compactMap drops anything that isn't a barcode observation, avoiding force unwraps
    let results = observations.compactMap { $0 as? VNBarcodeObservation }
    for result in results {
        print(result.payloadStringValue ?? "no payload")
    }
}

And then perform the requests from the capture delegate:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    var requestOptions: [VNImageOption: Any] = [:]
    if let camData = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        requestOptions = [.cameraIntrinsics: camData]
    }
    // .right matches the default portrait orientation of the back camera
    let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: requestOptions)
    do {
        try imageRequestHandler.perform(self.requests)
    } catch {
        print(error)
    }
}

The rest of the implementation is the regular AVCaptureDevice and AVCaptureSession setup. You will also need to conform to AVCaptureVideoDataOutputSampleBufferDelegate:

import AVFoundation
import Vision

var captureDevice: AVCaptureDevice!
var session = AVCaptureSession()
var requests = [VNRequest]()

override func viewDidLoad() {
    super.viewDidLoad()
    self.setupVideo()
    self.startDetection()
}

func setupVideo() {
    session.sessionPreset = AVCaptureSession.Preset.photo
    captureDevice = AVCaptureDevice.default(for: AVMediaType.video)
    guard let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else { return }
    let deviceOutput = AVCaptureVideoDataOutput()
    deviceOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    // Deliver frames off the main thread so the UI stays responsive
    deviceOutput.setSampleBufferDelegate(self, queue: DispatchQueue.global(qos: .default))
    session.addInput(deviceInput)
    session.addOutput(deviceOutput)
    // imageView is assumed to be an outlet in the hosting view controller
    let imageLayer = AVCaptureVideoPreviewLayer(session: session)
    imageLayer.frame = imageView.bounds
    imageView.layer.addSublayer(imageLayer)
    session.startRunning()
}
answered Nov 15 '22 by eharo2