How do I capture QR Code data in specific area of AVCaptureVideoPreviewLayer using Swift?

I am creating an iPad app, and one of its features is scanning QR codes. I have the QR scanning part working, but the issue is that the iPad screen is very large and I will be scanning small QR codes off a sheet of paper with many QR codes visible at once. I want to designate a smaller area of the display as the only area that can actually capture a QR code, so it is easier for the user to scan the specific code they want.

I currently have a temporary UIView with red borders, centered on the screen, as an example of where I will want the user to scan the QR codes.

I have looked all over for a way to target a specific region of the AVCaptureVideoPreviewLayer for collecting the QR code data, and what I have found are suggestions to use rectOfInterest on AVCaptureMetadataOutput. I have attempted that, but when I set rectOfInterest to the same coordinates and size as my UIView (which displays correctly), I can no longer scan or recognize any QR codes. Can someone please tell me why the scannable area does not match the location of the visible UIView, and how I can get rectOfInterest to line up with the red borders I have added to the screen?

Here is the code for the scan function I am currently using:

func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error:NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize an AVCaptureMetadataOutput object and set it as the output device of the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // calculate a centered square rectangle with red border
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create a UIView that will serve as a red square to indicate where to place the QR code for scanning
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect

    // Set delegate and use the default dispatch queue to execute the call back
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect


    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize QR Code Frame to highlight the QR code
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close out of the scan view
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0;
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.addSubview(scanAreaView!)

}

Update: The reason I do not think this is a duplicate is that the other post referenced is in Objective-C and my code is in Swift. For those of us who are new to iOS, it is not easy to translate between the two. Also, the referenced post's answer does not show the actual code change that resolved the issue. It gives a good explanation about having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still cannot get this method to work, as it is unclear to me how it should be used without an example.

— asked by The_Dude, Aug 30 '15



3 Answers

After fighting with the metadataOutputRectOfInterestForRect method all morning, I got tired of it and decided to write my own conversion.

func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}

Note: I have an image view with a square to show the user where to scan; be sure to use the imageView.frame and not imageView.bounds in order to get the correct location on the screen.
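For example (the property name here is just illustrative; the point is that frame, not bounds, gives the overlay's position in its superview):

// imageView.frame is the overlay's rect in its superview's coordinate space;
// imageView.bounds always starts at (0, 0) and would give the wrong region.
captureMetadataOutput.rectOfInterest = convertRectOfInterest(rect: imageView.frame)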

This has been working successfully for me.

— answered by Sean Calkins


let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.rectOfInterest = convertRectOfInterest(rect: scanRect)

After reviewing another source (https://www.jianshu.com/p/8bb3d8cb224e), I found that the convertRectOfInterest function has a slight mistake; the return statement should be:

return CGRect(x: newY, y: newX, width: newHeight, height: newWidth) 

i.e. the x/y and width/height values need to be interchanged for it to work, because rectOfInterest is expressed in the capture output's coordinate space, which is rotated relative to a portrait UI.
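Putting the two together, a sketch of the corrected helper (the same normalization as the answer above, with the swapped return values) could look like this:

func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    // Normalize the view-space rect to 0...1 values.
    let newX = rect.minX / screenWidth
    let newY = rect.minY / screenHeight
    let newWidth = rect.width / screenWidth
    let newHeight = rect.height / screenHeight
    // rectOfInterest uses the capture output's coordinate space, which is
    // rotated relative to a portrait UI, so x/y and width/height are swapped.
    return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
}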

— answered by Edwin Chau


You need to convert the rect, which is expressed in the preview layer's (view) coordinates, into the metadata output's coordinate space, and AVCaptureVideoPreviewLayer provides a method for exactly that:

captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)

For more info: https://stackoverflow.com/a/55778152/6898849
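A sketch of how this could slot into the question's startScan() (using modern Swift/AVFoundation names; videoPreviewLayer, captureSession, captureMetadataOutput and scanRect are the same objects as in the question). One common gotcha is that the conversion only returns a meaningful rect once the session is running and the preview layer has its final frame, so doing it after startRunning() is a typical workaround:

videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession!)
videoPreviewLayer?.videoGravity = .resizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
view.layer.addSublayer(videoPreviewLayer!)

// Start video capture first...
captureSession?.startRunning()

// ...then convert the on-screen scan rectangle (layer/view coordinates) into the
// normalized, rotated coordinate space that rectOfInterest expects.
if let layer = videoPreviewLayer {
    captureMetadataOutput.rectOfInterest = layer.metadataOutputRectConverted(fromLayerRect: scanRect)
}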

— answered by ramzesenok