 

How to convert a UIImage to a CVPixelBuffer [duplicate]

Apple's new Core ML framework has a prediction function that takes a CVPixelBuffer. To classify a UIImage, you must first convert it to a CVPixelBuffer.

Conversion code I got from an Apple Engineer:

1  // image has been defined earlier
2
3  var pixelbuffer: CVPixelBuffer? = nil
4
5  CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_OneComponent8, nil, &pixelbuffer)
6  CVPixelBufferLockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))
7
8  let colorspace = CGColorSpaceCreateDeviceGray()
9  let bitmapContext = CGContext(data: CVPixelBufferGetBaseAddress(pixelbuffer!), width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelbuffer!), space: colorspace, bitmapInfo: 0)!
10
11 bitmapContext.draw(image.cgImage!, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
12 CVPixelBufferUnlockBaseAddress(pixelbuffer!, CVPixelBufferLockFlags(rawValue: 0))  // balance the lock on line 6

This solution is in Swift and is for a grayscale image. The changes that must be made, depending on the type of image, are:

  • Line 5 | kCVPixelFormatType_OneComponent8 to another OSType (kCVPixelFormatType_32ARGB for RGB)
  • Line 8 | colorspace to another CGColorSpace (CGColorSpaceCreateDeviceRGB for RGB)
  • Line 9 | bitsPerComponent stays at 8: it is the number of bits per component, not per pixel (4 components × 8 bits = 32 bits per pixel for ARGB)
  • Line 9 | bitmapInfo to a CGBitmapInfo value with matching alpha info (e.g. CGImageAlphaInfo.noneSkipFirst.rawValue for ARGB)
Ryan asked Jun 09 '17 15:06


People also ask

How to create a data object from a cvpixelbuffer?

That makes sense, since the CMSampleBuffer contains a pixel buffer (a bitmap), not a JPEG. You need to lock the data using CVPixelBufferLockBaseAddress(_:_:), then access it using CVPixelBufferGetBaseAddress(_:). You can then create a Data object from the bytes at this base address.
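The steps above can be sketched as follows; this is a minimal illustration assuming a non-planar pixel buffer you already hold in `pixelBuffer`:

```swift
import CoreVideo
import Foundation

// Sketch: copy a (non-planar) pixel buffer's bytes into a Data object.
func data(from pixelBuffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)!
    let size = CVPixelBufferGetDataSize(pixelBuffer)
    // Data(bytes:count:) copies the bytes. Avoid the bytesNoCopy initializers here,
    // because the base address is only valid while the buffer is locked.
    return Data(bytes: baseAddress, count: size)
}
```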

What is a core video pixel buffer?

The Core Video pixel buffer is an image buffer that holds pixels in main memory. In some contexts you have to work with the data types of lower-level frameworks. For image and video data, the Core Video and Core Image frameworks serve to process digital image or video data.

How to get a cgimage from a buffer?

UIImage is a wrapper around CGImage, so to get a CGImage you just need to read its .cgImage property. The other options are to create a CIImage from the buffer (already posted) or to use the Accelerate framework, which is probably the fastest but also the hardest.
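For the reverse direction (buffer to CGImage), a minimal sketch using the CIImage route mentioned above, assuming the buffer holds a renderable format such as 32BGRA:

```swift
import CoreImage
import CoreVideo

// Sketch: obtain a CGImage from a CVPixelBuffer by going through CIImage.
func cgImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    // Render the full extent of the CIImage into a CGImage.
    return context.createCGImage(ciImage, from: ciImage.extent)
}
```

Creating a CIContext is relatively expensive, so in real code you would typically create it once and reuse it.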

Why can't I copy the data from one pixel buffer to another?

Make sure you do not use one of the Data bytesNoCopy initializers to copy the data, because the buffer pointed to by the base address may not exist persistently. Note that you may have to use CVPixelBufferGetBaseAddressOfPlane(_:_:) for each plane if the pixel buffer is planar, and decide how to combine the planes into a single Data object.
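One way to handle the planar case described above is to copy each plane and append it to a single Data object; a sketch, assuming simple concatenation of the planes is acceptable for your use:

```swift
import CoreVideo
import Foundation

// Sketch: for a planar buffer (e.g. kCVPixelFormatType_420YpCbCr8BiPlanarFullRange),
// copy each plane's bytes and concatenate them into one Data object.
func data(fromPlanar pixelBuffer: CVPixelBuffer) -> Data {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    var combined = Data()
    for plane in 0..<CVPixelBufferGetPlaneCount(pixelBuffer) {
        let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, plane)!
        let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, plane)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
        combined.append(Data(bytes: base, count: bytesPerRow * height))
    }
    return combined
}
```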


1 Answer

You can take a look at this tutorial: https://www.hackingwithswift.com/whats-new-in-ios-11. The code is in Swift 4.

func buffer(from image: UIImage) -> CVPixelBuffer? {
  let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
               kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
  var pixelBuffer: CVPixelBuffer?
  let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height),
                                   kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
  guard status == kCVReturnSuccess else {
    return nil
  }

  CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
  let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)

  let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
  let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height),
                          bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                          space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

  // Flip the coordinate system so UIKit drawing comes out right side up.
  context?.translateBy(x: 0, y: image.size.height)
  context?.scaleBy(x: 1.0, y: -1.0)

  UIGraphicsPushContext(context!)
  image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
  UIGraphicsPopContext()
  CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))

  return pixelBuffer
}
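A possible usage sketch, feeding the converted buffer to a Core ML model. `Resnet50` stands in for whatever generated model class you are using, and the `image` input name is an assumption; substitute the names from your own .mlmodel:

```swift
import UIKit
import CoreML

// Hypothetical usage: convert a UIImage and run a prediction on it.
let image = UIImage(named: "photo")!
if let pixelBuffer = buffer(from: image),
   let model = try? Resnet50(configuration: MLModelConfiguration()),
   let prediction = try? model.prediction(image: pixelBuffer) {
    print(prediction.classLabel)
}
```

Note that most image models expect a fixed input size (e.g. 224×224), so you may need to resize the UIImage before converting it.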
onmyway133 answered Oct 05 '22 11:10