I'm trying to darken a UIImage by grabbing its CGImage, reading each pixel, subtracting 0xa from it, and saving each pixel to a new buffer. But when I try to load that buffer back as an image, the function [to create a CGImage] returns nil. This means I must have done something wrong (I wouldn't be surprised) in my code. I suspect the buffer is improperly formatted or something similar. Can somebody familiar with Core Graphics help me spot the error?
var provider = CGImageGetDataProvider(imageArray[imageNumber]?.CGImage) //Get data provider for image in an array at index No. imageNumber
let data = CGDataProviderCopyData(provider)
var buffer = [Byte](count: CFDataGetLength(data), repeatedValue: 0) //create buffer for image data
CFDataGetBytes(data, CFRangeMake(0, CFDataGetLength(data)), &buffer) //load the image's bytes into buffer
var newBuffer = [Byte](count:buffer.count, repeatedValue: 0) //Going to make some changes, need a place to save new image
var index = 0
for aByte in buffer {
    if aByte > 0xa && aByte != 0xff {
        newBuffer[index] = (aByte - 0xa) //subtract 0xa from buffer, where possible
    }
    else {
        newBuffer[index] = (0xff) //I *think* there is no alpha channel, but every fourth byte in buffer is 0xff
    }
    index += 1
}
var coreGraphicsImage = CGImageCreateWithJPEGDataProvider(CGDataProviderCreateWithCFData(CFDataCreate(kCFAllocatorDefault, newBuffer, newBuffer.count)), nil, true, kCGRenderingIntentDefault) //create CGImage from newBuffer. RETURNS NIL!
let myImage = UIImage(CGImage: coreGraphicsImage) //also nil
imageView.image = myImage
A couple of thoughts:
Before you go writing your own image processing routine, you might consider applying one of the Core Image filters. It might make your life much easier, and it may give you more refined results. Just reducing each channel by some fixed number will introduce distortion that you weren't expecting (e.g. color shifts, saturation changes, etc.).
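For instance, a darkening pass can be a few lines with the standard CIExposureAdjust filter. This is just a sketch (the function name and default EV amount are my own; tune to taste):

```swift
import UIKit
import CoreImage

// Sketch: darken an image with Core Image instead of per-byte math.
// A negative input EV darkens; the default amount here is arbitrary.
func darkened(_ image: UIImage, byEV ev: Double = 0.5) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIExposureAdjust") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(-ev, forKey: kCIInputEVKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```

Because the filter works in a calibrated color space, it also avoids the channel-by-channel distortion mentioned above.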
If you were going to do this, I'd be wary about just grabbing the data provider and manipulating it as is. You'd probably want to introduce all sorts of conditional logic for the nature of the image's provider (bits per channel, ARGB vs RGBA, etc.). If you look at Apple's example in Technical Q&A #1509, you can instead retrieve a pixel buffer of a predetermined format (in their example, ARGB, 8 bits per component, four bytes per pixel).
This example is dated, but it shows how to create a context of a predetermined format and then draw an image into that context. You can then manipulate that data, and create a new image using this predetermined format using your own provider rather than the JPEG data provider.
The most significant issue in your code sample is that you are trying to use CGImageCreateWithJPEGDataProvider, which expects a "data provider supplying JPEG-encoded data." But your provider probably isn't JPEG-encoded, so it's going to fail. If you're going to use the data in the format of the original image's provider, then you have to create a new image using CGImageCreate (manually supplying the width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpace, bitmapInfo, etc.).
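In Swift 3 terms, that means something like the following. This is a sketch only: the function name is mine, and the hard-coded 8-bit RGBA/premultiplied-last parameters are assumptions you'd read off the original CGImage rather than trust:

```swift
import UIKit

// Sketch: rebuild a CGImage from raw bytes by supplying the format manually.
// Assumes 8 bits per component, 4 bytes per pixel, premultiplied-last alpha;
// verify these against the original image's properties before relying on them.
func imageFromRawRGBA(data: Data, width: Int, height: Int) -> CGImage? {
    guard let provider = CGDataProvider(data: data as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: 8,
                   bitsPerPixel: 32,
                   bytesPerRow: width * 4,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: true,
                   intent: .defaultIntent)
}
```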
There are some less serious problems with your routine:
You note that you're seeing every fourth byte is 0xff. In answer to the question in your code comments, that is undoubtedly the alpha channel. (You could confirm this by examining the CGBitmapInfo of the original CGImageRef.) You might not be using the alpha channel, but it's clearly there.
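A quick way to check (Swift 3; these properties exist on CGImage, though the function wrapper here is just for illustration):

```swift
import UIKit

// Sketch: inspect whether (and where) the alpha channel lives.
func describeAlpha(of image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    print("alphaInfo:", cgImage.alphaInfo.rawValue)  // e.g. premultipliedLast
    print("bitsPerPixel:", cgImage.bitsPerPixel)     // e.g. 32
    print("bitmapInfo:", cgImage.bitmapInfo.rawValue)
}
```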
Your routine sets any byte whose value is 0x0a or less (or exactly 0xff) to 0xff. That's clearly not your intent (e.g. if a pixel was black, you'd make it white!). You should check that logic.
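The usual fix is to clamp at zero rather than wrap around to white. A sketch of the intended per-byte step (the helper name is mine; apply it to color channel bytes only and leave the alpha byte alone):

```swift
// Sketch: darken one channel byte, clamping at zero instead of jumping to 0xff.
func darkenedByte(_ byte: UInt8, by amount: UInt8 = 0x0a) -> UInt8 {
    return byte >= amount ? byte - amount : 0
}
// darkenedByte(0x05) == 0x00 (dark pixels stay dark), darkenedByte(0x80) == 0x76
```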
In my tests, this method of iterating/manipulating the Byte array was very slow. I'm not entirely sure why, but if you manipulate the byte buffer directly, it is much faster.
So, below, please find a routine that creates a context of a predetermined format (RGBA, 8 bits per component, etc.), manipulates it (I'm converting to B&W, though you can do whatever you want), and creates a new image from that. In Swift 2:
func blackAndWhiteImage(image: UIImage) -> UIImage? {
    // get information about image
    let imageref = image.CGImage
    let width = CGImageGetWidth(imageref)
    let height = CGImageGetHeight(imageref)

    // create new bitmap context
    let bitsPerComponent = 8
    let bytesPerPixel = 4
    let bytesPerRow = width * bytesPerPixel
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = Pixel.bitmapInfo
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)

    // draw image to context
    let rect = CGRectMake(0, 0, CGFloat(width), CGFloat(height))
    CGContextDrawImage(context, rect, imageref)

    // manipulate binary data
    let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))

    for row in 0 ..< height {
        for col in 0 ..< width {
            let offset = Int(row * width + col)

            let red = Float(pixels[offset].red)
            let green = Float(pixels[offset].green)
            let blue = Float(pixels[offset].blue)
            let alpha = pixels[offset].alpha
            let luminance = UInt8(0.2126 * red + 0.7152 * green + 0.0722 * blue)
            pixels[offset] = Pixel(red: luminance, green: luminance, blue: luminance, alpha: alpha)
        }
    }

    // return the image
    let outputImage = CGBitmapContextCreateImage(context)!
    return UIImage(CGImage: outputImage, scale: image.scale, orientation: image.imageOrientation)
}
Where
struct Pixel: Equatable {
    private var rgba: UInt32

    var red: UInt8 {
        return UInt8((rgba >> 24) & 255)
    }

    var green: UInt8 {
        return UInt8((rgba >> 16) & 255)
    }

    var blue: UInt8 {
        return UInt8((rgba >> 8) & 255)
    }

    var alpha: UInt8 {
        return UInt8((rgba >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        rgba = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue
}

func ==(lhs: Pixel, rhs: Pixel) -> Bool {
    return lhs.rgba == rhs.rgba
}
Or, in Swift 3:
func blackAndWhite(image: UIImage) -> UIImage? {
    // get information about image
    let imageref = image.cgImage!
    let width = imageref.width
    let height = imageref.height

    // create new bitmap context
    let bitsPerComponent = 8
    let bytesPerPixel = 4
    let bytesPerRow = width * bytesPerPixel
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = Pixel.bitmapInfo
    let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)!

    // draw image to context
    let rect = CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height))
    context.draw(imageref, in: rect)

    // manipulate binary data
    guard let buffer = context.data else {
        print("unable to get context data")
        return nil
    }

    let pixels = buffer.bindMemory(to: Pixel.self, capacity: width * height)

    for row in 0 ..< height {
        for col in 0 ..< width {
            let offset = Int(row * width + col)

            let red = Float(pixels[offset].red)
            let green = Float(pixels[offset].green)
            let blue = Float(pixels[offset].blue)
            let alpha = pixels[offset].alpha
            let luminance = UInt8(0.2126 * red + 0.7152 * green + 0.0722 * blue)
            pixels[offset] = Pixel(red: luminance, green: luminance, blue: luminance, alpha: alpha)
        }
    }

    // return the image
    let outputImage = context.makeImage()!
    return UIImage(cgImage: outputImage, scale: image.scale, orientation: image.imageOrientation)
}
struct Pixel: Equatable {
    private var rgba: UInt32

    var red: UInt8 {
        return UInt8((rgba >> 24) & 255)
    }

    var green: UInt8 {
        return UInt8((rgba >> 16) & 255)
    }

    var blue: UInt8 {
        return UInt8((rgba >> 8) & 255)
    }

    var alpha: UInt8 {
        return UInt8((rgba >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        rgba = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: Pixel, rhs: Pixel) -> Bool {
        return lhs.rgba == rhs.rgba
    }
}
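Usage is then a one-liner, mirroring the `imageView.image` assignment in the question (the asset name "photo" is hypothetical):

```swift
import UIKit

// Usage sketch: assumes an image named "photo" exists in the asset catalog
// and an imageView outlet, as in the question's own code.
if let photo = UIImage(named: "photo"),
   let mono = blackAndWhite(image: photo) {
    imageView.image = mono
}
```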