
How do I get the RGB Value of a pixel using CGContext?

I'm trying to edit images by changing the pixels.

I have the following code:

    let imageRect = CGRectMake(0, 0, self.image.image!.size.width, self.image.image!.size.height)

    UIGraphicsBeginImageContext(self.image.image!.size)
    let context = UIGraphicsGetCurrentContext()

    CGContextSaveGState(context)
    CGContextDrawImage(context, imageRect, self.image.image!.CGImage)

    for x in 0...Int(self.image.image!.size.width) {
        for y in 0...Int(self.image.image!.size.height) {
            var red = 0
            if y % 2 == 0 {
                red = 255
            }

            CGContextSetRGBFillColor(context, CGFloat(red/255), 0.5, 0.5, 1)
            CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
        }
    }
    CGContextRestoreGState(context)
    self.image.image = UIGraphicsGetImageFromCurrentImageContext()

I'm looping through all the pixels and changing the value of each pixel, then converting the context back to an image. What I want to do is somehow get the value of the current pixel (inside the inner y loop) and do something with that data. I haven't found anything about this particular problem.

asked Mar 15 '23 by SemAllush

2 Answers

Under the covers, UIGraphicsBeginImageContext creates a CGBitmapContext. You can get access to the context's pixel storage using CGBitmapContextGetData. The problem with this approach is that the UIGraphicsBeginImageContext function chooses the byte order and color space used to store the pixel data. Those choices (particularly the byte order) could change in future versions of iOS (or even on different devices).

So instead, let's create the context directly with CGBitmapContextCreate, so we can be sure of the byte order and color space.

In my playground, I've added a test image named pic@2x.jpeg.

import XCPlayground
import UIKit

let image = UIImage(named: "pic.jpeg")!
XCPCaptureValue("image", value: image)

Here's how we create the bitmap context, taking the image scale into account (which you didn't do in your question):

let rowCount = Int(image.size.height * image.scale)
let columnCount = Int(image.size.width * image.scale)
let stride = 64 * ((columnCount * 4 + 63) / 64)
let context = CGBitmapContextCreate(nil, columnCount, rowCount, 8, stride,
    CGColorSpaceCreateDeviceRGB(),
    CGBitmapInfo.ByteOrder32Little.rawValue |
    CGImageAlphaInfo.PremultipliedLast.rawValue)
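As an aside, the stride expression above just rounds the minimal row width (columnCount * 4 bytes, since each RGBA pixel takes 4 bytes) up to the next multiple of 64 bytes, which keeps each row nicely aligned. Here's a sketch of that arithmetic (the function name is mine, not a Core Graphics API):

```swift
// Round bytes-per-row up to the next 64-byte multiple,
// mirroring the stride expression used above.
func alignedBytesPerRow(_ columnCount: Int) -> Int {
    let minimumBytes = columnCount * 4        // 4 bytes per RGBA pixel
    return 64 * ((minimumBytes + 63) / 64)    // round up to a multiple of 64
}

print(alignedBytesPerRow(50))   // 200 bytes rounds up to 256
print(alignedBytesPerRow(16))   // 64 bytes is already aligned
```

This is also why the pixel loop below indexes with y * stride / sizeof(Pixel.self) + x: the byte offset of pixel (x, y) is y * stride + x * 4, and dividing the row part by the 4-byte Pixel size converts it into an index in whole Pixels.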

Next, we adjust the coordinate system to match what UIGraphicsBeginImageContextWithOptions would do, so that we can draw the image correctly and easily:

CGContextTranslateCTM(context, 0, CGFloat(rowCount))
CGContextScaleCTM(context, image.scale, -image.scale)

UIGraphicsPushContext(context!)
image.drawAtPoint(CGPointZero)
UIGraphicsPopContext()

Note that UIImage.drawAtPoint takes image.orientation into account. CGContextDrawImage does not.

Now let's get a pointer to the raw pixel data from the context. The code is clearer if we define a structure to access the individual components of each pixel:

struct Pixel {
    var a: UInt8
    var b: UInt8
    var g: UInt8
    var r: UInt8
}

let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))

Note that the order of the Pixel members is defined to match the specific bits I set in the bitmapInfo argument to CGBitmapContextCreate.
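To see why the members come out in a, b, g, r order: with CGImageAlphaInfo.PremultipliedLast, the logical 32-bit pixel word is RGBA (red in the most significant byte), and CGBitmapInfo.ByteOrder32Little stores that word low byte first. A quick sketch of the byte shuffle (the color values here are made up purely for illustration):

```swift
// Logical RGBA word: R = 0x11, G = 0x22, B = 0x33, A = 0x44
let rgba: UInt32 = 0x11223344

// A little-endian store writes the low-order byte first, so the
// in-memory byte order is A, B, G, R -- matching the Pixel struct.
let inMemory: [UInt8] = [
    UInt8(rgba & 0xFF),          // 0x44 -> a
    UInt8((rgba >> 8) & 0xFF),   // 0x33 -> b
    UInt8((rgba >> 16) & 0xFF),  // 0x22 -> g
    UInt8((rgba >> 24) & 0xFF),  // 0x11 -> r
]
```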

Now we can loop over the pixels. Note that we use rowCount and columnCount, computed above, to visit all the pixels, regardless of the image scale:

for y in 0 ..< rowCount {
    if y % 2 == 0 {
        for x in 0 ..< columnCount {
            let pixel = pixels.advancedBy(y * stride / sizeof(Pixel.self) + x)
            pixel.memory.r = 255
        }
    }
}

Finally, we get a new image from the context:

let newImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!, scale: image.scale, orientation: UIImageOrientation.Up)

XCPCaptureValue("newImage", value: newImage)

The result, in my playground's timeline:

(screenshot: the playground timeline showing the modified image)

Finally, note that if your images are large, going through them pixel by pixel can be slow. If you can find a way to perform your image manipulation using Core Image or GPUImage, it'll be a lot faster. Failing that, dropping to Objective-C and vectorizing manually (using NEON intrinsics) may provide a big boost.

answered Mar 31 '23 by rob mayoff


Ok, I think I have a solution that should work for you in Swift 2.

Credit goes to this answer for the UIColor extension below.

Since I needed an image to test this on I chose a slice (50 x 50 - top left corner) of your gravatar...

So the code below converts this:
(screenshot: original 50 x 50 image slice)

To this:
(screenshot: modified image)

This works for me in a playground - all you should have to do is copy and paste into a playground to see the result:

//: Playground - noun: a place where people can play

import UIKit
import XCPlayground

extension CALayer {

    func colorOfPoint(point:CGPoint) -> UIColor
    {
        var pixel:[CUnsignedChar] = [0,0,0,0]

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)

        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace,bitmapInfo.rawValue)

        CGContextTranslateCTM(context, -point.x, -point.y)

        self.renderInContext(context!)

        let red:CGFloat = CGFloat(pixel[0])/255.0
        let green:CGFloat = CGFloat(pixel[1])/255.0
        let blue:CGFloat = CGFloat(pixel[2])/255.0
        let alpha:CGFloat = CGFloat(pixel[3])/255.0

        //println("point color - red:\(red) green:\(green) blue:\(blue)")

        let color = UIColor(red:red, green: green, blue:blue, alpha:alpha)

        return color
    }
}

extension UIColor {
    var components:(red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r:CGFloat = 0
        var g:CGFloat = 0
        var b:CGFloat = 0
        var a:CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r,g,b,a)
    }
}


//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string:"https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50);
let imageSlice  = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea);
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)


let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))



let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)

UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()

CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)

for x in 0 ..< Int(image.size.width) {
    for y in 0 ..< Int(image.size.height) {
        let pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        }
        else {
            //color components are in the 0...1 range, so full red is 1.0, not 255
            CGContextSetRGBFillColor(context, 1.0, 0.5, 0.5, 1)
        }

        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()

I hope this works for you. I had fun playing around with this!

answered Apr 1 '23 by ProgrammierTier