I've created what I believe is a proper grayscale CIFilter by using a CIColorMonochrome filter with its input color set to white. The output looks grayscale to me, but I'm no expert, so I'm wondering whether this is an accurate implementation of a grayscale filter.
I haven't written code to dump and inspect the pixels, but since the input image is already RGB, I'm curious what format the output image would be in. For instance, how would I go about finding out whether each pixel has one component or three?
I know a CIImage is just a recipe for an image and that I eventually have to render it into a CGImageRef. I just don't understand what pixel format I should expect the rendered data to be in.
// Map the image onto a white monochrome color to desaturate it.
CIFilter *monoChromeFilter = [CIFilter filterWithName:@"CIColorMonochrome"];
[monoChromeFilter setValue:self.inputImage forKey:kCIInputImageKey];
CIColor *white = [CIColor colorWithRed:1.0 green:1.0 blue:1.0];
[monoChromeFilter setValue:white forKey:kCIInputColorKey];
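If it helps, the kind of pixel check I have in mind looks roughly like this (an untested Swift sketch; dumpPixelFormat is just a throwaway name). It renders the filter output through a plain CIContext and prints the resulting CGImage's layout:

import CoreImage

// Throwaway helper just to illustrate the check: render the monochrome
// filter's output and print the resulting CGImage's pixel layout.
func dumpPixelFormat(of inputImage: CIImage) {
    guard let filter = CIFilter(name: "CIColorMonochrome") else { return }
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    filter.setValue(CIColor(red: 1, green: 1, blue: 1), forKey: kCIInputColorKey)

    let context = CIContext(options: nil)
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else { return }

    // I'd expect an RGB(A) bitmap here even though the result looks gray,
    // since Core Image composites in an RGB working space.
    print(cgImage.bitsPerComponent)   // bits per channel, e.g. 8
    print(cgImage.bitsPerPixel)       // bits per pixel across all channels, e.g. 32
    print(cgImage.colorSpace as Any)  // e.g. an sRGB/device RGB color space
    print(cgImage.alphaInfo.rawValue) // whether and where alpha is stored
}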
I hope this helps (it's Swift): it desaturates the image with CIColorControls, then lifts the exposure slightly with CIExposureAdjust.
static func convertToBlackAndWhite(image: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: image) else { return nil }

    // First pass: CIColorControls with saturation 0 desaturates the image;
    // a small contrast boost keeps it from looking flat.
    guard let colorControls = CIFilter(name: "CIColorControls") else { return nil }
    colorControls.setValue(ciImage, forKey: kCIInputImageKey)
    colorControls.setValue(0.0, forKey: kCIInputBrightnessKey)
    colorControls.setValue(0.0, forKey: kCIInputSaturationKey)
    colorControls.setValue(1.1, forKey: kCIInputContrastKey)
    guard let intermediateImage = colorControls.outputImage else { return nil }

    // Second pass: CIExposureAdjust lifts the exposure slightly.
    guard let exposureFilter = CIFilter(name: "CIExposureAdjust") else { return nil }
    exposureFilter.setValue(intermediateImage, forKey: kCIInputImageKey)
    exposureFilter.setValue(0.7, forKey: kCIInputEVKey)
    guard let output = exposureFilter.outputImage else { return nil }

    // Render the CIImage recipe into a CGImage, then wrap it in a UIImage,
    // preserving the original scale and orientation.
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
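A minimal call site could look like this (ImageFilters is a placeholder for whatever type you declare the method on, and the asset name and imageView are stand-ins):

// Hypothetical usage; adjust the type name to wherever the method lives.
if let original = UIImage(named: "photo"),
   let blackAndWhite = ImageFilters.convertToBlackAndWhite(image: original) {
    imageView.image = blackAndWhite  // imageView: some UIImageView in your view
}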