Trying to dynamically color transparent UIImages but keep getting a blurry result. What am I doing wrong?

My iPhone app features custom UITableViewCells that each feature an icon. In the normal cell state, these icons are black with a transparent background. Instead of bundling a second set of inverted icons with the app for a highlighted state (white on a transparent background) I'd like to invert these icons on-the-fly using Core Graphics whenever the user touches the corresponding table cell.

I've found some other answers about overlaying a UIImage with a color, or re-coloring UIImages, but all of these techniques produce a blurry result for me (see below). I've tried all kinds of CGBlendModes, as well as manually computing a more accurate mask (perhaps I did that incorrectly), but it seems that the semi-transparent pixels around the edges of my icons are having their opacity mangled or dropped entirely, giving a choppy/blurred appearance. I'm at a loss for what I'm doing wrong.

It's also not really an option to change all my icons so that they're pure black/white with no transparency - I need the icons to sit on a transparent background so that they can be overlaid on top of other UI elements as well.

The code I'm using to invert the icons (courtesy of Chadwick Wood) is below; I'm calling this method on each of my original icons and passing [UIColor whiteColor] as the second argument. Example output (on an iPhone 4 with iOS 4.1) is also shown below (ignore the blue background on the highlighted image; it's the highlighted background of a selected table cell).

Any help is greatly appreciated.

Example input & output:

[image: icon before & after programmatic recoloring]

@implementation UIImage(FFExtensions)

+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color {

 // load the image
 UIImage *img = [UIImage imageNamed:name];

 // begin a new image context, to draw our colored image onto
 UIGraphicsBeginImageContext(img.size);

 // get a reference to that context we created
 CGContextRef context = UIGraphicsGetCurrentContext();
 CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

 // set the fill color
 [color setFill];

 // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
 CGContextTranslateCTM(context, 0, img.size.height);
 CGContextScaleCTM(context, 1.0, -1.0);

 // set the blend mode to multiply and compute the image rect
 CGContextSetBlendMode(context, kCGBlendModeMultiply);
 CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);
 //CGContextDrawImage(context, rect, img.CGImage);

 // clip to a mask matching the shape of the image, then fill a colored rectangle through it
 CGContextClipToMask(context, rect, img.CGImage);
 CGContextAddRect(context, rect);
 CGContextDrawPath(context, kCGPathFill);


 // generate a new UIImage from the graphics context we drew onto
 UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
 UIGraphicsEndImageContext();

 // return the colored image
 return coloredImg;
}

@end
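
For context, here's roughly how I'm using the category when configuring cells. This is just a minimal sketch: the reuse identifier and icon filename below are placeholders, not the actual names from my project.

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
 static NSString *CellIdentifier = @"IconCell"; // placeholder identifier
 UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
 if (cell == nil) {
  cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
 }

 // normal state: the original black-on-transparent icon (placeholder filename)
 cell.imageView.image = [UIImage imageNamed:@"icon.png"];

 // highlighted state: the same icon recolored white on the fly;
 // UIImageView switches to highlightedImage automatically when the cell highlights
 cell.imageView.highlightedImage = [UIImage imageNamed:@"icon.png" withColor:[UIColor whiteColor]];

 return cell;
}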
Asked Nov 09 '10 by Jon G.

1 Answer

Thanks to Peter & Steven's observation that the resolution of the output image seemed to be lower than the input, I realized that I was not accounting for the scale factor of the screen when creating my image context.

Changing the line:

UIGraphicsBeginImageContext(img.size);

to

UIGraphicsBeginImageContextWithOptions(img.size, NO, [UIScreen mainScreen].scale);

fixes the problem. (On a Retina display, [UIScreen mainScreen].scale is 2.0, so the original code was drawing into a context with only half the pixel resolution of the @2x source image, and the result was then scaled back up, hence the blur.)
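
In full, the corrected method looks like this (identical to the code in the question apart from the context creation):

+ (UIImage *)imageNamed:(NSString *)name withColor:(UIColor *)color {

 // load the image
 UIImage *img = [UIImage imageNamed:name];

 // begin a new image context at the screen's scale, so Retina devices
 // render at full resolution instead of 1x
 UIGraphicsBeginImageContextWithOptions(img.size, NO, [UIScreen mainScreen].scale);

 CGContextRef context = UIGraphicsGetCurrentContext();
 CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

 // set the fill color
 [color setFill];

 // translate/flip the graphics context (for transforming from CG* coords to UI* coords)
 CGContextTranslateCTM(context, 0, img.size.height);
 CGContextScaleCTM(context, 1.0, -1.0);

 // set the blend mode to multiply and compute the image rect
 CGContextSetBlendMode(context, kCGBlendModeMultiply);
 CGRect rect = CGRectMake(0, 0, img.size.width, img.size.height);

 // clip to a mask matching the shape of the image, then fill a colored rectangle through it
 CGContextClipToMask(context, rect, img.CGImage);
 CGContextAddRect(context, rect);
 CGContextDrawPath(context, kCGPathFill);

 // generate a new UIImage from the graphics context we drew onto
 UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
 UIGraphicsEndImageContext();

 return coloredImg;
}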

Answered Sep 28 '22 by Jon G.