I am using the following code to perform some manipulations on an image that I loaded, but I find that the result becomes blurry when it is shown on a Retina display:
- (UIImage *)createImageSection:(UIImage *)image section:(CGRect)section
{
    float originalWidth = image.size.width;
    float originalHeight = image.size.height;
    int w = originalWidth * section.size.width;
    int h = originalHeight * section.size.height;
    CGContextRef ctx = CGBitmapContextCreate(nil, w, h, 8, w * 8,
                                             CGImageGetColorSpace([image CGImage]),
                                             kCGImageAlphaPremultipliedLast);
    CGContextClearRect(ctx, CGRectMake(0, 0, originalWidth * section.size.width, originalHeight * section.size.height)); // w + h before
    CGContextTranslateCTM(ctx, (float)-originalWidth * section.origin.x, (float)-originalHeight * section.origin.y);
    CGContextDrawImage(ctx, CGRectMake(0, 0, originalWidth, originalHeight), [image CGImage]);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGContextRelease(ctx);
    CGImageRelease(cgImage);
    return resultImage;
}
How do I change this to make it Retina-compatible?
Thanks in advance for your help.
Best, DV
CGImages don't take the Retina-ness of your device into account, so you have to do it yourself.
To do that, multiply all of the coordinates and sizes that you pass to Core Graphics routines by the input image's scale
property (which will be 2.0 on Retina devices), so that all of your manipulation happens at double resolution.
Then change the initialization of resultImage
to use initWithCGImage:scale:orientation:
and pass in the same scale factor. This is what makes Retina devices render the output at native resolution rather than pixel-doubled resolution.
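In terms of the question's code, the two key changes look roughly like this. This is only a minimal sketch, not a complete method: `w`, `h`, `ctx`, and `cgImage` are the variables from the question, while `pixelWidth`/`pixelHeight` are just illustrative names.

CGFloat scale = image.scale;              // 2.0 on Retina devices, 1.0 otherwise
size_t pixelWidth  = (size_t)(w * scale); // bitmap dimensions in pixels, not points
size_t pixelHeight = (size_t)(h * scale);
CGContextRef ctx = CGBitmapContextCreate(nil, pixelWidth, pixelHeight, 8, pixelWidth * 4,
                                         CGImageGetColorSpace(image.CGImage),
                                         kCGImageAlphaPremultipliedLast);
CGContextScaleCTM(ctx, scale, scale);     // so the point-based drawing below fills the bitmap
// ... translate, draw, and create cgImage exactly as in the question ...
UIImage *resultImage = [[UIImage alloc] initWithCGImage:cgImage
                                                  scale:scale
                                            orientation:image.imageOrientation];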
Thanks for the answer, it helped me! To be clearer, the OP's code should be changed like this:
- (UIImage *)createImageSection:(UIImage *)image section:(CGRect)section
{
    CGFloat scale = [[UIScreen mainScreen] scale]; ////// ADD (or image.scale, as the answer suggests)
    float originalWidth = image.size.width;
    float originalHeight = image.size.height;
    int w = originalWidth * section.size.width * scale;   ////// CHANGE
    int h = originalHeight * section.size.height * scale; ////// CHANGE
    CGContextRef ctx = CGBitmapContextCreate(nil, w, h, 8, w * 4, ////// CHANGE (4 bytes per pixel, not 8)
                                             CGImageGetColorSpace([image CGImage]),
                                             kCGImageAlphaPremultipliedLast);
    CGContextClearRect(ctx, CGRectMake(0, 0, w, h)); ////// CHANGE (clear the whole bitmap)
    CGContextScaleCTM(ctx, scale, scale); ////// ADD (scale the bitmap context itself, before translating)
    CGContextTranslateCTM(ctx, (float)-originalWidth * section.origin.x, (float)-originalHeight * section.origin.y);
    CGContextDrawImage(ctx, CGRectMake(0, 0, originalWidth, originalHeight), [image CGImage]);
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *resultImage = [[UIImage alloc] initWithCGImage:cgImage
                                                      scale:scale
                                                orientation:image.imageOrientation]; ////// CHANGE (keeps the scale factor)
    CGContextRelease(ctx);
    CGImageRelease(cgImage);
    return resultImage;
}
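For example, you could call it like this to crop the top-left quarter of an image (a hypothetical call: `photo` is an assumed asset name, and the section rect is expressed as fractions of the image size, as in the code above):

UIImage *source = [UIImage imageNamed:@"photo"]; // hypothetical @2x asset
UIImage *topLeft = [self createImageSection:source
                                    section:CGRectMake(0.0, 0.0, 0.5, 0.5)];
// With initWithCGImage:scale:orientation:, topLeft.scale is 2.0 on a Retina
// device, so the cropped image stays sharp when displayed.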
Swift version of the two added lines (where ctx is the bitmap context created above, not UIGraphicsGetCurrentContext()):
let scale = UIScreen.mainScreen().scale
CGContextScaleCTM(ctx, scale, scale)