Does anybody know how to subtract one UIImage from another UIImage,
for example as shown in this screen:
Thanks for any response!
Image subtraction or pixel subtraction is a process whereby the digital numeric value of one pixel or whole image is subtracted from another image. This is primarily done for one of two reasons – levelling uneven sections of an image such as half an image having a shadow on it, or detecting changes between two images.
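For the arithmetic sense of the term, a minimal sketch of per-pixel subtraction over two same-sized RGBA buffers might look like the following (the subtractPixels helper and the assumption of tightly packed width * height * 4 byte buffers are illustrative, not part of any framework):

#include <stddef.h>
#include <stdint.h>

// Naive per-pixel difference of two same-sized RGBA buffers, clamped at zero.
// Assumes both buffers are width * height * 4 bytes with no row padding.
static void subtractPixels(const uint8_t *a, const uint8_t *b,
                           uint8_t *out, size_t width, size_t height)
{
    size_t count = width * height * 4;
    for (size_t i = 0; i < count; i++) {
        int diff = (int)a[i] - (int)b[i];
        out[i] = (uint8_t)(diff > 0 ? diff : 0);
    }
}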
I believe you can accomplish this by using the kCGBlendModeDestinationOut
blend mode. Create a new context, draw your background image, then draw the foreground image with this blend mode.
UIGraphicsBeginImageContextWithOptions(sourceImage.size, NO, sourceImage.scale);
[sourceImage drawAtPoint:CGPointZero];
[maskImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationOut alpha:1.0f];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
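On iOS 10 and later you can do the same drawing with UIGraphicsImageRenderer instead of the legacy UIGraphicsBeginImageContextWithOptions calls. A rough, untested equivalent of the five lines above, reusing the same variable names, would be:

UIGraphicsImageRendererFormat *format = [UIGraphicsImageRendererFormat defaultFormat];
format.scale = sourceImage.scale;
format.opaque = NO;
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:sourceImage.size format:format];
UIImage *result = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
    // draw the background, then punch out the foreground's opaque pixels
    [sourceImage drawAtPoint:CGPointZero];
    [maskImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationOut alpha:1.0f];
}];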
What does it mean to subtract an image? The sample image given shows more of a !red ("not red") operation. Let us say that to subtract image a from image b means to set every pixel in b that intersects a pixel in a to transparent. To perform the subtraction, what we are actually doing is masking image b with the inverse of image a. So, a good approach would be to create an image mask from the alpha channel of image a, then apply it to b. To create the mask you would do something like this:
// get access to the image bytes
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
// create a buffer to hold the mask values
size_t width = CGImageGetWidth(image.CGImage);
size_t height = CGImageGetHeight(image.CGImage);
size_t bytesPerRow = CGImageGetBytesPerRow(image.CGImage);
uint8_t *maskData = malloc(width * height);
// iterate over the pixel data, reading the alpha value of each pixel
// (this assumes 4 bytes per pixel with alpha in the last byte, e.g. RGBA8888;
// a CGImage mask is inverted - 0 paints, 255 hides - so copying the alpha
// directly hides b wherever a is opaque, which is exactly what we want)
const uint8_t *pixels = CFDataGetBytePtr(pixelData);
uint8_t *mask = maskData;
for (size_t y = 0; y < height; y++) {
    const uint8_t *alpha = pixels + y * bytesPerRow + 3;
    for (size_t x = 0; x < width; x++) {
        *mask = *alpha;
        mask++;
        alpha += 4; // skip to the next pixel
    }
}
// create the mask image from the buffer; CFDataCreate copies the bytes,
// so the malloc'd buffer can be freed even while maskImage is still in use
CFDataRef maskDataRef = CFDataCreate(kCFAllocatorDefault, maskData, width * height);
CGDataProviderRef maskProvider = CGDataProviderCreateWithCFData(maskDataRef);
CGImageRef maskImage = CGImageMaskCreate(width, height, 8, 8, width, maskProvider, NULL, false);
// cleanup
CFRelease(pixelData);
CFRelease(maskDataRef);
CGDataProviderRelease(maskProvider);
free(maskData);
Whew. Then, to mask image b, all you have to do is:
CGImageRef subtractedImage = CGImageCreateWithMask(b.CGImage, maskImage);
Hey presto.
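If you need a UIImage back (assuming b is a UIImage whose scale and orientation you want to preserve), you can wrap the result and release the Core Graphics objects like this:

UIImage *result = [UIImage imageWithCGImage:subtractedImage scale:b.scale orientation:b.imageOrientation];
// the UIImage retains the CGImage, so the temporary refs can be released
CGImageRelease(subtractedImage);
CGImageRelease(maskImage);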