I need help writing an Objective-C function that takes in an array of UIImages/PNGs and returns/saves one tall image of all the images stitched together vertically, in order. I'm new to this, so please go slow and take it easy :)
My idea so far: draw a UIView, add each image as a subview, and then ???
Swift 4:
import UIKit
import AVFoundation // needed for AVMakeRect

extension Array where Element: UIImage {
    func stitchImages(isVertical: Bool) -> UIImage {
        // The common frame size is the largest width and height found in the array.
        let maxWidth = self.map { $0.size.width }.max() ?? 0
        let maxHeight = self.map { $0.size.height }.max() ?? 0
        let maxSize = CGSize(width: maxWidth, height: maxHeight)
        let totalSize = isVertical
            ? CGSize(width: maxSize.width, height: maxSize.height * CGFloat(self.count))
            : CGSize(width: maxSize.width * CGFloat(self.count), height: maxSize.height)
        let renderer = UIGraphicsImageRenderer(size: totalSize)

        return renderer.image { (context) in
            for (index, image) in self.enumerated() {
                // AVMakeRect keeps each image's aspect ratio inside its frame.
                let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical
                    ? CGRect(x: 0, y: maxSize.height * CGFloat(index), width: maxSize.width, height: maxSize.height)
                    : CGRect(x: maxSize.width * CGFloat(index), y: 0, width: maxSize.width, height: maxSize.height))
                image.draw(in: rect)
            }
        }
    }
}
Below are Swift 3 and Swift 2 examples that stitch images together vertically or horizontally. They use the dimensions of the largest image in the array provided by the caller to determine the common frame size that each individual image is stitched into.
Note: The Swift 4 and Swift 3 examples preserve each image's aspect ratio, while the Swift 2 example does not. See the note inline below regarding that.
UPDATE: Added Swift 3 example
Swift 3:
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        // Use the largest width and height in the array as the common frame size.
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        let maxSize = CGSize(width: maxWidth, height: maxHeight)
        var totalSize: CGSize
        if isVertical {
            totalSize = CGSize(width: maxSize.width, height: maxSize.height * CGFloat(images.count))
        } else {
            totalSize = CGSize(width: maxSize.width * CGFloat(images.count), height: maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        // enumerated() gives each image's position directly and, unlike index(of:),
        // still works if the same image appears in the array more than once.
        for (index, image) in images.enumerated() {
            let offset = CGFloat(index)
            // AVMakeRect preserves the image's aspect ratio inside its frame.
            let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                CGRect(x: 0, y: maxSize.height * offset, width: maxSize.width, height: maxSize.height) :
                CGRect(x: maxSize.width * offset, y: 0, width: maxSize.width, height: maxSize.height))
            image.draw(in: rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
Note: The original Swift 2 example below does not preserve the aspect ratio. All images are stretched to fill a bounding box whose width and height are the maxima across the array, so any non-square image can be distorted in one dimension. If you're using Swift 2 and want to preserve the aspect ratio, apply the AVMakeRect() modification from the Swift 3 example. Since I no longer have access to a Swift 2 playground and can't test it for errors, I'm not updating the Swift 2 example here.
Swift 2: (Doesn't preserve aspect ratio. Fixed in the Swift 3 example above)
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages : UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize : CGSize, maxSize = CGSizeMake(maxWidth, maxHeight)
        if isVertical {
            totalSize = CGSizeMake(maxSize.width, maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSizeMake(maxSize.width * (CGFloat)(images.count), maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            var rect : CGRect, offset = (CGFloat)((images as NSArray).indexOfObject(image))
            if isVertical {
                rect = CGRectMake(0, maxSize.height * offset, maxSize.width, maxSize.height)
            } else {
                rect = CGRectMake(maxSize.width * offset, 0, maxSize.width, maxSize.height)
            }
            image.drawInRect(rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
The normal way is to create a bitmap image context, draw your images into it at the required positions, and then get the image from the image context.
You can do this with UIKit, which is somewhat easier, but isn't thread safe, so it will need to run on the main thread and will block the UI.
There is loads of example code around for this, but if you want to understand it properly, you should look at UIGraphicsBeginImageContext, UIGraphicsGetCurrentContext, UIGraphicsGetImageFromCurrentImageContext and UIImage's drawInRect: method. Don't forget UIGraphicsEndImageContext.
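In Objective-C that UIKit sequence boils down to roughly this skeleton (the variable names here are placeholders; the answers further down flesh it out into working code):

UIGraphicsBeginImageContext(totalSize);                   // create the bitmap image context
[image drawInRect:CGRectMake(0, yOffset, width, height)]; // repeat for each image, bumping yOffset
UIImage *stitched = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();                              // always balance the Begin call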
You can also do this with Core Graphics, which is, AFAIK, safe to use on background threads (I've not had a crash from it yet). Efficiency is about the same, as UIKit just uses CG under the hood. Key words for this are CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage, CGContextTranslateCTM, CGContextScaleCTM and CGContextRelease (no ARC for Core Graphics). The scaling and translating are needed because CG has its origin in the bottom left corner and Y increases upwards.
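For what it's worth, here is a rough sketch of that Core Graphics route for the vertical case. The method name, the fixed row height and the bottom-up Y maths are my own assumptions, and each image is simply stretched into its slot:

// CG objects are not managed by ARC, so each Create call below is paired with a Release call.
- (UIImage *)stitchImagesWithCoreGraphics:(NSArray *)images
                                    width:(CGFloat)width
                                rowHeight:(CGFloat)rowHeight {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, (size_t)width, (size_t)(rowHeight * images.count),
                                                 8, 0, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!context) return nil;

    // CG's origin is the bottom-left corner with Y increasing upwards, so lay the rows out
    // bottom-up to get the first image at the top of the finished picture.
    for (NSUInteger i = 0; i < images.count; i++) {
        UIImage *image = images[i];
        CGRect rect = CGRectMake(0, rowHeight * (images.count - 1 - i), width, rowHeight);
        CGContextDrawImage(context, rect, image.CGImage); // stretches the image to fill the slot
    }

    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}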
There is also a third way, which is to use CG for the context but save yourself all the coordinate pain by using a CALayer: set your CGImage (UIImage.CGImage) as the layer's contents and then render the layer into the context. This is still thread safe and lets the layer take care of all the transformations. The keyword for this is renderInContext:.
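A quick sketch of that CALayer route (again, the method name and the vertical-only layout are my assumptions, and for brevity it uses a UIKit image context rather than a hand-built CGBitmapContext):

#import <QuartzCore/QuartzCore.h>

- (UIImage *)stitchImagesWithLayers:(NSArray *)images frameSize:(CGSize)frameSize {
    CALayer *container = [CALayer layer];
    container.frame = CGRectMake(0, 0, frameSize.width, frameSize.height * images.count);

    // One sublayer per image; contentsGravity does the aspect-ratio work for us.
    [images enumerateObjectsUsingBlock:^(UIImage *image, NSUInteger index, BOOL *stop) {
        CALayer *imageLayer = [CALayer layer];
        imageLayer.frame = CGRectMake(0, frameSize.height * index, frameSize.width, frameSize.height);
        imageLayer.contents = (__bridge id)image.CGImage;
        imageLayer.contentsGravity = kCAGravityResizeAspect;
        [container addSublayer:imageLayer];
    }];

    UIGraphicsBeginImageContextWithOptions(container.bounds.size, NO, 0);
    [container renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}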
I know I'm a bit late here, but hopefully this can help someone out. If you're trying to create one large image out of an array, you can use this method:
- (UIImage *)mergeImagesFromArray:(NSArray *)imageArray {
    if ([imageArray count] == 0) return nil;

    // Every image is drawn into a slot the size of the first image in the array.
    UIImage *exampleImage = [imageArray firstObject];
    CGSize imageSize = exampleImage.size;
    CGSize finalSize = CGSizeMake(imageSize.width, imageSize.height * [imageArray count]);

    UIGraphicsBeginImageContext(finalSize);
    // An explicit index avoids indexOfObject:, which misbehaves if the same image
    // appears in the array more than once.
    for (NSUInteger index = 0; index < [imageArray count]; index++) {
        UIImage *image = imageArray[index];
        [image drawInRect:CGRectMake(0, imageSize.height * index,
                                     imageSize.width, imageSize.height)];
    }
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
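Since the question also asks about saving the result, here's a hypothetical usage snippet; the image names, the stitched.png file name and the choice to write a PNG into the Documents directory are my additions, not part of this answer:

NSArray *images = @[[UIImage imageNamed:@"one.png"],
                    [UIImage imageNamed:@"two.png"],
                    [UIImage imageNamed:@"three.png"]];
UIImage *tallImage = [self mergeImagesFromArray:images];

// Save the stitched image as a PNG in the app's Documents directory.
NSData *pngData = UIImagePNGRepresentation(tallImage);
NSString *documentsDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) firstObject];
NSString *path = [documentsDirectory stringByAppendingPathComponent:@"stitched.png"];
[pngData writeToFile:path atomically:YES];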
Try this piece of code. I tried stitching two images together and displayed them in a UIImageView:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; //first image
UIImage *image = [UIImage imageNamed:@"top.png"]; //foreground image
CGSize newSize = CGSizeMake(209, 260); //size of image view
UIGraphicsBeginImageContext( newSize );
// drawing 1st image
[bottomImage drawInRect:CGRectMake(0,0,newSize.width/2,newSize.height/2)];
// drawing the 2nd image after the 1st
[image drawInRect:CGRectMake(0,newSize.height/2,newSize.width/2,newSize.height/2)] ;
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
join.image = newImage;
join is the name of the image view, and you'll see the two images rendered as a single image.