AVFoundation: Fit Video to CALayer correctly when exporting

Problem:

I'm having trouble getting videos I create with AVFoundation to show up in the video layer (a CALayer) with the correct dimensions.

Example:

Here is what the video should look like (as it's displayed to the user in the app):

[Screenshot: the intended result as shown in the app]

However, here's the resulting video when it's exported:

[Screenshot: the exported video]

Details

As you can see, it's meant to be a square video with a green background, with the video fitted to a specified frame. However, the exported video doesn't fill the CALayer that contains it (note the black space the video should be stretched into).

Sometimes the video does fill the layer but is stretched beyond its bounds (too much width or too much height), and it often doesn't maintain the video's natural aspect ratio.

Code

CGRect displayedFrame = [self adjustedVideoBoundsFromVideo:gifVideo];//the cropped frame
CGRect renderFrame = [self renderSizeForGifVideo:gifVideo]; //the full rendersize
AVAsset * originalAsset = self.videoAsset;

AVAssetTrack * videoTrack = [[originalAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableComposition * mainComposition = [AVMutableComposition composition];

AVMutableCompositionTrack * compositionTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

[compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, originalAsset.duration) ofTrack:videoTrack atTime:kCMTimeZero error:nil];

CALayer * parentLayer = [CALayer layer];
CALayer * backgroundLayer = [CALayer layer];
CALayer * videoLayer = [CALayer layer];
parentLayer.frame = renderFrame;
backgroundLayer.frame = parentLayer.bounds;
backgroundLayer.backgroundColor = self.backgroundColor.CGColor;
videoLayer.frame = displayedFrame;
[parentLayer addSublayer:backgroundLayer];
[parentLayer addSublayer:videoLayer];


AVMutableVideoComposition * videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(renderFrame.size.width, renderFrame.size.height);

videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
                         videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];

AVMutableVideoCompositionInstruction * instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, mainComposition.duration);

AVMutableVideoCompositionLayerInstruction * layerInstruction = [AVMutableVideoCompositionLayerInstruction
                                                                videoCompositionLayerInstructionWithAssetTrack:videoTrack];


instruction.layerInstructions = @[layerInstruction];
videoComposition.instructions = @[instruction];

NSString* videoName = @"myNewGifVideo.mp4";

NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL    *exportUrl = [NSURL fileURLWithPath:exportPath];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}

AVAssetExportSession * exporter = [[AVAssetExportSession alloc] initWithAsset:mainComposition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputFileType = AVFileTypeMPEG4;
exporter.outputURL = exportUrl;

[exporter exportAsynchronouslyWithCompletionHandler:^
 {
     dispatch_async(dispatch_get_main_queue(), ^{
         self.finalVideo = exportUrl;
         [self.delegate shareManager:self didCreateVideo:self.finalVideo];
         if (completionBlock){
             completionBlock();
         }
     });
 }];

What I've tried:

I tried adjusting the videoLayer's frame, bounds, and contentsGravity, none of which helped.

I tried adding a transform to the AVMutableVideoCompositionLayerInstruction to scale the video to the size of the displayedFrame (many different videos can be chosen, and their widths and heights vary; each one shows up differently in the resulting video, none of them correctly). Transforming would sometimes get one dimension right (usually the width) but mess up the other, and it never got a dimension consistently right once I cropped or scaled the video slightly differently.
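Roughly, the scaling attempt looked like this (a sketch; the exact scale factors varied from video to video):

// Attempted scaling: map the track's naturalSize onto the cropped display rect.
// This never fit both dimensions correctly.
CGAffineTransform scale = CGAffineTransformMakeScale(displayedFrame.size.width / videoTrack.naturalSize.width,
                                                     displayedFrame.size.height / videoTrack.naturalSize.height);
[layerInstruction setTransform:scale atTime:kCMTimeZero];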

I've tried changing the renderSize of the videoComposition but that ruins the square crop.

I can't seem to get it right. How can I get the video to exactly fill the videoLayer at its displayedFrame? (Final note: the video's naturalSize differs from displayedFrame, which is why I tried transforming it.)

Asked by Rob Caraway on Dec 20 '22


1 Answer

When the video is rendered in your videoLayer, an implicit transform t is applied to it (we don't have access to it, but it's some initial transform that the render tool applies internally). To make the video exactly fill that layer on export, we have to understand where that initial transform comes from.

t shows a strange behavior: it depends on the renderSize of your video composition (in your example, a square). You can see that if you set the renderSize to anything else, the scale and aspect ratio of the video rendered in videoLayer change too, even if you didn't change the videoLayer's frame at all. I don't see how this behavior makes sense (the frame of the composition and the frame of the video layer that's part of the composition should be completely independent), so I think it's a bug in AVVideoCompositionCoreAnimationTool.

To correct this behavior of t, apply the following transform to your layer instruction:

// Scale from the track's naturalSize to the composition's renderSize,
// cancelling out the implicit transform applied by the animation tool.
let bugFixTransform = CGAffineTransform(scaleX: renderSize.width/videoTrack.naturalSize.width,
                                        y: renderSize.height/videoTrack.naturalSize.height)
videoLayerInstruction.setTransform(bugFixTransform, at: .zero)

The video will then exactly fill videoLayer.
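Applied to the Objective-C pipeline from the question, the fix amounts to setting this transform on the existing layer instruction (a sketch, using renderFrame, videoTrack and layerInstruction from the question's code):

// Cancel out the animation tool's implicit scaling by mapping the track's
// naturalSize onto the composition's renderSize (renderFrame.size here).
CGAffineTransform bugFixTransform = CGAffineTransformMakeScale(renderFrame.size.width / videoTrack.naturalSize.width,
                                                               renderFrame.size.height / videoTrack.naturalSize.height);
[layerInstruction setTransform:bugFixTransform atTime:kCMTimeZero];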

If the video doesn't have the standard orientation, two more transforms have to be applied to fix the orientation and scale:

let orientationAspectTransform: CGAffineTransform
// preferredTransform.a == 0 means the track is rotated by 90° or 270°,
// i.e. its naturalSize width and height are swapped on display.
let sourceVideoIsRotated: Bool = videoTrack.preferredTransform.a == 0
if sourceVideoIsRotated {
  // Swap the aspect ratio to compensate for the rotated naturalSize.
  orientationAspectTransform = CGAffineTransform(scaleX: videoTrack.naturalSize.width/videoTrack.naturalSize.height,
                                                 y: videoTrack.naturalSize.height/videoTrack.naturalSize.width)
} else {
  orientationAspectTransform = .identity
}

let bugFixTransform = CGAffineTransform(scaleX: renderSize.width/videoTrack.naturalSize.width,
                                        y: renderSize.height/videoTrack.naturalSize.height)
let transform =
  videoTrack.preferredTransform
    .concatenating(bugFixTransform)
    .concatenating(orientationAspectTransform)
videoLayerInstruction.setTransform(transform, at: .zero)

Update: In case you're using preferredTransform, note that it is also broken (including iOS 14) and can cause unexpected transforms in some situations; see this question for a description and a workaround.

Answered by Theo on Dec 24 '22