I need to mirror a UIWebView's CALayers to a smaller CALayer. The smaller CALayer is essentially a picture-in-picture (PiP) of the larger UIWebView, and I'm having difficulty doing this. The only thing that comes close is CAReplicatorLayer, but since the original and the copy both have to sit under the CAReplicatorLayer as their parent, I can't split the original and the copy across different screens.
An illustration of what I'm trying to do:
The user needs to be able to interact with the smaller CALayer and both need to be in sync.
I've tried doing this with renderInContext: and CADisplayLink. Unfortunately there is some lag/stutter because it redraws every frame, 60 times a second. I need a way to do the mirroring without redrawing on each frame unless something has actually changed, so I need a way of knowing when the CALayer (or its child CALayers) becomes dirty.
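Roughly what I'm doing now (a simplified sketch; pipLayer is a stand-in for my small mirror layer):

// Simplified sketch of my current approach: re-render the web view on every
// display link tick and push the snapshot into the small "pip" layer.
// (webView and pipLayer are placeholders for my real properties.)
- (void)startMirroring
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(refreshPip:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)refreshPip:(CADisplayLink *)link
{
    // This redraws the whole web view 60 times a second, even when nothing
    // has changed, which is what causes the stutter.
    UIGraphicsBeginImageContextWithOptions(self.webView.bounds.size, YES, 0.0);
    [self.webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    self.pipLayer.contents = (__bridge id)snapshot.CGImage;
}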
I cannot simply use two UIWebViews because the two pages may end up different (timing is off, different background, etc.). I have no control over the web page being displayed. I also cannot mirror the entire iPad screen, as there are other elements on screen that should not appear on the external display.
Both the larger CALayer and the smaller "pip" CALayer need to match smoothly, frame for frame, on iOS 6; I do not need to support earlier versions.
The solution needs to be App Store passable.
As discussed in the comments, since the main need is knowing WHEN to update the layer (not how), I've moved my original answer below the "OLD ANSWER" line and added what we discussed in the comments:
First: 100% Apple Review Safe ;-)
Second: performance friendly and "theoretically" review safe... but not sure :-/
Let me explain how I arrived at this code:
The main goal is to understand when the TileLayer (a private CALayer subclass used internally by UIWebView) becomes dirty.
The problem is that you can't access it directly. However, you can use method swizzling to change the behavior of the layerSetNeedsDisplay: method on every CALayer and its subclasses.
You must be careful not to radically change the original behavior: do only what's necessary to add a "notification" when the method is called.
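Something along these lines (a sketch only: here I swizzle the public setNeedsDisplay for illustration; the fl_ prefix and the notification name are just placeholders):

#import <QuartzCore/QuartzCore.h>
#import <objc/runtime.h>

static NSString * const FLLayerDidSetNeedsDisplayNotification = @"FLLayerDidSetNeedsDisplayNotification";

@implementation CALayer (FLDirtyTracking)

+ (void)load
{
    // Exchange the implementations once, as early as possible.
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        Method original = class_getInstanceMethod(self, @selector(setNeedsDisplay));
        Method swizzled = class_getInstanceMethod(self, @selector(fl_setNeedsDisplay));
        method_exchangeImplementations(original, swizzled);
    });
}

- (void)fl_setNeedsDisplay
{
    // Implementations are exchanged, so this calls the ORIGINAL setNeedsDisplay:
    // the layer keeps behaving exactly as before.
    [self fl_setNeedsDisplay];

    // The only addition: tell whoever is listening that this layer became dirty.
    [[NSNotificationCenter defaultCenter] postNotificationName:FLLayerDidSetNeedsDisplayNotification
                                                        object:self];
}

@end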
Once you can detect each layerSetNeedsDisplay: call, the only remaining step is to work out which CALayer is involved: if it's the UIWebView's internal TileLayer, we trigger an "isDirty" notification.
But we can't just iterate through the UIWebView's layer hierarchy looking for the TileLayer; simply using isKindOfClass:[TileLayer class], for example, will surely get you rejected (Apple uses a static analyzer to check for private API usage). So what can you do?
Something tricky, for example: compare the size of the involved layer (the one on which layerSetNeedsDisplay: is being called) with the UIWebView's size. ;-)
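On the observer side it could look something like this (a sketch: the size tolerance is just the heuristic described above, and setPipNeedsRefresh is a hypothetical hook for whatever refresh mechanism you use):

// Register once, e.g. in viewDidLoad:
// [[NSNotificationCenter defaultCenter] addObserver:self
//                                          selector:@selector(someLayerBecameDirty:)
//                                              name:FLLayerDidSetNeedsDisplayNotification
//                                            object:nil];

- (void)someLayerBecameDirty:(NSNotification *)note
{
    CALayer *dirtyLayer = note.object;

    // Heuristic: a layer whose size matches the web view's is (very likely) the
    // web view's internal tile layer -- no private class names involved.
    CGSize layerSize = dirtyLayer.bounds.size;
    CGSize webViewSize = self.webView.bounds.size;

    BOOL sizeMatches = fabs(layerSize.width - webViewSize.width) < 1.0 &&
                       fabs(layerSize.height - webViewSize.height) < 1.0;

    if (sizeMatches) {
        // The web content is dirty: refresh the pip once, instead of every frame.
        [self setPipNeedsRefresh];
    }
}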
Moreover, the UIWebView sometimes replaces the child TileLayer with a new one, so you have to repeat this check over time.
One last thing: layerSetNeedsDisplay: is not always called when you simply scroll the UIWebView (if the layer is already built), so you also have to intercept the scrolling/zooming through the web view's scroll view (UIWebViewDelegate itself has no scroll callbacks).
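For example (a sketch: rather than replacing the web view's internal scroll view delegate, this observes contentOffset and zoomScale via KVO; setPipNeedsRefresh is the same hypothetical hook as above):

// Call once during setup; webView.scrollView is public API since iOS 5.
- (void)startObservingScrollAndZoom
{
    [self.webView.scrollView addObserver:self forKeyPath:@"contentOffset"
                                 options:0 context:NULL];
    [self.webView.scrollView addObserver:self forKeyPath:@"zoomScale"
                                 options:0 context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    // Any scroll or zoom moves the visible content, so the pip needs a refresh.
    [self setPipNeedsRefresh];
}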
You will find that method swizzling has been the reason for rejection of some apps, but those rejections have always been motivated with "you changed the behavior of an object." In this case you don't change the behavior of anything; you simply get notified when a method is called. I think you can give it a try, or contact Apple support to check whether it's acceptable if you're not sure.
OLD ANSWER
I'm not sure this is performance-friendly enough; I tried it only with both views on the same device and it works pretty well... you should try it over AirPlay.
The solution is quite simple: you take a "screenshot" of the UIWebView / MKMapView by rendering its layer and grabbing the result with UIGraphicsGetImageFromCurrentImageContext(). You do this 30-60 times a second and copy the result into a UIImageView (visible on the second display; you can move it wherever you want).
To detect whether the view has changed and avoid unnecessary traffic over the wireless link, you can compare the two UIImages (the previous frame and the new frame) byte by byte, and set the new one only if it differs from the previous. (Yeah, it works! ;-)
The only thing I didn't manage to do this evening is make that comparison fast: if you look at the attached sample code, you'll see the comparison is really CPU-intensive (it uses UIImagePNGRepresentation() to convert each UIImage to NSData) and it slows the whole app down. If you skip the comparison and copy every frame, the app is fast and smooth (at least on my iPhone 5). But I think there's plenty of room to fix it, for example by running the comparison only every 4-5 frames, or by building the NSData in the background.
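One possible optimization (not in the attached project, just a sketch): skip the PNG encoding entirely and compare the raw bitmap bytes of the two backing CGImages.

// Sketch of a cheaper comparison: copy the raw bitmap bytes of the backing
// CGImages and memcmp them, instead of PNG-encoding both frames first.
- (BOOL)image:(UIImage *)image1 hasSamePixelsAs:(UIImage *)image2
{
    if (image1.CGImage == NULL || image2.CGImage == NULL) return NO;

    CFDataRef data1 = CGDataProviderCopyData(CGImageGetDataProvider(image1.CGImage));
    CFDataRef data2 = CGDataProviderCopyData(CGImageGetDataProvider(image2.CGImage));

    BOOL equal = NO;
    if (data1 != NULL && data2 != NULL &&
        CFDataGetLength(data1) == CFDataGetLength(data2)) {
        equal = memcmp(CFDataGetBytePtr(data1),
                       CFDataGetBytePtr(data2),
                       (size_t)CFDataGetLength(data1)) == 0;
    }

    if (data1) CFRelease(data1);
    if (data2) CFRelease(data2);
    return equal;
}

This still touches every pixel, so it isn't free, but it avoids compressing two full frames to PNG on every tick, and you can combine it with the "compare only every few frames" idea above.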
I attach a sample project: http://www.lombax.it/documents/ImageMirror.zip
In the project the frame comparison is disabled (the if is commented out); I attach the code here for future reference:
// here you start a timer at 50 fps
// the timer is started on a background thread to avoid blocking it when you scroll the web view
- (IBAction)enableMirror:(id)sender
{
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul); // 0ul --> unsigned long
    dispatch_async(queue, ^{
        // 0.02f --> 50 fps
        NSTimer __unused *timer = [NSTimer scheduledTimerWithTimeInterval:0.02f
                                                                   target:self
                                                                 selector:@selector(copyImageIfNeeded)
                                                                 userInfo:nil
                                                                  repeats:YES];
        // need to start a run loop, otherwise the background thread stops
        CFRunLoopRun();
    });
}

// this method creates a UIImage with the content of the given view
- (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

// the method called by the timer
- (void)copyImageIfNeeded
{
    // this method is called from a background thread, so the code before the dispatch runs in the background
    UIImage *newImage = [self imageWithView:self.webView];

    // the copy is made only if the two images are really different (compared byte by byte)
    // this comparison method is CPU-intensive
    // UNCOMMENT THE IF AND THE {} TO ENABLE THE FRAME COMPARISON
    //if (!([self image:self.mirrorView.image isEqualTo:newImage]))
    //{
        // this must run on the main queue because it updates the user interface
        dispatch_queue_t queue = dispatch_get_main_queue();
        dispatch_async(queue, ^{
            self.mirrorView.image = newImage;
        });
    //}
}

// method to compare the two images - not performance friendly
// it can be optimized: you could cache the NSData of the old image and avoid
// converting it again and again until it changes, or even generate the NSData
// in the background when the frame is created
- (BOOL)image:(UIImage *)image1 isEqualTo:(UIImage *)image2
{
    NSData *data1 = UIImagePNGRepresentation(image1);
    NSData *data2 = UIImagePNGRepresentation(image2);

    return [data1 isEqual:data2];
}