What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface in an XPC process that is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination =
    [[CIRenderDestination alloc] initWithIOSurface:surface];
// Send the IOSurface to the client using an NSXPCConnection.
// In the service, periodically write to the IOSurface.
In my application I have the following:
IOSurface *surface = ...; // Fetched from the NSXPCConnection.
id<MTLTexture> texture =
    [device newTextureWithDescriptor:...
                           iosurface:(__bridge IOSurfaceRef)surface
                               plane:0];
// The texture is used in a fragment shader (read-only).
I have an MTKView that is running its normal update loop. I want my XPC service to be able to periodically write to the IOSurface using Core Image and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? A double- or triple-buffering strategy is one option, but that doesn't really work for me because I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image, and an image can be as large as JPG/TIFF/etc. allows.)
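For context, one buffer-free approach is to serialize access by message-passing over the existing NSXPCConnection: the service announces a tile only after its render has fully completed, and the app samples only tiles it has been told are ready. This is a sketch, not part of the original setup, and the protocol and method names are hypothetical:

```objc
// Hypothetical XPC client protocol. The service calls this only after
// its Core Image render into the tile's IOSurface has finished on the
// GPU, so the app never samples a half-written surface.
@protocol TileRenderingClient <NSObject>
- (void)tileDidUpdateAtIndex:(NSUInteger)tileIndex;
@end
```

On the app side, the MTKView draw loop would re-render a tile only after receiving this message, which orders the service's writes before the app's reads without allocating extra surfaces.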
WWDC 2010-442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in the header file IOSurfaceRef.h suggest that the locking is only for CPU access.)
Can I really just let Core Image's CIRenderDestination write at will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, then how is that possible if, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's contents if reading and writing occurred during the same pass.
The thing you need to do is ensure that the Core Image drawing has completed in the XPC service before the IOSurface is used to draw in the application. If you were using either OpenGL or Metal on both sides, you would call glFlush() or -[MTLCommandBuffer waitUntilScheduled], respectively. I would assume that something in Core Image is making one of those calls.

I can say that it will likely be obvious if that's not happening: if things aren't properly synchronized, you will get tearing, or images that are half new rendering and half old rendering. I've seen that happen when using IOSurfaces across XPC services.
One thing you can do is put symbolic breakpoints on -[MTLCommandBuffer waitUntilScheduled] and -[MTLCommandBuffer waitUntilCompleted] and see if Core Image is calling them in your XPC service (assuming the documentation doesn't explicitly tell you). There are other synchronization primitives in Metal, but I'm not very familiar with them. They may be useful as well. (It's my understanding that Core Image is all Metal under the hood now.)
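Core Image does expose an explicit synchronization point you can use instead of guessing: -[CIContext startTaskToRender:toDestination:error:] returns a CIRenderTask, whose -waitUntilCompletedAndReturnError: blocks until the render into the destination has finished. A minimal sketch of using it in the XPC service, assuming the ciContext, image, and renderDestination variables already exist:

```objc
NSError *error = nil;

// Kick off the render into the CIRenderDestination that wraps the IOSurface.
CIRenderTask *task = [ciContext startTaskToRender:image
                                    toDestination:renderDestination
                                            error:&error];
if (task != nil) {
    // Blocks until the GPU work writing the IOSurface has completed.
    [task waitUntilCompletedAndReturnError:&error];

    // Only now notify the app (over the NSXPCConnection) that the
    // surface has new contents it is safe to sample.
}
```

Waiting in the service rather than the app keeps the ordering entirely on the writer's side, so the app's Metal render loop never needs to know when Core Image ran.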
Also, the IOSurface object has the methods -incrementUseCount, -decrementUseCount, and -localUseCount. It might be worth checking those to see if Core Image sets them appropriately. (See <IOSurface/IOSurfaceObjC.h> for details.)
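For reference, a sketch of inspecting those counters from the app side. The isInUse property reflects use counts held by any process, while localUseCount covers only the current process, so comparing the two hints at whether the XPC service is still holding the surface:

```objc
// Assumes `surface` is the IOSurface received over the NSXPCConnection.
if (surface.isInUse) {
    // Some process (possibly the XPC service) holds a nonzero use count.
}

// Use count held by this process alone.
int32_t local = surface.localUseCount;
NSLog(@"local use count: %d", local);
```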