I'm using glReadPixels to read data into a CVPixelBufferRef. I use the CVPixelBufferRef as the input into an AVAssetWriter. Unfortunately the pixel formats seem to be mismatched.

I think glReadPixels is returning pixel data in RGBA format, while AVAssetWriter wants pixel data in ARGB format. What's the best way to convert RGBA to ARGB?
Here's what I've tried so far:

- bit manipulation to swap the components
- using a CGImageRef as an intermediate step

The bit manipulation didn't work because CVPixelBufferRef doesn't seem to support subscripts. The CGImageRef intermediate step does work... but I'd prefer not to have 50 extra lines of code that could potentially be a performance hit.
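If you do end up swizzling on the CPU, you don't need subscripting on the CVPixelBufferRef itself: lock the buffer and walk its base address as raw bytes. A minimal sketch of the repacking, assuming tightly packed 32-bit pixels (the helper name is mine; on iOS you'd get the pointer from CVPixelBufferLockBaseAddress / CVPixelBufferGetBaseAddress):

```c
#include <stdint.h>
#include <stddef.h>

/* Repack 32-bit RGBA pixels into ARGB in place.
   'pixels' would be the locked base address of the CVPixelBufferRef. */
void rgba_to_argb(uint8_t *pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++) {
        uint8_t *p = pixels + i * 4;
        uint8_t a = p[3]; /* save alpha */
        p[3] = p[2];      /* B moves to the last byte  */
        p[2] = p[1];      /* G shifts right            */
        p[1] = p[0];      /* R shifts right            */
        p[0] = a;         /* A moves to the first byte */
    }
}
```

Note this assumes the buffer has no padding between rows; a real CVPixelBufferRef may have a larger bytes-per-row, so you'd process row by row using CVPixelBufferGetBytesPerRow.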
Better than using the CPU to swap the components would be to write a simple fragment shader that does it efficiently on the GPU as you render the image.
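Such a shader is only a one-line swizzle in GLES 2.0 GLSL; a sketch (the uniform and varying names here are illustrative, not from any particular project):

```glsl
precision mediump float;
varying vec2 vTexCoord;
uniform sampler2D uTexture;

void main()
{
    // Reorder the channels as the frame is rendered, so the
    // read-back buffer comes out already in the target layout.
    gl_FragColor = texture2D(uTexture, vTexCoord).bgra;
}
```

The `.bgra` swizzle reorders the sampled RGBA components; swap the suffix for whatever layout you actually need.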
And the best way of all is to remove the copying stage entirely by using the iOS 5 CoreVideo CVOpenGLESTextureCache, which lets you render straight into the CVPixelBufferRef and eliminates the glReadPixels call.
P.S. I'm pretty sure AVAssetWriter wants data in BGRA format (actually it probably wants YUV, but that's another story).
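For reference, you tell the writer's pixel buffer adaptor to hand out BGRA buffers via its source pixel buffer attributes; roughly (a sketch, with the dimensions and `writerInput` variable assumed to exist elsewhere):

```objc
NSDictionary *attrs = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferWidthKey  : @(640),
    (id)kCVPixelBufferHeightKey : @(480),
};
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
        sourcePixelBufferAttributes:attrs];
```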
UPDATE: as for links, the documentation still seems to be under NDA, but there are two pieces of freely downloadable example code available: GLCameraRipple and RosyWriter.
The header files themselves contain good documentation, and the Mac equivalent (CVOpenGLTextureCache) is very similar, so you should have plenty to get you started.
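The texture-cache route follows the pattern RosyWriter uses: wrap the CVPixelBufferRef in a GL texture, attach that texture to a framebuffer, and render into it directly. A rough outline (error handling omitted; `eaglContext`, `pixelBuffer`, `width` and `height` are assumed to exist elsewhere):

```objc
// Create the cache once, tied to your EAGL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             eaglContext, NULL, &textureCache);

// Wrap the pixel buffer in a GL texture.
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA, width, height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach it to an FBO and draw; the pixels land in pixelBuffer
// with no glReadPixels copy at all.
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);
```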