I'm new to video processing and have been stuck on this for a few days.
I have a CVPixelBufferRef that is in YUV (YCbCr 4:2:0) format. I grab the base address using CVPixelBufferGetBaseAddress.
How do I take the bytes at the base address and create a new CVPixelBufferRef, one that is also in the same YUV format?
I tried:
CVPixelBufferCreateWithBytes(CFAllocatorGetDefault(), 1440, 900, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, currentFrame, 2208, NULL, NULL, pixelBufferAttributes, &imageBuffer);
This creates a CVPixelBufferRef, but I can't do anything with it (e.g., convert it to a CIImage or render it).
Ultimately, my goal is to take the bytes I receive from the base-address call and simply display them on screen. I know I could display the frame directly without going through the base address, but I have a constraint that only lets me receive the base-address bytes.
For reference,
The reason I could not get a CIImage from the CVPixelBuffer is that it was not IOSurface backed. To get an IOSurface-backed buffer, create one with CVPixelBufferCreate, lock it with CVPixelBufferLockBaseAddress, get the destination pointer with CVPixelBufferGetBaseAddress (or CVPixelBufferGetBaseAddressOfPlane for each plane of planar data), memcpy your bytes into that address, and then unlock it with CVPixelBufferUnlockBaseAddress.
Hope this helps someone in the future.