I am using GPUImage to process incoming video, and I would then like to take a given square subregion of the image and determine the average pixel value of the pixels in that region. Can anyone advise me on how to accomplish this? Even information on how to acquire the pixel data for a pixel at coordinate (x, y) in the image would be useful.
Apologies if this is a simple question, but I am new to computer vision and the way to do this was not clear to me from the available documentation. Thank you.
First, use a GPUImageCropFilter to extract the rectangular region of your original image. This uses normalized coordinates (0.0 - 1.0), so you'll have to translate from the pixel location and size to these normalized coordinates.
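For instance (the frame size and region values here are hypothetical, purely for illustration), converting a pixel-space square into the normalized CGRect the crop filter expects is just a division by the frame dimensions:

    // Hypothetical values -- substitute your own frame size and pixel region.
    CGFloat frameWidth = 640.0, frameHeight = 480.0;              // source frame in pixels
    CGRect pixelRegion = CGRectMake(200.0, 150.0, 100.0, 100.0);  // x, y, width, height in pixels

    // GPUImageCropFilter expects each component in the 0.0 - 1.0 range.
    CGRect normalizedRegion = CGRectMake(pixelRegion.origin.x / frameWidth,
                                         pixelRegion.origin.y / frameHeight,
                                         pixelRegion.size.width / frameWidth,
                                         pixelRegion.size.height / frameHeight);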
Next, feed the output from the crop filter into a GPUImageAverageColor operation. This will average the pixel colors within that region and call the colorAverageProcessingFinishedBlock that you set as a callback, handing you the average red, green, blue, and alpha channel values for the pixels in that region.
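Putting the two steps together, a minimal sketch for live video might look like the following; it assumes GPUImageVideoCamera as the source and reuses the normalized region computed above, so check the crop region and block signature against the GPUImage version you are using:

    #import "GPUImage.h"

    // Assumed video source; any GPUImageOutput (e.g. GPUImageMovie) can feed the chain the same way.
    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];

    // Crop region in normalized (0.0 - 1.0) coordinates, as computed earlier.
    GPUImageCropFilter *cropFilter =
        [[GPUImageCropFilter alloc] initWithCropRegion:normalizedRegion];

    // Averages every pixel of its input and reports the result through a block.
    GPUImageAverageColor *averageColor = [[GPUImageAverageColor alloc] init];
    averageColor.colorAverageProcessingFinishedBlock =
        ^(CGFloat red, CGFloat green, CGFloat blue, CGFloat alpha, CMTime frameTime) {
            // Average channel values for the cropped region, each in the 0.0 - 1.0 range.
            NSLog(@"Region average: R=%f G=%f B=%f A=%f", red, green, blue, alpha);
        };

    [videoCamera addTarget:cropFilter];
    [cropFilter addTarget:averageColor];
    [videoCamera startCameraCapture];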
For an example of both of these operations in action, see the FilterShowcase example that comes with the framework.