 

iOS Image Manipulation (Distortion)

I initially approached this issue with CoreImage in mind (because I also need to do facial recognition), but realized that, unfortunately, the CI Distortion filters are not yet included on the iPhone.

I attempted to dive into GLImageProcessing, CImg, and ImageMagick, though I've had a lot of trouble finding a starting point for learning any of these.

Given the number of apps out there that do image distortion, I know this can't be incredibly difficult.

I don't know C or C++, and don't have the time to learn those languages unless absolutely necessary. It would become necessary if one of those libraries is the definitive library for handling this task.

Does anyone have experience with any of these libraries?

Any books out there that cover this for iOS 5 specifically?

Resources I've found:

  • GLImageProcessing sample project https://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html

  • ImageMagick & MagickWand http://www.imagemagick.org/script/magick-wand.php

  • CImg http://cimg.sourceforge.net/

  • Simple iPhone image processing http://code.google.com/p/simple-iphone-image-processing/

asked Feb 16 '12 by Matisse VerDuyn


1 Answer

As you say, the current capabilities of Core Image are a little limited on iOS. In particular, the lack of custom kernels like you find on the desktop is disappointing. The other alternatives you list (with the exception of GLImageProcessing, which wouldn't be able to do this kind of filtering) are all CPU-bound libraries and would be much too slow for doing live filtering on a mobile device.

However, I can point you to an open source framework called GPUImage that I just rolled out because I couldn't find something that let you pull off custom effects. As its name indicates, GPUImage does GPU-accelerated processing of still images and video using OpenGL ES 2.0 shaders. You can write your own custom effects using these, so you should be able to do just about anything you can think of. The framework itself is Objective-C, and has a fairly simple interface.
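To give a feel for that interface, here is a rough sketch of filtering a still image with one of the stock filters. This is only an illustration: the image name is hypothetical, and the exact class and method names (GPUImagePicture, GPUImageSepiaFilter, imageFromCurrentlyProcessedOutput) reflect the current framework and may change between versions.

// Sketch: applying a stock GPUImage filter to a UIImage (Objective-C).
// Assumes the GPUImage framework is linked; API details may vary by version.
#import "GPUImage.h"

UIImage *inputImage = [UIImage imageNamed:@"sample.jpg"]; // hypothetical asset name

GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepiaFilter = [[GPUImageSepiaFilter alloc] init];

// Wire the image source into the filter, render on the GPU, and read the result back
[stillImageSource addTarget:sepiaFilter];
[stillImageSource processImage];
UIImage *filteredImage = [sepiaFilter imageFromCurrentlyProcessedOutput];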

As an example of a distortion filter, the following shader (based on the code in Danny Pflughoeft's answer) does a sort of a fisheye effect:

varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

// Exponent applied to the squared radius; values below 1.0 bulge the center outward
const mediump float bulgeFactor = 0.5;

void main()
{
    // Shift coordinates so that (0, 0) is the center of the image
    mediump vec2 processedTextureCoordinate = textureCoordinate - vec2(0.5);
    // Squared distance of this fragment from the center
    mediump float radius = processedTextureCoordinate.x * processedTextureCoordinate.x + processedTextureCoordinate.y * processedTextureCoordinate.y;
    // Scale the centered coordinate by pow(squared distance, bulgeFactor), then shift back into texture space
    mediump vec2 distortedCoordinate = vec2(pow(radius, bulgeFactor)) * processedTextureCoordinate + vec2(0.5);

    gl_FragColor = texture2D(inputImageTexture, distortedCoordinate);
}

This produces this kind of effect on a video stream:

[Image: fisheye effect filter applied to a live video stream]

In my benchmarks, GPUImage processes images 4X faster than Core Image on an iPhone 4 (6X faster than CPU-bound processing) and video 25X faster than Core Image (70X faster than on the CPU). In even the worst case I could throw at it, it matches Core Image for processing speed.

The framework is still fairly new, so the number of stock filters I have in there right now is low, but I'll be adding a bunch more soon. In the meantime, you can write your own custom distortion shaders to process your images, and the source code for everything is available for you to tweak as needed. My introductory post about it has a little more detail on how to use this in your applications.
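For example, a custom fragment shader like the one above can be wrapped in a generic filter and attached to the live camera. The following is only a sketch: it assumes the shader source is stored in a string named kFisheyeFragmentShader and that filterView is a GPUImageView already in your view hierarchy, and the exact setup may differ slightly between framework versions.

// Sketch: running a custom distortion shader on live camera video with GPUImage (Objective-C).
// kFisheyeFragmentShader is assumed to hold the fisheye shader source shown above.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// Build a filter directly from the custom fragment shader string
GPUImageFilter *fisheyeFilter =
    [[GPUImageFilter alloc] initWithFragmentShaderFromString:kFisheyeFragmentShader];

// Camera -> custom filter -> on-screen view
[videoCamera addTarget:fisheyeFilter];
[fisheyeFilter addTarget:filterView];
[videoCamera startCameraCapture];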

answered Oct 28 '22 by Brad Larson