
OpenGL to video on iPhone

I'm currently working on a project to convert a physics simulation to a video on the iPhone itself.

To do this, I'm presently using two different loops. The first loop runs in the block where the AVAssetWriterInput object polls the EAGLView for more images. The EAGLView provides the images from an array where they are stored.

The other loop is the actual simulation. I've turned off the simulation timer and am calling the tick myself with a pre-specified time difference each time. Every time a tick is called, I create a new image in EAGLView's swap-buffers method after the buffers have been swapped. This image is then placed in the array that the AVAssetWriter polls.

There is also some miscellaneous code to make sure the array doesn't get too big.
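The producer/consumer arrangement described above (simulation pushes frames, AVAssetWriterInput pops them, with a size cap) can be sketched as a fixed-capacity ring buffer. This is a plain-C illustration with hypothetical names, not the asker's actual code, which presumably stores images in an NSMutableArray:

```c
#include <stddef.h>

// Fixed-capacity ring buffer: the simulation loop pushes rendered
// frames, the AVAssetWriterInput polling loop pops them. A full
// queue makes the producer throttle (or drop), keeping memory bounded.
#define FRAME_QUEUE_CAP 8

typedef struct {
    void  *frames[FRAME_QUEUE_CAP]; // opaque frame pointers
    size_t head, tail, count;
} FrameQueue;

static int frame_queue_push(FrameQueue *q, void *frame) {
    if (q->count == FRAME_QUEUE_CAP) return 0; // full: caller pauses the simulation tick
    q->frames[q->tail] = frame;
    q->tail = (q->tail + 1) % FRAME_QUEUE_CAP;
    q->count++;
    return 1;
}

static void *frame_queue_pop(FrameQueue *q) {
    if (q->count == 0) return NULL; // empty: writer polls again later
    void *frame = q->frames[q->head];
    q->head = (q->head + 1) % FRAME_QUEUE_CAP;
    q->count--;
    return frame;
}
```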

All of this works fine, but it is very, very slow.

Is there something I'm doing that is conceptually causing the entire process to be slower than it could be? Also, does anyone know of a faster way to get an image out of OpenGL than glReadPixels?

asked Jan 27 '11 by aToz


3 Answers

Video memory is designed to be fast to write and slow to read, which is why I render to a texture instead. Here is the entire method I've created for rendering the scene to a texture (there are some custom containers, but I think it's pretty straightforward to replace them with your own):

-(TextureInf*) makeSceneSnapshot {
    // create texture frame buffer
    GLuint textureFrameBuffer, sceneRenderTexture;

    glGenFramebuffersOES(1, &textureFrameBuffer);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, textureFrameBuffer);

    // create texture to render scene to
    glGenTextures(1, &sceneRenderTexture);
    glBindTexture(GL_TEXTURE_2D, sceneRenderTexture);

    // create TextureInf object
    TextureInf* new_texture = new TextureInf();
    new_texture->setTextureID(sceneRenderTexture);
    new_texture->real_width = [self viewportWidth];
    new_texture->real_height = [self viewportHeight];

    //make sure the texture dimensions are power of 2
    new_texture->width = cast_to_power(new_texture->real_width, 2);
    new_texture->height = cast_to_power(new_texture->real_height, 2);

    //AABB2 = axis aligned bounding box (2D)
    AABB2 tex_box;

    tex_box.p1.x = 1 - (GLfloat)new_texture->real_width / (GLfloat)new_texture->width;
    tex_box.p1.y = 0;
    tex_box.p2.x = 1;
    tex_box.p2.y = (GLfloat)new_texture->real_height / (GLfloat)new_texture->height;
    new_texture->setTextureBox(tex_box);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,  new_texture->width, new_texture->height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, sceneRenderTexture, 0);

    // check for completeness
    if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
        new_texture->release();
        @throw [NSException exceptionWithName: EXCEPTION_NAME
                                       reason: [NSString stringWithFormat: @"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)]
                                     userInfo: nil];
    } else {
        // render to texture
        [self renderOneFrame];
    }

    glDeleteFramebuffersOES(1, &textureFrameBuffer);

    //restore default frame and render buffers
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, _defaultFramebuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    glEnable(GL_BLEND);         
    [self updateViewport];      
    glMatrixMode(GL_MODELVIEW);


    return new_texture;
}

Of course, if you're taking snapshots all the time, you should create the texture framebuffer and render buffer only once (and allocate memory for them once) rather than on every call.
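The `cast_to_power` helper used above isn't shown; from context it rounds a dimension up to the next power of two, as OpenGL ES 1.x hardware required for texture sides. A plain-C sketch of such a helper (name and exact behavior are assumptions):

```c
// Round n up to the nearest power of two (e.g. 320 -> 512).
// OpenGL ES 1.x devices require power-of-two texture dimensions,
// which is why the snapshot texture is padded like this.
static unsigned int next_power_of_two(unsigned int n) {
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

The padding is also why the method computes a texture box: only a `real_width` x `real_height` sub-rectangle of the padded texture contains the scene.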

answered Sep 19 '22 by Max

One thing to remember is that the GPU is running asynchronously from the CPU, so if you try to do glReadPixels immediately after you finish rendering, you'll have to wait for commands to be flushed to the GPU and rendered before you can read them back.

Instead of waiting synchronously, render snapshots into a queue of textures (using FBOs, as Max mentioned). Wait until you've rendered a couple more frames before you dequeue one of the previous frames. I don't know whether the iPhone supports fences or sync objects, but if so, you could check those to see whether rendering has finished before reading the pixels.
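The "stay a couple of frames behind" policy needs no GL calls to express: a snapshot is only read back once some number of newer frames have been submitted after it, so the GPU has almost certainly finished it. A minimal sketch, with the delay as an assumed tunable:

```c
#define READBACK_DELAY 2 // frames to stay behind the GPU before reading back (tunable)

// Decide whether the snapshot rendered at frame `snapshot_frame` is
// old enough to read back, given that `current_frame` frames have
// been submitted so far. Reading back only old snapshots avoids
// stalling the CPU on an in-flight render.
static int snapshot_ready(unsigned long snapshot_frame,
                          unsigned long current_frame) {
    return current_frame >= snapshot_frame + READBACK_DELAY;
}
```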

answered Sep 17 '22 by JohnB


You could try using a CADisplayLink object to ensure that your drawing rate and your capture rate correspond to the device's screen refresh rate. You might be slowing down the execution time of the run loop by refreshing and capturing too many times per device screen refresh.

Depending on your app's goals, it might not be necessary for you to capture every frame that you present, so in your selector, you could choose whether or not to capture the current frame.
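Choosing whether to capture the current frame in the display-link callback can be as simple as keeping every Nth refresh; for instance, a divisor of 2 on a 60 Hz screen records at 30 fps. A small sketch (divisor value is an assumption, not from the answer):

```c
// Capture every `divisor`-th display refresh: divisor = 1 captures
// everything, divisor = 2 records at half the refresh rate, etc.
// A divisor of 0 disables capture entirely.
static int should_capture(unsigned long frame_index, unsigned int divisor) {
    return divisor != 0 && frame_index % divisor == 0;
}
```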

answered Sep 18 '22 by commanda