I want to use OpenGL to produce a video output file instead of the usual on-screen display. My idea is to skip glutPostRedisplay() (or the SFML equivalent, window.Display()) and somehow use glReadPixels() instead.
glReadPixels() copies the pixel contents into an array in memory (as you may already know), but how do I convert that into a frame, and string several frames together into a video file? And what format should the video file be in, so that I can play it?
I should explain why I want to do this: many physics simulations take a very long time to compute enough information to display one frame, so it's better to leave one running overnight and play the video file the next morning. You wouldn't want to keep coming back every 5 minutes to see what has happened.
glutPostRedisplay() is part of GLUT, not OpenGL itself.
You normally do offscreen rendering using either a PBuffer, or a hidden window combined with a framebuffer object (FBO).
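A minimal sketch of the FBO route, assuming a GL context already exists (e.g. from a hidden GLUT/SFML window) and that `width` and `height` stand in for your frame dimensions:

```c
/* Assumes OpenGL 3.0+ (or GL_ARB_framebuffer_object) entry points are loaded. */
GLuint fbo, colorTex, depthRbo;

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Color attachment: a texture whose contents glReadPixels() can read back. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

/* Depth attachment so depth testing still works offscreen. */
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRbo);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle setup failure */
}
/* Render as usual; while this FBO is bound, glReadPixels() reads from it. */
```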
Converting images into a video can be done in various ways. For example, you can pipe frames into FFmpeg via the ffmpeg process's standard input: the image2pipe input format accepts a stream of encoded images (such as PNGs), while the rawvideo input format accepts raw pixel data directly, as sketched below. A much simpler scheme is to dump each frame into a separate image file; using libpng is straightforward. You can then merge the individual images into a video.
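Here is a sketch of the raw-pixel pipe, assuming ffmpeg is on your PATH; WIDTH, HEIGHT, FPS, NUM_FRAMES and render_one_frame() are hypothetical placeholders for your own values and drawing code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

#define WIDTH      640
#define HEIGHT     480
#define FPS         25
#define NUM_FRAMES 500

void record_video(void)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd,
             "ffmpeg -y -f rawvideo -pixel_format rgb24 "
             "-video_size %dx%d -framerate %d -i - "
             "-c:v libx264 -pix_fmt yuv420p output.mp4",
             WIDTH, HEIGHT, FPS);

    FILE *pipe = popen(cmd, "w");   /* POSIX; use _popen() on Windows */
    if (!pipe) return;

    unsigned char *frame = malloc(3 * WIDTH * HEIGHT);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* rows are tightly packed */

    for (int i = 0; i < NUM_FRAMES; ++i) {
        render_one_frame();   /* your drawing code, into the FBO */
        glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, frame);
        /* Note: glReadPixels() returns rows bottom-up; flip if needed. */
        fwrite(frame, 3, WIDTH * HEIGHT, pipe);
    }

    free(frame);
    pclose(pipe);
}
```

If you dump individual PNGs instead, merging them afterwards is a one-liner, e.g. ffmpeg -framerate 25 -i frame%05d.png -pix_fmt yuv420p output.mp4.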
However, when doing a physics simulation you should not just dump the final rendering into a file. Instead, store the simulation data for later playback: you can then adjust rendering parameters without having to re-simulate everything. And you will have to make adjustments to the renderer!
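A sketch of that idea, where Particle, nParticles and the file layout are hypothetical stand-ins for your simulation's actual state:

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Particle;

/* Append one timestep's state to an open binary file. */
void dump_step(FILE *f, double t, const Particle *p, size_t nParticles)
{
    fwrite(&t, sizeof t, 1, f);                   /* timestamp */
    fwrite(&nParticles, sizeof nParticles, 1, f); /* particle count */
    fwrite(p, sizeof *p, nParticles, f);          /* raw positions */
}
```

The overnight run then just opens "simulation.dat" and calls dump_step() every timestep; a separate playback program reads the records back and renders them at any frame rate, with whatever camera or shading you like.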