I have implemented an N-body simulation using the Barnes-Hut optimisation in Python. It runs at an acceptable speed for N = 10,000 bodies, but it is still too slow to watch in real time.
A new frame is generated each time-step: to display it, we must first calculate the new positions of the bodies and then draw them all. For N = 10,000, generating one frame takes about 5 seconds (far too high, since Barnes-Hut should be giving better results). The display is done through the pygame module.
I would thus like to record my simulation and replay it once after it's done at a higher speed.
How can I accomplish this without slowing down the program or exceeding memory limitations?
One potential solution is simply to save the pygame screen each timestep, but this is apparently very slow.
I also thought about storing the list of body positions generated each time step, and then redrawing all the frames once the simulation finishes. Drawing a frame still takes some time, but far less than calculating the new positions.
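That second idea can be sketched briefly. This is a minimal, hypothetical outline (the toy integrator and file name are placeholders, not your actual code): record only the position arrays during the run, then replay them afterwards.

```python
import numpy as np

# Sketch: instead of saving rendered screens, record only the (N, 2)
# position array each timestep and redraw the frames after the run.
history = []

def step_and_record(positions):
    # ... the real Barnes-Hut step would go here ...
    history.append(positions.copy())   # copy: later steps mutate in place

# toy run: 4 bodies, 3 steps, with a stand-in for the real integrator
pos = np.zeros((4, 2))
for _ in range(3):
    pos += 0.1
    step_and_record(pos)

frames = np.stack(history)             # shape (steps, N, 2)
# 10,000 bodies * 2 coords * 8 bytes is roughly 160 kB per frame,
# so even thousands of frames fit in memory or in one .npy file
np.save('positions.npy', frames)
```

The memory estimate is the key point: positions are tiny compared with rendered images, so storing them all and redrawing later is cheap.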
You're comparing pure Python to various programs that call into compiled code somewhere. Pure Python is orders of magnitude slower than code produced by optimizing compilers. Putting the language wars aside, there are cases where Python performs incredibly fast for a scripting language, and there are cases where it performs slowly.
Many of the demanding Python projects I've made have required numpy/pandas/scipy, or an alternative implementation such as PyPy (a JIT compiler), to get a fairly immediate improvement in execution speed. Compilers tend to produce faster code because they can perform optimizations offline rather than under the time pressure of runtime.
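To illustrate what moving work into numpy looks like, here is a hedged sketch of a fully vectorized pairwise-force kernel (this is the direct O(N²) summation, not Barnes-Hut itself, and the function name, softening parameter `eps`, and units are illustrative assumptions):

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Pairwise gravitational accelerations with no Python-level loop."""
    # diff[i, j] = pos[j] - pos[i], shape (N, N, dims)
    diff = pos[None, :, :] - pos[:, None, :]
    dist2 = (diff ** 2).sum(axis=2) + eps ** 2     # softened squared distances
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                  # no self-interaction
    # a_i = G * sum_j m_j * (r_j - r_i) / |r_j - r_i|^3
    return G * (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

# two unit masses one unit apart: each accelerates toward the other
pos = np.array([[0.0, 0.0], [1.0, 0.0]])
acc = accelerations(pos, np.ones(2))
```

The same broadcasting style applies inside a Barnes-Hut traversal: batch the body-vs-node interactions into array operations instead of looping per body in Python.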
A video file is the most versatile and easy to manage format for playback, but does require a bit of glue code. To make one, you need a library to encode your visualization frames into video frames. It seems you are already able to generate images per frame, so the only step remaining is to find a video codec.
FFMPEG can be called through its commandline interface to dump your frames into a video file: http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
The example code for writing is:
command = [ FFMPEG_BIN,
        '-y',                  # (optional) overwrite output file if it exists
        '-f', 'rawvideo',
        '-vcodec', 'rawvideo',
        '-s', '420x360',       # size of one frame
        '-pix_fmt', 'rgb24',
        '-r', '24',            # frames per second
        '-i', '-',             # the input comes from a pipe
        '-an',                 # tells FFMPEG not to expect any audio
        '-vcodec', 'mpeg4',
        'my_output_videofile.mp4' ]
pipe = sp.Popen( command, stdin=sp.PIPE, stderr=sp.PIPE)
With the pipe open, you can dump a frame like this, if you use numpy arrays:
pipe.stdin.write( image_array.tostring() )  # .tobytes() on newer numpy
This approach has been wrapped by the ffmpy library.
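Putting the pieces above together, here is a self-contained sketch of the whole pipeline (the synthetic gradient frames stand in for your pygame renders, and the `shutil.which` guard is my addition so the snippet degrades gracefully where ffmpeg is not installed):

```python
import shutil
import subprocess as sp
import numpy as np

W, H, FPS = 420, 360, 24
command = [
    'ffmpeg', '-y',
    '-f', 'rawvideo', '-vcodec', 'rawvideo',
    '-s', f'{W}x{H}', '-pix_fmt', 'rgb24', '-r', str(FPS),
    '-i', '-',             # frames arrive on stdin
    '-an',                 # no audio track
    '-vcodec', 'mpeg4',
    'my_output_videofile.mp4',
]

if shutil.which('ffmpeg'):                  # only run when ffmpeg is present
    pipe = sp.Popen(command, stdin=sp.PIPE, stderr=sp.DEVNULL)
    for i in range(FPS):                    # one second of synthetic frames
        frame = np.full((H, W, 3), i * 10, dtype=np.uint8)
        pipe.stdin.write(frame.tobytes())   # raw RGB24 bytes, H*W*3 per frame
    pipe.stdin.close()                      # flush; ffmpeg finalizes the file
    pipe.wait()
```

Note that each frame must be exactly `H * W * 3` bytes so that ffmpeg's rawvideo demuxer can slice the stream correctly, and the pipe must be closed for the container to be finalized.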
There is also a simpler option, but it sacrifices the versatility of a video file (and the really impressive lossy compression algorithms). Dump your visualization frames into a file as they are produced, then modify your visualizer to read frames from that file and play them back at a specified rate.
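A minimal sketch of that dump-and-replay scheme, assuming raw RGB24 frames of a fixed size (the file name and the flat-gray stand-in frames are illustrative):

```python
import numpy as np

H, W = 360, 420
FRAME_BYTES = H * W * 3                      # raw RGB24, fixed size per frame

# write phase: append each rendered frame to the file as raw bytes
with open('frames.raw', 'wb') as f:
    for i in range(5):
        frame = np.full((H, W, 3), i, dtype=np.uint8)  # stand-in for a render
        f.write(frame.tobytes())

# replay phase: read back fixed-size chunks and blit them at any frame rate
replayed = []
with open('frames.raw', 'rb') as f:
    while chunk := f.read(FRAME_BYTES):
        replayed.append(np.frombuffer(chunk, dtype=np.uint8).reshape(H, W, 3))
```

Because every frame has the same byte length, seeking to frame k is just `f.seek(k * FRAME_BYTES)`, which makes scrubbing and fast-forward trivial.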
It is a straightforward method that I've used in the past to save replay data to watch later when I played vindinium, a multiplayer game for bots.
A special mention should be made for memoization, which is extremely well-suited to mathematical computations. Just by caching the results of a recursively defined function, you save a lot of unnecessary computation at a slight memory cost. Barnes-Hut seems to have a recursive aspect (the tree traversal), so you should examine whether that part can be memoized.
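The general pattern looks like this (a classic recursive example, not Barnes-Hut-specific; whether memoization applies to your tree traversal depends on how much repeated work it actually does):

```python
from functools import lru_cache

@lru_cache(maxsize=None)        # cache every (args -> result) pair
def fib(n):
    # without the cache this recursion is exponential; with it, linear
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Any pure function of hashable arguments can be wrapped this way; `lru_cache` trades memory for skipped recomputation, exactly the tradeoff described above.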