On Raspbian (Raspberry Pi 2), the following minimal example, stripped down from my script, correctly produces an mp4 file:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
def anim_lift(x, y):
    # set up the figure
    fig = plt.figure(figsize=(15, 9))

    def animate(i):
        # update plot
        pointplot.set_data(x[i], y[i])
        return pointplot,

    # First frame
    ax0 = plt.plot(x, y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    anim = animation.FuncAnimation(fig, animate, repeat=False,
                                   frames=range(1, len(x)),
                                   interval=200,
                                   blit=True, repeat_delay=1000)
    anim.save('out.mp4')
    plt.close(fig)
# Number of frames
nframes = 200
# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))
anim_lift(x, y)
Now, the file is produced with good quality and a pretty small file size, but it takes 15 minutes to produce a 170-frame movie, which is not acceptable for my application. I'm looking for a significant speedup; an increase in video file size is not a problem.
I believe the bottleneck in the video production is the temporary saving of the frames in PNG format: during processing I can see the PNG files appearing in my working directory, with the CPU load at only 25%.
Please suggest a solution; it might also be based on a different package rather than simply matplotlib.animation, like OpenCV (which is anyway already imported in my project) or moviepy.
Matplotlib 3.4 update: The solution below can be adapted to work with the latest matplotlib versions. However, there seem to have been major performance improvements since this answer was first written, and the speed of matplotlib's FFMpegWriter is now similar to this solution's writer.
Original answer:
The bottleneck of saving an animation to file lies in the use of figure.savefig(). Here is a homemade subclass of matplotlib's FFMpegWriter, inspired by gaggio's answer. It doesn't use savefig (and thus ignores savefig_kwargs) but requires minimal changes to your animation script.
from matplotlib.animation import FFMpegWriter

class FasterFFMpegWriter(FFMpegWriter):
    '''FFMpeg-pipe writer bypassing figure.savefig.'''

    def __init__(self, **kwargs):
        '''Initialize the Writer object and set the default frame_format.'''
        super().__init__(**kwargs)
        self.frame_format = 'argb'

    def grab_frame(self, **savefig_kwargs):
        '''Grab the image information from the figure and save as a movie frame.

        Doesn't use savefig to be faster: savefig_kwargs will be ignored.
        '''
        try:
            # Re-adjust the figure size and dpi in case it has been changed by
            # the user. We must ensure that every frame is the same size or
            # the movie will not save correctly.
            self.fig.set_size_inches(self._w, self._h)
            self.fig.set_dpi(self.dpi)
            # Draw and save the frame as an ARGB string to the pipe sink
            self.fig.canvas.draw()
            self._frame_sink().write(self.fig.canvas.tostring_argb())
        except (RuntimeError, IOError) as e:
            out, err = self._proc.communicate()
            raise IOError('Error saving animation to file (cause: {0}) '
                          'Stdout: {1} StdError: {2}. It may help to re-run '
                          'with --verbose-debug.'.format(e, out, err))
I was able to create animations in half the time or less compared to the default FFMpegWriter. You can use it as explained in this example.
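For instance, a minimal usage sketch (the fps value here is arbitrary, and anim is assumed to be the FuncAnimation object from the question's anim_lift function):

writer = FasterFFMpegWriter(fps=5)    # accepts the same arguments as FFMpegWriter
anim.save('out.mp4', writer=writer)   # replaces the plain anim.save('out.mp4')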
The code above will work with matplotlib 3.4 and above if you change the last line of the try block to:

self._proc.stdin.write(self.fig.canvas.tostring_argb())

i.e. using _proc.stdin instead of _frame_sink().
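Put together, the grab_frame method for matplotlib 3.4+ would then read as follows (only the line writing to the pipe differs from the version above):

    def grab_frame(self, **savefig_kwargs):
        '''Grab the image information from the figure and save as a movie frame.

        Doesn't use savefig to be faster: savefig_kwargs will be ignored.
        '''
        try:
            # Keep every frame the same size, as above
            self.fig.set_size_inches(self._w, self._h)
            self.fig.set_dpi(self.dpi)
            self.fig.canvas.draw()
            # matplotlib 3.4+: write directly to the ffmpeg process's stdin
            self._proc.stdin.write(self.fig.canvas.tostring_argb())
        except (RuntimeError, IOError) as e:
            out, err = self._proc.communicate()
            raise IOError('Error saving animation to file (cause: {0}) '
                          'Stdout: {1} StdError: {2}. It may help to re-run '
                          'with --verbose-debug.'.format(e, out, err))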
A much improved solution, based on the answers to this post, reduces the time by roughly a factor of 10.
import subprocess

import numpy as np
import matplotlib.pyplot as plt

def testSubprocess(x, y):
    # set up the figure
    fig = plt.figure(figsize=(15, 9))
    canvas_width, canvas_height = fig.canvas.get_width_height()

    # First frame
    ax0 = plt.plot(x, y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    def update(frame):
        # your matplotlib code goes here
        pointplot.set_data(x[frame], y[frame])

    # Open an ffmpeg process
    outf = 'testSubprocess.mp4'
    cmdstring = ('ffmpeg',
                 '-y', '-r', '1',  # overwrite, 1 fps
                 '-s', '%dx%d' % (canvas_width, canvas_height),  # size of image string
                 '-pix_fmt', 'argb',  # format
                 '-f', 'rawvideo', '-i', '-',  # tell ffmpeg to expect raw video from the pipe
                 '-vcodec', 'mpeg4', outf)  # output encoding
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    # Draw frames and write to the pipe
    for frame in range(nframes):
        # draw the frame
        update(frame)
        fig.canvas.draw()
        # extract the image as an ARGB string
        string = fig.canvas.tostring_argb()
        # write to pipe
        p.stdin.write(string)

    # Finish up
    p.communicate()
# Number of frames
nframes = 200
# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))
testSubprocess(x, y)
I suspect further speedup might be obtained similarly by piping the raw image data to GStreamer, which is now able to benefit from hardware encoding on the Raspberry Pi; see this discussion.
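As a rough, untested sketch of that idea (the element names, such as rawvideoparse and the omxh264enc hardware encoder, depend on the installed GStreamer plugins and are assumptions here), the ffmpeg command in the example above could be swapped for a gst-launch-1.0 pipeline reading raw ARGB frames from stdin:

# Untested sketch: element names and the argb format string are assumptions
cmdstring = ('gst-launch-1.0',
             'fdsrc', '!',                      # raw frames arrive on stdin
             'rawvideoparse', 'format=argb',    # describe the raw stream
             'width=%d' % canvas_width,
             'height=%d' % canvas_height,
             'framerate=1/1', '!',
             'videoconvert', '!',
             'omxh264enc', '!',                 # Pi hardware H.264 encoder
             'h264parse', '!',
             'matroskamux', '!',
             'filesink', 'location=testGstreamer.mkv')
p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

The rest of the frame-writing loop would remain unchanged, since it simply writes the same ARGB byte strings to p.stdin.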