I need to maximize speed while converting videos to H.264 using FFmpeg.
Of course, there are a whole bunch of options that can be tweaked, but this question is specifically about choosing the best -threads <count>
option. I am trying to find an ideal thread count as a function of the number of CPU cores.
I am aware that the default -threads 0
follows a one-thread-per-core approach, which is supposed to be optimal. But I am not sure whether this is optimized for time or for space. Also, in certain test cases I've seen more threads (say, 4 threads on my dual-core test machine) finish quicker than the default.
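For reference, the kind of commands I have been timing look roughly like this (the input/output names are placeholders):

    ffmpeg -i input.mp4 -c:v libx264 -threads 4 output.mp4   # explicit thread count
    ffmpeg -i input.mp4 -c:v libx264 -threads 0 output.mp4   # default: auto-detect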
Is any other direction, say configure options related to threads, worth pursuing?
Long answer follows: FFmpeg always has one main thread, which does most of the processing. With multiple inputs there are also input threads for demuxing (one thread per input); with a single input, demuxing is done on the main thread.
FFmpeg uses multi-threading by default, so you probably don't need -threads 0. If your encode is bottlenecked on a single-threaded filter or decoder, you'll see full load on one core and light load on the other cores.
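As far as I know, -threads can also be given as an input option, where it applies to the decoder, or as an output option, where it applies to the encoder, so the two can be tuned independently. A rough illustration (file names are placeholders):

    ffmpeg -threads 2 -i input.mkv -c:v libx264 -threads 4 output.mp4   # 2 decoder threads, 4 encoder threads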
FFmpeg itself is not thread-safe in the sense that you shouldn't call av_read_frame or avcodec_decode_audio4 on the same context from different threads at the same time, but that is mostly obvious.
I have found that the -threads option
does not do a good job of utilizing all the cores; the hyper-threads do not get used at all. One solution I could come up with is to run 3 to 4 ffmpeg processes in parallel; see: https://superuser.com/questions/538164/how-many-instances-of-ffmpeg-commands-can-i-run-in-parallel/547340#547340 This approach ends up using all the cores fully and is faster than the single-input, multiple-outputs-in-a-single-command option.
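A minimal sketch of that approach as a shell script (the clip names are illustrative); each ffmpeg instance gets its own input and output, and the shell runs them concurrently:

    #!/bin/sh
    # Launch several independent encodes in the background, then wait for all to finish.
    ffmpeg -i clip1.mp4 -c:v libx264 out1.mp4 &
    ffmpeg -i clip2.mp4 -c:v libx264 out2.mp4 &
    ffmpeg -i clip3.mp4 -c:v libx264 out3.mp4 &
    wait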
If your 'dual-core' has hyper-threading, then 2x the number of physical cores would probably be correct. There is unlikely to be any gain from going beyond the number of virtual cores (hyper-threads included), though internal inefficiencies in FFmpeg might occasionally make it worthwhile.
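If you want to match the thread count to the number of virtual cores the OS reports, something like this works on Linux (nproc prints the number of logical CPUs, hyper-threads included; the file names are placeholders):

    ffmpeg -i input.mp4 -c:v libx264 -threads "$(nproc)" output.mp4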