 

How do I alter my FFmpeg command to make my HTTP Live Streams more efficient?

I want to reduce the muxing overhead when creating .ts files using FFmpeg.

I'm using FFmpeg to create a series of transport stream files used for HTTP Live Streaming.

./ffmpeg -i myInputFile.ismv \
         -vcodec copy \
         -acodec copy \
         -bsf h264_mp4toannexb \
         -map 0 \
         -f segment \
         -segment_time 10 \
         -segment_list_size 999999 \
         -segment_list output/myVarientPlaylist.m3u8 \
         -segment_format mpegts \
         output/myAudioVideoFile-%04d.ts

My input is in ismv format and contains a video and audio stream:

Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 320x240, 348 kb/s, 29.97 tbr, 10000k tbn, 59.94 tbc
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 63 kb/s

There is an issue related to muxing that is causing a large amount of overhead to be added to the streams. This is how the issue was described to me for the audio:

(Image not reproduced: a diagram showing a ~200-byte AAC frame packaged into 188-byte TS packets, with the unused space filled with FF padding bytes.)

So for a given AAC stream, the overhead will be 88% (since 200 bytes will map to 2 × 188-byte packets).
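
For concreteness, the packet arithmetic behind that 88% figure can be sketched as follows (an illustrative Python snippet, not FFmpeg code; it assumes only the minimal 4-byte TS header, leaving 184 usable bytes per packet):

# Rough check of the overhead figure above (illustrative only).
TS_PACKET = 188               # fixed MPEG-TS packet size, in bytes
TS_PAYLOAD = 188 - 4          # usable bytes per packet after the minimal 4-byte header

def wire_bytes(frame_size):
    # A small frame that must start its own TS packet occupies a whole
    # number of 188-byte packets, with the remainder padded out.
    packets = -(-frame_size // TS_PAYLOAD)    # ceiling division
    return packets * TS_PACKET

frame = 200                                   # a ~200-byte AAC frame
print(wire_bytes(frame))                      # 376 = 2 x 188
print((wire_bytes(frame) - frame) / frame)    # 0.88, i.e. roughly 88% overhead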

For video, the I-frame packets are quite large, so they translate nicely into .ts packets; however, the difference (P/B) frames can be as small as an audio packet, so they suffer from the same issue.

The solution is to combine several AAC packets into one larger stream before packaging them into .ts. Is this possible out of the box with FFmpeg?

asked Mar 27 '13 at 14:03 by Robert




1 Answer

It is not possible. Codecs rely on the encapsulating container for framing, that is, for signalling the start and length of each frame.

Your graphic is actually missing an element: the PES packet. Your audio frame is put into a PES packet first (which indicates its length), and then the PES packet is cut into smaller chunks, which become TS packets.

By design, you cannot start a new PES packet (containing an audio frame in your case) in a TS packet that already contains data. A new PES packet always starts in a new TS packet. Otherwise it would be impossible to start playing mid-stream (the broadcast situation), because there would be no way to know at which byte in the TS the new PES begins (remember, you would have missed the beginning of the current PES).
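
To put rough numbers on the cost of that rule, here is an illustrative Python sketch (not FFmpeg behaviour; the 14-byte PES header size and the one-PES-per-frame packaging are assumptions used only for the arithmetic):

# Each PES packet must start on a fresh 188-byte TS packet, and its last
# TS packet is padded out, so many small PES packets are expensive.
TS_PACKET = 188
TS_PAYLOAD = 188 - 4          # usable bytes per TS packet with a minimal header
PES_HEADER = 14               # assumed audio PES header size

def ts_bytes(pes_payload):
    packets = -(-(pes_payload + PES_HEADER) // TS_PAYLOAD)   # ceiling division
    return packets * TS_PACKET

frames = [200] * 10                           # ten ~200-byte AAC frames
print(sum(ts_bytes(f) for f in frames))       # one PES per frame: 3760 bytes on the wire
print(ts_bytes(sum(frames)))                  # hypothetically grouped: 2068 bytes

The grouped figure is hypothetical and is shown only to illustrate where the overhead comes from.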

There are some mitigating factors: the FF FF FF padding will probably be compressed by the networking hardware. Also, if you are using HTTP (instead of UDP or RTP), gzip compression can be enabled (but I doubt it would help much).

answered Oct 12 '22 at 23:10 by vbence