
How to minimize the delay in a live streaming with ffmpeg

I have a problem: I would like to do a live stream with ffmpeg from my webcam.

  1. I launch the ffserver and it works.
  2. From another terminal I launch ffmpeg to stream with this command and it works:

    sudo ffmpeg -re -f video4linux2 -i /dev/video0 -fflags nobuffer -an http://localhost:8090/feed1.ffm 
  3. In my configuration file I have this stream:

    <Stream test.webm>
        Feed feed1.ffm
        Format webm
        NoAudio

        # Video settings
        VideoCodec libvpx
        VideoSize 720x576           # Video resolution
        VideoFrameRate 25           # Video FPS

        # Parameters passed to encoder
        # (same as ffmpeg command-line parameters)
        AVOptionVideo flags +global_header
        AVOptionVideo cpu-used 0
        AVOptionVideo qmin 10
        AVOptionVideo qmax 42
        #AVOptionVideo quality good

        PreRoll 5
        StartSendOnKey
        VideoBitRate 400            # Video bitrate
    </Stream>
  4. I launch the stream with

    ffplay http://192.168.1.2:8090/test.webm

It works, but I have a delay of 4 seconds, and I would like to minimize this delay because it is essential for my application. Thanks.

Pasquale C. asked May 20 '13



2 Answers

I found three commands that helped me reduce the delay of live streams. The first command is very basic and straightforward; the second combines it with other options, which might work differently in each environment; and the last is a hacky version that I found in the documentation. That one was useful at the beginning, but now the first option is more stable.

1. Basic using -fflags nobuffer

This format flag reduces the latency introduced by buffering during the initial analysis of input streams. This command noticeably reduces the delay and does not introduce audio glitches.

ffplay -fflags nobuffer -rtsp_transport tcp rtsp://<host>:<port> 

2. Advanced: -flags low_delay and other options

We can combine the previous -fflags nobuffer format flag with other generic and advanced options for a more elaborate command:

  • -flags low_delay: this generic codec flag forces low delay.
  • -framedrop: drops video frames if the video is out of sync. Enabled by default if the master clock is not set to video. Use this option to enable frame dropping for all master clock sources.
  • -strict experimental: -strict specifies how strictly to follow the standards, and experimental allows non-standardized, experimental things, i.e. experimental (unfinished / work in progress / not well tested) decoders and encoders. This option is optional; remember that experimental decoders can pose a security risk, so do not use them to decode untrusted input.
ffplay -fflags nobuffer -flags low_delay -framedrop \
  -strict experimental -rtsp_transport tcp rtsp://<host>:<port>

This command might introduce some audio glitches, but rarely.

You can also try adding:

  • -avioflags direct to reduce buffering, and
  • -fflags discardcorrupt to discard corrupted packets,

but I think this is a very aggressive approach. It might break the audio-video synchronization.

ffplay -fflags nobuffer -fflags discardcorrupt -flags low_delay \
  -framedrop -avioflags direct -rtsp_transport tcp rtsp://<host>:<port>

3. A hacky option (found on the old documentation)

This is a debugging solution based on setting -probesize and -analyzeduration to low values so that your stream starts up more quickly.

  • -probesize 32 sets the probing size in bytes (i.e. the size of the data to analyze to get stream information). A higher value enables detecting more information in case it is dispersed in the stream, but increases latency. It must be an integer no less than 32; the default is 5000000.
  • -analyzeduration 0 specifies how many microseconds are analyzed to probe the input. A higher value enables detecting more accurate information, but increases latency. It defaults to 5000000 microseconds (5 seconds).
  • -sync ext sets the master clock to an external source to try to stay realtime; the default is audio. The master clock is used to control audio-video synchronization, so this option sets the synchronization type (audio, video, or ext).
ffplay -probesize 32 -analyzeduration 0 -sync ext -rtsp_transport tcp rtsp://<host>:<port> 

This command might introduce some audio glitches sometimes.

The -rtsp_transport option can be set to udp or tcp according to your stream; for this example I'm using tcp.
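Since the three variants above differ only in their flag sets, they can be assembled from one template. A minimal sketch, assuming a hypothetical stream URL and port (the command is only echoed here so you can review it before running it):

```shell
# Assemble a low-latency ffplay command from its parts.
# URL and TRANSPORT are placeholders; adjust them to your setup.
URL="rtsp://192.168.1.2:8554/stream"   # hypothetical stream URL
TRANSPORT="tcp"                        # or udp, matching your server
FLAGS="-fflags nobuffer -flags low_delay -framedrop"
CMD="ffplay $FLAGS -rtsp_transport $TRANSPORT $URL"
echo "$CMD"   # replace echo with eval "$CMD" to actually play the stream
```

Keeping the flags in a variable makes it easy to switch between the basic and the aggressive variants without retyping the whole command.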

Teocci answered Sep 22 '22


FFmpeg's streaming guide has a specific section on how to reduce latency. I haven't tried all of their suggestions yet: http://ffmpeg.org/trac/ffmpeg/wiki/StreamingGuide#Latency

They make a particular note about the latency ffplay introduces:

By default, ffplay introduces a small latency of its own. Also useful is mplayer with its -nocache for testing latency (or -benchmark). Using the SDL output is also said to view frames with minimal latency: ffmpeg ... -f sdl -
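The guide's SDL suggestion can be sketched as follows; the capture device path is an assumption, and the command is only echoed here so you can check it before running it on real hardware:

```shell
# Hypothetical example: send a v4l2 webcam straight to the SDL output
# device, which displays frames with minimal buffering.
DEVICE="/dev/video0"   # assumed capture device; adjust to your webcam
CMD="ffmpeg -f video4linux2 -i $DEVICE -fflags nobuffer -f sdl \"Live preview\""
echo "$CMD"   # run the printed command manually once the device is confirmed
```

This skips the ffserver/ffplay hop entirely, so it is mainly useful for checking how much latency the capture side alone contributes.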

Glen Blanchard answered Sep 26 '22