TCP or UDP? Delays building up on production for video stream

I am creating a video stream from a camera with FFmpeg and the Node.js stream Duplex class.

const { spawn } = require('child_process');

this.ffmpegProcess = spawn('"ffmpeg"', [

  '-i', '-',               // read input from stdin
  '-loglevel', 'info',

  /**
   * MJPEG Stream
   */

  '-map', '0:v',
  '-c:v', 'mjpeg',
  '-thread_type', 'frame', // suggested for performance on StackOverflow.
  '-q:v', '20',            // force quality of image. 2-31 where 2=best, 31=worst
  '-r', '25',              // force framerate
  '-f', 'mjpeg',
  '-',                     // write output to stdout

], {
  shell: true,
  detached: false,
});

On the local network we are testing it with a few computers, and everything is very stable with a maximum latency of about 2 seconds.

However, we have deployed the service to AWS production. When the first computer connects we see a latency of around 2 seconds, but as soon as another client connects to the stream, both clients start lagging badly: the latency builds up to 10+ seconds and, in addition, the video plays in slow motion, with the movement between frames dragging.

Now, I asked about TCP vs. UDP because we are using TCP for the stream, which means the connection goes through the SYN/SYN-ACK/ACK handshake and every packet that is sent is subject to TCP's acknowledgement, ordering and retransmission sequence.

My question is: could TCP really cause delays of 10+ seconds and such slow motion, when it works just fine on the local network?

asked Jan 25 '23 by Ben Beri

2 Answers

Yes, TCP is definitely not the right protocol for this. It can be used, but only with some modification on the source side. Sadly, UDP is not a magic bullet either: without additional logic, UDP will not solve the problem (unless you don't mind seeing broken frames randomly assembled from pieces of other frames).

Explanation

The main features of TCP are that packets are delivered in the correct order and that no packet is lost. These two features are very useful in general, but quite harmful for video streaming.

On the local network, bandwidth is large and packet loss is very low, so TCP works fine. On the internet, bandwidth is limited, and every time the limit is hit packets are lost. TCP deals with packet loss by resending the packet, and each resend extends the delay of the stream.

For a better understanding, imagine that each packet contains a whole frame (a JPEG image). Let's assume the normal delay of the link is 100 ms (the time a frame spends in transit). At 25 FPS you need to deliver a frame every 40 ms. If a frame is lost in transit, TCP ensures that a copy is resent. In the ideal case TCP can detect and fix this in twice the link delay, i.e. 2 × 100 ms (in reality it takes longer, but for simplicity let's keep it at that). So while a single frame is missing, 5 frames pile up in the receiver's queue waiting for it: one missing frame causes 5 frames of delay. And because TCP builds a queue of packets, the delay can never shrink, it can only grow. Even in the ideal situation where bandwidth is sufficient, the delay stays the same at best.
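A quick back-of-the-envelope version of that arithmetic, using the same illustrative numbers (100 ms link delay, 25 FPS):

// Illustrative numbers only, matching the paragraph above.
const oneWayDelayMs = 100;                           // assumed link delay
const fps = 25;
const frameIntervalMs = 1000 / fps;                  // 40 ms between frames
const recoveryMs = 2 * oneWayDelayMs;                // idealised detect-and-resend time
const framesStalled = recoveryMs / frameIntervalMs;  // 200 / 40 = 5 frames
console.log(`One lost frame keeps ~${framesStalled} later frames waiting in the queue.`);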

How to fix it

What I did in Node.js was fix the source side. Packets can be skipped in TCP only if the source does it; TCP has no way to do it by itself.

For that purpose I used the drain event. The idea behind the algorithm is that ffmpeg generates frames at its own speed, while Node.js reads the frames and always keeps only the last one received; the outgoing buffer holds a single frame. If sending a frame is delayed due to network conditions, the incoming images from ffmpeg are silently discarded (this compensates for low bandwidth), except for the last one received. When the TCP buffer signals (via the drain event) that a frame was sent correctly, Node.js takes the last received image and writes it to the stream.

This algorithm regulates itself. If the sending bandwidth is sufficient (sending is faster than ffmpeg generates frames), no frame is discarded and the full 25 fps is delivered. If the bandwidth can only transfer half of the frames, on average every second frame is discarded, so the receiver sees 12.5 fps but the delay does not grow.
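A minimal sketch of that idea in Node.js (assumptions: clientSocket is the TCP socket to one viewer and onFrameFromFfmpeg is called with each complete JPEG frame; neither name comes from the original code):

// Sketch: keep only the newest complete frame; send the next one only when
// the socket's buffer has drained. Older pending frames are silently dropped.
let latestFrame = null;   // last complete JPEG received from ffmpeg
let readyToSend = true;   // true while socket.write() has not signalled backpressure

function onFrameFromFfmpeg(frame) {
  latestFrame = frame;    // overwrite: any older pending frame is discarded
  maybeSend();
}

function maybeSend() {
  if (!readyToSend || latestFrame === null) return;
  const frame = latestFrame;
  latestFrame = null;
  // write() returns false when the socket buffer is full
  readyToSend = clientSocket.write(frame);
}

clientSocket.on('drain', () => {  // buffer flushed, safe to send the next frame
  readyToSend = true;
  maybeSend();
});

The only state is the newest frame plus a "ready" flag, which is exactly the one-frame outgoing buffer described above.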

Probably the most complicated part of this algorithm is correctly slicing the byte stream into frames.
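One way to do that slicing, as a sketch that feeds the sender above (it assumes ffmpeg's raw MJPEG output, where each JPEG begins with the FF D8 marker and ends with FF D9; ffmpegProcess and onFrameFromFfmpeg are the placeholder names from the earlier sketch):

// Sketch: split ffmpeg's MJPEG byte stream into individual JPEG frames
// by scanning for the JPEG start (FF D8) and end (FF D9) markers.
let pending = Buffer.alloc(0);

ffmpegProcess.stdout.on('data', (chunk) => {
  pending = Buffer.concat([pending, chunk]);
  let start = pending.indexOf(Buffer.from([0xff, 0xd8]));
  let end = pending.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
  while (start !== -1 && end !== -1) {
    onFrameFromFfmpeg(pending.subarray(start, end + 2)); // one complete JPEG
    pending = pending.subarray(end + 2);                 // keep the leftover bytes
    start = pending.indexOf(Buffer.from([0xff, 0xd8]));
    end = pending.indexOf(Buffer.from([0xff, 0xd9]), start + 2);
  }
});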

answered Mar 05 '23 by Tomas


On either the server or a client, run a Wireshark trace; this will tell you the exact "delay" of the packets and which side is not doing the right thing. It sounds like the server is not living up to your expectations. Make sure the stream is UDP.

answered Mar 05 '23 by JWP