 

Proper way to calculate Link Throughput

I have read some articles online and have a pretty good idea about TCP and UDP in general. However, a few points are still not completely clear to me.

What is the proper way to calculate throughput?

(Can't we just divide the total number of bytes received by the total time taken?)

What is the key feature of TCP that gives it much higher throughput than UDP?

UPDATE:

I understand that TCP uses a window, meaning only that many segments can be sent before waiting for acknowledgements. But in UDP, segments are sent continuously without waiting for acknowledgements at all, so there is no such overhead in UDP. Why, then, is the throughput of TCP much higher than that of UDP?

Lastly,

Is this true?

TCP throughput = (TCP Window Size / RTT) = BDP / RTT = (Link Speed in Bytes/sec * RTT)/RTT = Link Speed in Bytes/sec

If so, then TCP throughput is always equal to the known link speed. And since the RTTs cancel each other out, TCP throughput does not even depend on RTT.

I have seen in some network analysis tools like iperf, PassMark PerformanceTest, etc. that the TCP/UDP throughput changes with block size.

How is throughput dependent on block size? Is block size equal to the TCP window or the UDP datagram size?

asked Mar 24 '16 by user3243499

1 Answer

What is the proper way to calculate throughput?

There are multiple ways, depending on what exactly you want to measure. They all boil down to dividing some number of bits (or bytes) by some duration, as you mention; what varies is which bits you are counting or (more rarely) which moments in time you are considering for measuring the duration.

The factors you need to take into account are:

At which layer in the network stack are you measuring throughput?

If you measure at the application layer, all that matters is what useful data you transmit to the other endpoint. For example, if you are transferring a file of 6 kB, the amount of data you count when measuring throughput is 6 kB (that is 6,000 bytes, not bits, and note the multiplier of 1000, not 1024; these conventions are common in networking).

This is usually called goodput and it may be different from what is actually sent at the transport layer (as in TCP or UDP), for two reasons:

1. Overhead due to headers

Each layer in the network stack adds a header to the data, which introduces some overhead due to its transmission time. Moreover, the transport layer breaks your data into segments; this is because the network layer (as in IPv4 or IPv6) has a maximum packet size called the MTU, typically 1,500 B in Ethernet networks. This value includes the network layer header (e.g. the IPv4 header, which is variable in length but usually 20 B long) and the transport layer header (the TCP header is 20 B at minimum and can grow to 60 B with options; assume 40 B here). This leads to a maximum segment size, MSS (the number of data bytes, without headers, in one segment), of 1500 - 40 - 20 = 1440 bytes.

Thus if we want to send 6 kB of application-layer data, we must break it into 5 segments: 4 of 1440 bytes each and one of 240 bytes. At the network layer we then end up sending 5 packets: 4 of 1500 bytes each and one of 300 bytes, for a total of 6.3 kB.

Here I have not considered the fact that the link layer (as in Ethernet) adds its own header and trailer, which increases the overhead further. For Ethernet this is 14 bytes for the Ethernet header (plus 4 bytes if a VLAN tag is used), a CRC of 4 bytes, an 8-byte preamble and a 12-byte inter-frame gap, for a total of about 38 bytes per packet without VLAN tagging.

If you consider a fixed-rate link, say of 10 Mb/s, depending on what you measure you will get a different throughput. Normally you want one of these:

  • The goodput, i.e. application layer throughput, if what you want to measure is application performance. For this example, you divide 6 kB by the transfer duration.
  • The link-layer throughput, if what you want to measure is network performance. For this example, you divide 6 kB + TCP overhead + IP overhead + Ethernet overhead = 6.3 kB + 5 * 38 B = 6490 B by the transfer duration.
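
For illustration, here is a small Python sketch that reproduces the numbers from this example; the header sizes and the Ethernet framing overhead are the assumptions of this example, not universal constants:

    import math

    # Sketch: goodput vs. link-layer throughput for the 6 kB example above.
    # The per-packet overheads are assumptions, in bytes; real values vary
    # with TCP/IP options and VLAN tagging.
    APP_DATA = 6_000                # application payload (6 kB; 1 kB = 1000 B)
    TCP_HDR = 40                    # assumed TCP header incl. options (20 B minimum)
    IP_HDR = 20                     # IPv4 header without options
    ETH_OVERHEAD = 14 + 4 + 8 + 12  # MAC header + CRC + preamble + inter-frame gap
    MTU = 1_500
    MSS = MTU - IP_HDR - TCP_HDR    # 1440 B of payload per segment

    n_packets = math.ceil(APP_DATA / MSS)                 # 5 packets
    ip_bytes = APP_DATA + n_packets * (TCP_HDR + IP_HDR)  # 6300 B at the network layer
    wire_bytes = ip_bytes + n_packets * ETH_OVERHEAD      # 6490 B on the wire

    transfer_time = 0.01            # seconds; how you measure this is discussed below
    print("goodput:        ", APP_DATA / transfer_time, "B/s")
    print("link-layer rate:", wire_bytes / transfer_time, "B/s")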

2. Retransmission overheads

The Internet is a best-effort network, meaning that the packets will be delivered if possible, but may also be dropped. Packet drops are corrected by the transport layer, in case of TCP; for UDP, there is no such mechanism, which means that either the application does not care if some parts of the data do not get delivered, or the application implements retransmission itself on top of UDP.

Retransmissions reduce goodput for two reasons:

a. Some data needs to be sent again, which takes time. This introduces a delay which is inversely proportional to the rate of the slowest link in the network between the sender and the receiver (a.k.a. the bottleneck link).

b. Detecting that some data was not delivered requires feedback from the receiver to the sender. Due to propagation delays (sometimes called latency, caused by the finite speed of light in the cable), this feedback reaches the sender only after some delay, which slows down the transmission even more. In most practical cases, this is the most significant contribution to the extra delay caused by retransmission.

Clearly, if you use UDP instead of TCP and do not care about packet loss, you will get better performance. But for many applications data loss cannot be tolerated, so such a measurement is meaningless.

There are some applications that do use UDP for transferring data. One is BitTorrent, which may use either TCP or a protocol they designed called uTP, which emulates TCP on top of UDP but aims at being more efficient with many parallel connections. Another transport protocol implemented over UDP is QUIC, which also emulates TCP and offers multiplexing of multiple parallel transfers over a single connection, as well as forward error correction to reduce retransmissions.

I will discuss forward error correction a little, since it is related to your question about throughput. A naive way of implementing it is to send every packet twice; if one copy gets lost, the other still has a chance of being received. Since a retransmission is only needed when both copies are lost, this greatly reduces the number of retransmissions, but it also halves your goodput, since you send redundant data (note that the network- or link-layer throughput remains the same!). In some cases this is fine, especially if the latency is very large, such as on intercontinental or satellite links.

Moreover, some mathematical methods exist where you don't have to send a full copy of the data: for every n packets you send, you send one additional redundant packet which is the XOR (or some other arithmetic operation) of them. If the redundant one gets lost, it doesn't matter; if one of the n packets gets lost, you can reconstruct it from the redundant one and the other n-1. You can thus tune the overhead introduced by forward error correction to whatever amount of bandwidth you can spare.
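
To make the XOR-parity idea concrete, here is a minimal Python sketch; this is only an illustration of the principle, not how QUIC or uTP actually encode their packets:

    from functools import reduce

    # Toy XOR-based forward error correction over groups of n packets.
    # One parity packet per group lets the receiver rebuild any single lost
    # packet in that group without asking for a retransmission.

    def xor_parity(packets):
        """XOR all packets together (assumes equal length, e.g. padded to the MSS)."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

    def recover(received, parity):
        """Rebuild the single missing packet (marked as None) from the rest."""
        missing = received.index(None)
        others = [p for p in received if p is not None]
        received[missing] = xor_parity(others + [parity])
        return received

    group = [b"AAAA", b"BBBB", b"CCCC"]   # n = 3 data packets
    parity = xor_parity(group)            # 1 redundant packet -> 33% extra bandwidth
    damaged = [b"AAAA", None, b"CCCC"]    # the second packet was lost in transit
    print(recover(damaged, parity))       # [b'AAAA', b'BBBB', b'CCCC']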

How you are measuring the transfer time

Is the transfer completed when the sender has finished sending the last bit over the wire, or does it also include the time it takes for the last bit to travel to the receiver? Additionally, does it include the time it takes to get a confirmation from the receiver, stating that all data has been received successfully and no retransmission is needed?

It really depends on what you want to measure. Note that for large transfers, one extra round-trip-time is insignificant in most cases (unless you are communicating, for instance, with a probe on Mars).

What is the key feature of TCP that gives it much higher throughput than UDP?

This is not true, although it is a common misconception.

In addition to retransmitting data when needed, TCP will also adjust its sending rate so that it will not cause packet drops by congesting the network. The adjustment algorithm has been perfected over decades, and usually converges quickly to the maximum rate supported by the network (actually, the bottleneck link). For this reason it is usually difficult to beat TCP in throughput.
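
As a toy illustration of the additive-increase/multiplicative-decrease idea behind that adjustment (real algorithms such as CUBIC or BBR are considerably more sophisticated), consider:

    # Toy AIMD sketch: the congestion window grows by one segment per RTT until
    # a drop is detected, then it is halved; it oscillates around the capacity.
    def aimd(capacity_segments, rtts):
        cwnd, history = 1, []
        for _ in range(rtts):
            history.append(cwnd)
            if cwnd > capacity_segments:   # sending above capacity -> loss detected
                cwnd = max(1, cwnd // 2)   # multiplicative decrease
            else:
                cwnd += 1                  # additive increase
        return history

    print(aimd(capacity_segments=20, rtts=40))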

With UDP, there is no rate limiting at the sender. UDP lets the application send as much as it wants. But if you try to send more than the network can handle, some of the data will be dropped, lowering your throughput, and also making the admin of the network you are congesting very angry. This means that sending UDP traffic at high rates is impractical (unless the goal is to DoS a network).

Some media applications use UDP but rate-limit the transfer at the sender to a fairly small rate. This is typical of VoIP applications or Internet radio, where you require very little throughput but low latency. I suppose this is one of the reasons for the misconception that UDP is slower than TCP; that is not the case: UDP can be as fast as the network allows.

As I said before, there are protocols such as uTP or QUIC, implemented over UDP, which achieve performance similar to TCP.

Is this true?

TCP throughput = (TCP Window Size / RTT)

Without packet loss (and retransmissions), this is correct.

TCP throughput = BDP / RTT = (Link Speed in Bytes/sec * RTT)/RTT = Link Speed in Bytes/sec

This is correct only if the window size is configured to the optimal value. BDP/RTT is the optimal (maximum possible) transfer rate in the network. Most modern operating systems should be able to auto-configure it optimally.
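
A quick worked example with made-up numbers (a 100 Mb/s link and a 50 ms RTT):

    # Bandwidth-delay product: how much data must be "in flight" to keep the
    # pipe full, and the throughput an optimally-sized window achieves.
    link_speed = 100e6 / 8          # 100 Mb/s link, in bytes per second
    rtt = 0.050                     # 50 ms round-trip time

    bdp = link_speed * rtt          # 625,000 B = 625 kB
    window = bdp                    # window configured to the optimal value
    throughput = window / rtt       # 12.5e6 B/s, i.e. back to the link speed
    print(f"BDP = {bdp / 1e3:.0f} kB, throughput = {throughput * 8 / 1e6:.0f} Mb/s")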

How is throughput dependent on block size? Is block size equal to the TCP window or the UDP datagram size?

I don't see any block size in the iperf documentation.

If you are referring to the TCP window size: if it is smaller than the BDP, then your throughput will be suboptimal (because you waste time waiting for ACKs instead of sending more data; I can explain further if needed). If it is equal to or larger than the BDP, then you achieve optimal throughput.
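
Assuming no packet loss, this can be summarized as a simple min(); a sketch using the same made-up 100 Mb/s / 50 ms link as above:

    # Without losses, steady-state TCP throughput is capped both by the link
    # capacity and by how much unacknowledged data the window allows per RTT.
    def tcp_throughput(window_bytes, rtt_s, link_bytes_per_s):
        return min(window_bytes / rtt_s, link_bytes_per_s)

    link = 100e6 / 8                        # 100 Mb/s in bytes/s
    rtt = 0.050                             # 50 ms
    for window in (64e3, 256e3, 625e3, 1e6):
        rate = tcp_throughput(window, rtt, link)
        print(f"window {window / 1e3:4.0f} kB -> {rate * 8 / 1e6:5.1f} Mb/s")
    # A 64 kB window (the classic un-scaled maximum) only reaches ~10 Mb/s here;
    # any window >= the 625 kB BDP saturates the link at 100 Mb/s.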

answered by o9000