
Throughput and bandwidth difference?

The throughput of a channel is a measure of the amount of data that actually moves through the channel. Why is it substantially less than the bandwidth of the channel?

asked Jun 16 '14 by Sameera Liaynage


People also ask

What is the difference between latency bandwidth and throughput?

Throughput: While bandwidth tells you how much data could theoretically be transferred across the network, throughput indicates how much data was transferred from a source at any given time. Latency: This measures the time it takes for data to get to its destination across the network.

What is the difference between bandwidth throughput and Goodput?

It's all in the name – goodput is data that's good, not undesirable data such as retransmissions or overhead data. Throughput includes bandwidth that's wasted or taken up by retransmissions of data that wasn't successfully transmitted in the first place.

What is throughput in a network?

In data transmission, network throughput is the amount of data moved successfully from one place to another in a given time period, and is typically measured in bits per second (bps), as in megabits per second (Mbps) or gigabits per second (Gbps).


8 Answers

Bandwidth is the maximum amount of data that can travel through a 'channel'.

Throughput is how much data actually does travel through the 'channel' successfully. It can be limited by many different things, including latency and the protocol you are using.

answered Oct 05 '22 by user3699997


Although there are already a few answers to this question, I think some people may still have trouble actually visualising the difference between throughput and bandwidth, just like I did ;) until I read this analogy on Quora (full credit to the original author), which proved really helpful.

Consider a highway that has the capacity to move, say, 200 vehicles at a time, but at a random moment someone observes only, say, 150 vehicles moving through it, perhaps due to a traffic jam somewhere along the way.

In other words, the capacity is 200, but it is not fully utilised all the time; the actual traffic is only 150 out of a maximum of 200.

That is, the bandwidth is 200 per unit time, yet the actual throughput is only 150.
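To make the analogy concrete, here is a minimal sketch; the numbers 200 and 150 are simply the ones from the analogy above:

```python
# A minimal sketch of the highway analogy: utilisation is the ratio of
# observed throughput to the channel's bandwidth (its maximum capacity).

bandwidth = 200   # maximum vehicles the highway can move per unit time
throughput = 150  # vehicles actually observed moving per unit time

utilisation = throughput / bandwidth
print(f"utilisation = {utilisation:.0%}")  # -> utilisation = 75%
```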

I thought it might help someone...

answered Oct 05 '22 by eRaisedToX


The bandwidth of a link is the theoretical maximum amount of data that could be sent over that channel without regard to practical considerations. For example, you could pump 10^9 bits per second down a Gigabit Ethernet link over Cat-6 or fiber-optic cable. Unfortunately, this would be a completely unformatted stream of bits.

To make it actually useful, there is a start-of-frame sequence which precedes any actual data bits, a frame check sequence at the end for error detection, and an idle period between transmitted frames. All of those occupy what are referred to as "bit times", the amount of time it takes to transmit one bit over the line. This is all necessary overhead, but it comes out of the total bandwidth of the link.

And this is only for the lowest level protocol which is stuffing raw data out onto the wire. Once you start adding in the MAC addresses, an IP header and a TCP or UDP header, then you've added even more overhead.

Check out http://en.wikipedia.org/wiki/Ethernet_frame. Similar problems exist for other transmission media.
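To put rough numbers on that overhead, here is a minimal sketch in Python. It assumes the standard Ethernet framing constants (7-byte preamble plus 1-byte start-of-frame delimiter, 14-byte header, 4-byte frame check sequence, 12-byte inter-frame gap); the figures are illustrative, not a precise model of any particular link:

```python
# Rough sketch: payload ("goodput") efficiency of Gigabit Ethernet for a
# given payload size. Framing constants are the standard IEEE 802.3 ones;
# treat the output as illustrative, not as a model of any particular link.

LINK_BPS = 10**9     # raw bandwidth: 1 Gb/s
PREAMBLE_SFD = 8     # preamble (7 B) + start-of-frame delimiter (1 B)
HEADER = 14          # destination MAC + source MAC + EtherType
FCS = 4              # frame check sequence (CRC) at the end of the frame
INTERFRAME_GAP = 12  # mandatory idle period between frames, in byte times

def payload_throughput(payload_bytes: int) -> float:
    """Best-case payload throughput in bits/s for back-to-back frames."""
    wire_bytes = PREAMBLE_SFD + HEADER + payload_bytes + FCS + INTERFRAME_GAP
    return LINK_BPS * payload_bytes / wire_bytes

for payload in (46, 512, 1500):
    print(f"{payload:>5} B payload -> {payload_throughput(payload) / 1e6:6.1f} Mb/s")
# 1500 B payloads reach ~975 Mb/s; minimum-size frames waste almost half the link.
```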

answered Oct 05 '22 by Chris Ryding


As an analogy, consider a water pipe as a channel. The pipe's diameter corresponds to bandwidth (capacity), and the pipe's contents correspond to throughput (usage). In the image below we can see three pipes (or channels), all of which are under-utilised; usage could therefore be increased without the need for a bigger pipe.

[Image: Bandwidth vs. throughput, showing three under-utilised pipes]

answered Oct 05 '22 by user2768


Here is another example to help you imagine what the difference could be depending on the situation.

Imagine

  • You have a 1 MB file (say a photo) that you would like to share with 3 friends.
  • Each of them, however, uses a different cloud storage service: Google Drive, Microsoft OneDrive, and Dropbox. Each of them therefore creates a folder and gives you access so you can upload the file to it.
  • You open the URLs of these three folders in your browser (say, in 3 different tabs) and drag-and-drop the file into each tab, more or less at the same time, to upload it.
  • You are connected to the Internet via a fiber-optic line that can transfer 1 Gb per second (i.e. this is your bandwidth: the maximum capacity at which you can utilise your connection).

Knowing that, theoretically (assuming no data loss, no significant protocol overhead, and that the connections to the cloud storage services offer at least the same bandwidth, etc.), you can transfer one 1 MB file over the 1 Gbps connection in roughly: 1 MB / 1 Gbps = (1 x 10^6 x 8) / (1 x 10^9) = 8 x 10^-3 seconds, or roughly 10 ms.

Now, you have 1 file that you want to upload to 3 destinations, and your connection bandwidth is big enough that you can transfer the same file to all 3 destinations at the same time (we can also assume you have a modern laptop with a multi-core CPU, so the data transfers over the 3 connections to Google Drive, MS OneDrive and Dropbox can run in parallel). Therefore, instead of waiting roughly 30 ms to transfer the same file to 3 destinations one after another, you only have to wait roughly 10 ms, because you have a very good bandwidth.
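As a quick sanity check on that arithmetic, a minimal sketch under the same idealised assumptions (decimal units, no protocol overhead, no data loss):

```python
# Sketch of the idealised calculation above: time to push a 1 MB file
# through a 1 Gbps link, ignoring protocol overhead and data loss.

FILE_BYTES = 1 * 10**6  # 1 MB, decimal
LINK_BPS = 1 * 10**9    # 1 Gbps bandwidth

transfer_s = FILE_BYTES * 8 / LINK_BPS  # bits to send / bits per second
print(f"one upload takes ~{transfer_s * 1000:.0f} ms")  # -> ~8 ms

# With enough bandwidth the three uploads run in parallel, so the wall-clock
# time stays ~10 ms instead of ~30 ms for three sequential uploads.
```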

Now let's consider what protocol is being used and what implications that brings. As you use your browser to upload the file, the protocol is HTTP(S), which runs on top of the TCP protocol. An important property of TCP is that it makes sure a batch of data has reached its destination successfully before sending the next batch: the sender waits for an acknowledgement (an ACK, for short) that the first batch has been received before it starts sending the second.

This means that if it takes 0.5 s to transfer 1 batch of data in one direction and then 0.5 s to receive the ACK, you need to wait 1 second until 1 batch of data is transferred and confirmed received (again, assuming no data loss, so no need to re-transfer the same batch). This round trip required by TCP turns out to be the bottleneck: the delay you experience for one round trip covers transferring 1 batch of data and receiving its acknowledgement. With that in mind, we need to know how big one batch of data is. This can vary, but it is usually 64 KB. So your actual traffic to 1 destination (i.e., the throughput) is bounded by this delay (i.e., the latency) and the batch size, by the following equation:

throughput = batch size / latency

In our example the throughput is 64 KB/s, and since 1 MB splits into roughly 15.6 batches of 64 KB, you will need about 15.6 seconds to transfer the 1 MB file. That is a major slowdown compared with the bandwidth-only calculation we made earlier.
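A minimal sketch of that bound, using the same 64 KB batch size and 1-second round trip assumed above (and counting 1 MB as 1000 KB, as the example does):

```python
# Sketch of the latency-bound calculation above: with TCP waiting for an
# ACK after every batch, throughput is limited to batch size / round trip.

FILE_KB = 1000  # the 1 MB file, counted in KB as the example does
BATCH_KB = 64   # TCP batch (window) size assumed above
RTT_S = 1.0     # 0.5 s to send a batch + 0.5 s for its ACK

throughput_kb_s = BATCH_KB / RTT_S  # -> 64 KB/s
batches = FILE_KB / BATCH_KB        # -> ~15.6 batches
total_s = batches * RTT_S           # -> ~15.6 s for the whole file

print(f"throughput = {throughput_kb_s:.0f} KB/s")
print(f"{batches:.1f} batches -> ~{total_s:.1f} s to transfer 1 MB")
```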

answered Oct 05 '22 by Ivan Hristov


Imagine it this way: a mail truck can carry 5000 sheets of paper each trip, so its bandwidth is 5000 sheets. Does that mean it can carry 5000 letters each trip? Theoretically yes, if each letter didn't need an envelope telling us where it came from, where it is going, and carrying proof of payment (the envelope = protocol headers and footers). But they do, so each letter (1 sheet of paper) requires an envelope (worth about 1 sheet of paper) to get it to its destination. In the worst case (all envelopes contain one-page letters), the truck carries only 2500 sheets of throughput (the data we actually want to send from source to destination: the letters) plus 2500 sheets of overhead (the headers/footers we need to get each letter from source to destination, but that the recipient won't be reading: the envelopes). Throughput (2500 letters) + overhead (2500 envelopes) = bandwidth (5000 sheets of paper). Bigger letters (4 pages) still require only 1 envelope, which shifts the ratio of throughput to overhead higher (think jumbo frames) and makes the truck more efficient: if all the letters were 4-page letters, throughput would rise to 4000 and overhead would drop to 1000, together still equaling the truck's bandwidth of 5000.
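Putting the analogy's numbers into a minimal sketch (all quantities come straight from the paragraph above; the 4-page case plays the role of jumbo frames):

```python
# Sketch of the mail-truck analogy: bandwidth = throughput + overhead.
# Every letter, whatever its size, costs exactly one envelope (one sheet).

TRUCK_SHEETS = 5000  # bandwidth: sheets of paper the truck carries per trip

def split(letter_pages: int) -> tuple[int, int]:
    """Return (throughput, overhead) in sheets for letters of a given size."""
    letters = TRUCK_SHEETS // (letter_pages + 1)  # +1 sheet for the envelope
    return letters * letter_pages, letters

for pages in (1, 4):
    data, envelopes = split(pages)
    print(f"{pages}-page letters: throughput = {data}, overhead = {envelopes}")
# 1-page letters: throughput = 2500, overhead = 2500
# 4-page letters: throughput = 4000, overhead = 1000 (the jumbo-frame case)
```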

answered Oct 05 '22 by HoakieBS


  • Bandwidth - theoretical maximum units of work per unit of time
  • Throughput - actual units of work per unit of time

As opposed to the time per unit of work (speed/latency).

This question on the Network Engineering Stack Exchange has good answers: https://networkengineering.stackexchange.com/questions/10504/what-is-the-difference-between-data-rate-and-latency

answered Oct 05 '22 by Boris


Discussions of "bandwidth" and "throughput" are usually OVER-complicated, like trying to learn calculus in one day. In MOST cases there is NO need for that.

All you need to know in MOST cases is this:

"MB" means mega "BYTES"; OR 8 bits and 8 bits and 8 bits, etc; is being sent down the line. Mb means mega "bits". OR a single bit and bit and bit, etc; down the line.

Example: if your carrier says this is a "6 Mb line", that is the maximum bandwidth, and it means you will ONLY ever get about 750 kilobytes per second of throughput. Why? Because the line only sends a series of bits, and it takes 8 bits to make one byte. Thus you must divide bits/sec by 8 to get bytes/sec: a 6 Mb line can ONLY deliver about 750 thousand bytes/sec.

Another example: I just got a fiber-optic line from AT&T, and they LOVE to talk about bits. So they advertise a whopping "100 megabits per second". Big deal: that is only 12.5 MBytes per second.

Remember, EACH "character" on your keyboard or printed on the screen requires 8 bits, so that the other end can "distinguish" which character it is.

So even though I have a "gargantuan" fiber line touted as "100 Mb", it is really only 12.5 MBytes (characters) per second (100 divided by 8).

Worse: MOST people interchange the terms "MB" and "Mb". Worse yet, EVEN the technician who installed the fiber-optic line and router in my home did not know what the terms meant, and (according to him) his co-workers believed the same, i.e. that the 100 Mb line was a 100 MB line. This is very sad.

A T & T reps on the phone rarely know the difference either. Even some of their supervisors do not know it either. Even sadder.

To summarize: bandwidth is almost always advertised in "bits", while the transfer rates you actually experience are measured in "bytes", and one byte takes up 8 bits. So again: a 100 Mb line (bandwidth) can ONLY produce 12.5 MBytes/sec of actual data (throughput).
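A minimal sketch of the divide-by-8 rule used throughout this answer:

```python
# Sketch of the divide-by-8 arithmetic: advertised line rates are in bits
# per second (Mb/s); divide by 8 to get the bytes per second (MB/s) you
# actually see when transferring files.

def mbps_to_mbytes_per_s(mbps: float) -> float:
    return mbps / 8  # 8 bits per byte

for line_mbps in (6, 100):
    mb_s = mbps_to_mbytes_per_s(line_mbps)
    print(f"{line_mbps:>3} Mb/s line -> {mb_s:.3g} MB/s")
# 6 Mb/s -> 0.75 MB/s (i.e. 750 KB/s); 100 Mb/s -> 12.5 MB/s
```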

For whatever it's worth.

answered Oct 05 '22 by patdee