How to only barely use the bandwidth available to your program

I am making a program that will download a bunch of different items. My language has cheap concurrency, so my first thought was to download them all at once. The problem is that using more concurrency than your bandwidth supports is bad: if I downloaded them all at once, the user would have to wait for all of them before getting any of them.

Let's say you are downloading 10 items that can each be downloaded at 7 MB/s, and you have 20 MB/s of download speed. The program should start downloading only the first three items (two at full speed, with the third taking the remaining 6 MB/s), and should only start new items once old items have finished and bandwidth is free. Note also that in general the items won't have the same download speed.

If I had some programmatic way to check network saturation, this would be easy (just check whether it's saturated before spawning new threads).

asked Apr 18 '15 by PyRulez


1 Answer

As has been pointed out in the comments, you can't do this well enough to make any guarantees. But suppose you want to do your best anyway.

There are two parts to this problem:

  1. Determine the bandwidth available.
  2. Control the bandwidth being consumed.

Roughly controlling the bandwidth consumed can be accomplished in a user-space program by limiting the rate at which you read from the socket. The TCP/IP stack will notify the other end of the connection that the queue it is maintaining on your application's behalf has filled up, and nothing more will be sent (this is ordinary TCP flow control: the advertised receive window shrinks toward zero). A convenient way of implementing that rate limiting is with token buckets.

Quick token bucket implementation:

    int bucket = 0;   /* bytes we are currently allowed to read */

    /* Refill thread: deposit one second's allowance every second. */
    start_thread({
        while (transfer_in_progress) {
            bucket += bytes_per_second_limit;
            sleep(1);
        }
    });

    /* Reader: never request more bytes than the bucket holds. */
    while (transfer_in_progress) {
        bytesread = read(socket, buffer, min(bucket, buffersize));
        bucket -= bytesread;
    }
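For concreteness, here is roughly how that pseudocode might look as compilable C with POSIX threads. This is a sketch only: the socket argument, buffer size, 1 MiB/s starting limit, and the 10 ms back-off when the bucket is empty are all assumptions for the sake of the example, not part of the original answer.

    #include <pthread.h>
    #include <sys/types.h>
    #include <unistd.h>

    static long bucket = 0;                        /* unspent allowance, in bytes */
    static long bytes_per_second_limit = 1 << 20;  /* e.g. start at 1 MiB/s       */
    static int  transfer_in_progress   = 1;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Refill thread: deposit one second's allowance every second. */
    static void *refill(void *arg)
    {
        (void)arg;
        while (transfer_in_progress) {
            pthread_mutex_lock(&lock);
            bucket += bytes_per_second_limit;
            pthread_mutex_unlock(&lock);
            sleep(1);
        }
        return NULL;
    }

    /* Reader: never ask the kernel for more bytes than the bucket holds. */
    static void rate_limited_read(int sock)
    {
        char buffer[65536];
        pthread_t t;
        pthread_create(&t, NULL, refill, NULL);

        while (transfer_in_progress) {
            pthread_mutex_lock(&lock);
            long allowed = bucket < (long)sizeof(buffer)
                         ? bucket : (long)sizeof(buffer);
            pthread_mutex_unlock(&lock);

            if (allowed <= 0) {      /* bucket empty: wait for the refill */
                usleep(10000);       /* 10 ms; an arbitrary back-off      */
                continue;
            }
            ssize_t n = read(sock, buffer, (size_t)allowed);
            if (n <= 0) {            /* EOF or error ends the transfer    */
                transfer_in_progress = 0;
                break;
            }
            pthread_mutex_lock(&lock);
            bucket -= n;
            pthread_mutex_unlock(&lock);
            /* ... hand buffer[0..n) to whatever consumes the download ... */
        }
        pthread_join(t, NULL);
    }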

If bytes_per_second_limit is set to roughly the bandwidth available, expressed in bytes/second, then that loop will read as fast as the connection allows. If the connection is faster, you'll be limited to bytes_per_second_limit. If the connection is slower, then bucket will grow without bound, at a rate equal to the difference between the rate limit and the bandwidth actually available.

Hmm!

If you run another thread and keep an eye on bucket, you can watch for two conditions:

  1. If bucket is always 0, then there is more bandwidth available, and you can increase bytes_per_second_limit, limited perhaps by your most recent best guess for available bandwidth (from #2). Or start an additional download.
  2. If bucket is larger than the last time you looked, and the last few seconds of data points appear to indicate continuing growth (maybe do a linear regression; whatever you like), the rate of that growth expressed in bytes/second is how much you can reduce bytes_per_second_limit by in order to match your download rate with the available bandwidth.

The problem with all this is that there is no guarantee your bandwidth will stay constant. The thread monitoring bucket might bounce back and forth between increasing the rate and limiting it, so I'd suggest averaging over at least 10 or 20 seconds before making rate limit changes.
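To show how those two conditions and the averaging advice might fit together, here is a rough sketch of such a monitor thread, continuing the C sketch above (same globals and mutex). The 20-second window, the once-per-window adjustment, and the 10% increase step are arbitrary choices for illustration, and the growth estimate is just the endpoint difference rather than a real linear regression.

    #define WINDOW 20  /* seconds of history to average over */

    static void *monitor(void *arg)
    {
        long samples[WINDOW];
        int n = 0;
        (void)arg;

        while (transfer_in_progress) {
            pthread_mutex_lock(&lock);
            samples[n % WINDOW] = bucket;   /* ring buffer of bucket levels */
            pthread_mutex_unlock(&lock);
            n++;
            sleep(1);
            if (n < WINDOW || n % WINDOW != 0)
                continue;                   /* adjust only on a full window */

            /* Condition 1: bucket pinned at zero -> the limit is the
               bottleneck; raise it (or start another download).        */
            int pinned_at_zero = 1;
            for (int i = 0; i < WINDOW; i++)
                if (samples[i] > 0)
                    pinned_at_zero = 0;

            pthread_mutex_lock(&lock);
            if (pinned_at_zero) {
                bytes_per_second_limit += bytes_per_second_limit / 10;
            } else {
                /* Condition 2: steady growth means the limit exceeds the
                   real bandwidth; the average growth rate in bytes/second
                   is how much to take off the limit.                     */
                long growth = (samples[WINDOW - 1] - samples[0]) / (WINDOW - 1);
                if (growth > 0)
                    bytes_per_second_limit -= growth;
            }
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

In this arrangement you would run refill, the reader loop, and this monitor as three threads over the same connection.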

answered Nov 15 '22 by Jay Kominek