I am getting a bunch of relatively small pages from a website and was wondering if I could somehow do it in parallel in Bash. Currently my code looks like this, but it takes a while to execute (I think what is slowing me down is the latency in the connection).
for i in {1..42}
do
wget "https://www.example.com/page$i.html"
done
I have heard of using xargs, but I don't know anything about that and the man page is very confusing. Any ideas? Is it even possible to do this in parallel? Is there another way I could go about attacking this?
Much preferable to pushing wget into the background using & or -b, you can use xargs to the same effect, and better.
The advantage is that xargs will synchronize properly with no extra work, which means that you are safe to access the downloaded files (assuming no error occurred). All downloads will have completed (or failed) once xargs exits, and you know from the exit code whether all went well. This is much preferable to busy-waiting with sleep and testing for completion manually.
Assuming that URL_LIST is a variable containing all the URLs (it can be constructed with a loop as in the OP's example, but could also be a manually generated list), running this:
echo $URL_LIST | xargs -n 1 -P 8 wget -q
will pass one argument at a time (-n 1) to wget, and execute at most 8 parallel wget processes at a time (-P 8). xargs returns after the last spawned process has finished, which is just what we wanted to know. No extra trickery needed.
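For the OP's case, URL_LIST could be built with a loop before invoking xargs. A minimal sketch, assuming the page range and host from the question:
# Build the list of URLs (page1.html .. page42.html from the question)
URL_LIST=""
for i in {1..42}
do
    URL_LIST="$URL_LIST https://www.example.com/page$i.html"
done
# One URL per invocation (-n 1), at most 8 wget processes in parallel (-P 8);
# $URL_LIST is left unquoted on purpose so the shell splits it into separate arguments
echo $URL_LIST | xargs -n 1 -P 8 wget -q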
The "magic number" of 8 parallel downloads that I've chosen is not set in stone, but it is probably a good compromise. There are two factors in "maximising" a series of downloads:
One is filling "the cable", i.e. utilizing the available bandwidth. Assuming "normal" conditions (the server has more bandwidth than the client), this is already achieved with one or at most two downloads. Throwing more connections at the problem only results in packets being dropped and TCP congestion control kicking in, leaving N downloads with asymptotically 1/N bandwidth each, for the same net effect (minus the dropped packets, minus window size recovery). Dropped packets are a normal occurrence in an IP network; this is how congestion control is supposed to work (even with a single connection), and normally the impact is practically zero. However, an unreasonably large number of connections amplifies this effect, so it can become noticeable. In any case, it doesn't make anything faster.
The second factor is connection establishment and request processing. Here, having a few extra connections in flight really helps. The problem one faces is the latency of two round-trips (typically 20-40ms within the same geographic area, 200-300ms inter-continental) plus the odd 1-2 milliseconds that the server actually needs to process the request and push a reply to the socket. This is not a lot of time per se, but multiplied by a few hundred/thousand requests, it quickly adds up.
Having anything from half a dozen to a dozen requests in-flight hides most or all of this latency (it is still there, but since it overlaps, it does not sum up!). At the same time, having only a few concurrent connections does not have adverse effects, such as causing excessive congestion, or forcing a server into forking new processes.
Just running the jobs in the background is not a scalable solution: if you are fetching 10,000 URLs you probably only want to fetch a few (say 100) in parallel. GNU Parallel is made for that:
seq 10000 | parallel -j100 wget https://www.example.com/page{}.html
See the man page for more examples: http://www.gnu.org/software/parallel/man.html#example__download_10_images_for_each_of_the_past_30_days
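If the URLs do not follow a simple numeric pattern, parallel can also read them from standard input, one per line. A minimal sketch, assuming a hypothetical urls.txt containing one URL per line:
# Fetch at most 100 URLs at a time; {} is replaced by each input line
cat urls.txt | parallel -j100 wget -q {}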
You can use the -b option:
wget -b "https://www.example.com/page$i.html"
If you don't want log files, add the option -o /dev/null.
-o FILE log messages to FILE.
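Applied to the OP's loop, that would look roughly like this (a sketch using the question's URL pattern; note that the loop returns immediately, so there is no easy way to tell when all downloads have finished):
for i in {1..42}
do
    # -b: detach wget into the background; -o /dev/null: discard the log output
    wget -b -o /dev/null "https://www.example.com/page$i.html"
done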