A page contains links to a set of .zip files, all of which I want to download. I know this can be done with wget or curl. How is it done?
The basic curl syntax. To grab a file, run: $ curl https://your-domain/file.pdf. To get a file over the FTP or SFTP protocol: $ curl ftp://ftp-your-domain-name/file.tar.gz. To choose the output file name while downloading, use -o: $ curl -o file.pdf https://your-domain/file.pdf
To download multiple files at the same time, pass -O followed by each URL you wish to download; -O saves each file under its remote name (see the example below). If you run curl without any options except the URL, the content of the URL (whether it's a web page or a binary file, such as an image or a zip file) is printed to the screen.
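For example, to fetch two archives in one invocation and save each under its remote file name (the example.com URLs here are placeholders, not files from the original page):

$ curl -O https://example.com/files/a.zip -O https://example.com/files/b.zip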
With wget, the command is:
wget -r -np -l 1 -A zip http://example.com/download/
The options mean:
-r, --recursive          specify recursive download
-np, --no-parent         don't ascend to the parent directory
-l, --level=NUMBER       maximum recursion depth (inf or 0 for infinite)
-A, --accept=LIST        comma-separated list of accepted extensions
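If you want the .zip files saved directly into the current directory rather than mirroring the site's directory structure, a common variant (a sketch, using the same placeholder URL) adds -nd (--no-directories):

$ wget -r -np -l 1 -nd -A zip http://example.com/download/

Note that curl has no recursive download mode, so for crawling a page and grabbing all linked .zip files, wget is the usual choice.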