The curl command line utility is mainly intended for one-shot file transfers and doesn't support wildcards per se. A common approach is to generate the list of files with another command, such as ls, grep, or dir, and then use curl to upload the files from that list.
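A minimal sketch of that approach (the local directory and the destination URL are placeholders; curl -T uploads the named file):

for f in ./outbox/*; do
  curl -T "$f" "ftp://example.com/incoming/"
done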
The curl command transfers data from a server and, by default, writes it to standard output, whereas the wget command saves the downloaded data as a file. This is the major difference between the two commands.
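For example (example.com and file.txt are placeholders):

curl https://example.com/file.txt    # body is written to standard output
wget https://example.com/file.txt    # saved as file.txt in the current directory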
Download multiple files with one command. Instead of downloading files one by one, you can fetch several of them by running a single command: pass -O followed by the URL of each file you wish to download, as shown below.
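For example (the host and file names are placeholders):

curl -O https://example.com/file1.zip -O https://example.com/file2.zip
# recent curl releases also accept -Z (--parallel) to fetch the listed URLs concurrently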
Answer: The asterisk (*) and the question mark (?) are the two wildcard characters commonly used when searching for information.
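For example, in a shell (the file names are placeholders):

ls report*.txt   # * matches any run of characters: report.txt, report-2024.txt, ...
ls report?.txt   # ? matches exactly one character: report1.txt, but not report12.txt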
You can't use wildcards in wget, but the -A flag should work. From the wget manpage:
You want to download all the gifs from a directory on an http server. You tried wget http://www.server.com/dir/*.gif, but that didn't work because http retrieval does not support globbing. In that case, use:
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
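A slightly expanded sketch (same placeholder path; -nd is an extra option that puts all the gifs into the current directory instead of recreating the server's directory tree):

# -r -l1        recurse, but only one level deep
# --no-parent   never ascend into the parent directory
# -A.gif        keep only files whose names end in .gif
# -nd           don't recreate the remote directory structure locally
wget -r -l1 -nd --no-parent -A.gif http://www.server.com/dir/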
Edit: found a related question
Regarding directories: there's a utility called LFTP, which has some support for globbing. Take a look at its manpage. There's another question on Linux & Unix that covers its usage in a scenario similar to yours.
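A minimal sketch of the lftp approach, assuming the same placeholder server as above (the exact mirror options can vary between lftp versions):

lftp -e 'mirror -I "*.gif" dir/ ./gifs; quit' http://www.server.com/

Here mirror -I (include-glob) copies only the files matching the glob from the remote dir/ into a local ./gifs directory.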
If you are able to find a pattern in your query, you can use bash brace expansion to do this task.
For example, in your case, you may use something like:
wget www.download.example.com/dir/{version,old}/package{00..99}.rpm
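Since the braces are expanded by the shell before wget ever runs, you can preview the resulting URL list with echo (a reduced range for readability):

echo www.download.example.com/dir/{version,old}/package{00..02}.rpm

This prints the six URLs (version/package00.rpm through old/package02.rpm) that wget would receive as separate arguments.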
Also, you may combine this with the -A and -R parameters to filter your results.
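One way to combine the two (a sketch with the same placeholder host; here the braces only generate the directory URLs, and -A filters the file names during the recursive fetch):

wget -r -l1 -np -A "*.rpm" www.download.example.com/dir/{version,old}/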
Although the above solution kind of works, it fails when you want to download only certain directories, but not all of them. For example, if you have:
http://site.io/like/
http://site.io/like2/
http://site.io/nolike/
Instead, put the directory names you want in a text file, e.g. dirs.txt:
like/
like2/
Then use wget with the command options -i dirs.txt -B <base-URL>, like so:
wget -nH -nc -np -r -e robots=off -R "index.html*" -i dirs.txt -B http://site.io/
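A fuller sketch of the same recipe, including creating the list file (the directory names are the example ones from above):

# build the list of directories to fetch
printf '%s\n' 'like/' 'like2/' > dirs.txt
# -i dirs.txt reads the relative URLs from the file and -B resolves them against the base URL;
# -r -np recurses into each directory without ascending above it, -nH -nc avoids a
# host-named directory and re-downloads, and -R "index.html*" drops the directory-index pages.
wget -nH -nc -np -r -e robots=off -R "index.html*" -i dirs.txt -B http://site.io/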
This is because I don't think you can use directories in the -A and -R lists (though I'm not certain).