 

Pipe output of cat to cURL to download a list of files

Tags:

unix

curl

I have a list of URLs in a file called urls.txt. Each line contains one URL. I want to download all of the files at once using cURL. I can't seem to get the right one-liner down.

I tried:

$ cat urls.txt | xargs -0 curl -O 

But that only gives me the last file in the list.

Finch asked Mar 26 '12

People also ask

How do you download multiple files on curl?

Instead of downloading multiple files one by one, you can download all of them simultaneously by running a single command. To download multiple files at the same time, use -O followed by the URL of each file that you wish to download; a single curl command will then fetch all of them.
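For example, assuming two hypothetical URLs, the one-liner might look like this (each -O pairs with the URL that follows it):

$ curl -O https://example.com/file1.zip -O https://example.com/file2.zip 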

Can I download files using curl?

curl lets you quickly download files from a remote system. curl supports many different protocols and can also make more complex web requests, including interacting with remote APIs to send and receive data.

How do I write a curl output to a file?

For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can use pbcopy (on macOS) by piping the cURL command into it. Example: curl https://www.google.com/robots.txt | pbcopy . This will copy all the content from the given URL to your clipboard.
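If you do want the output written to a file rather than the clipboard, the -o option takes a filename of your choosing; here is a minimal sketch using the same URL:

$ curl -o robots.txt https://www.google.com/robots.txt 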

How do I download files using curl Windows?

To download a file with curl, use the --output or -o command-line option. This option allows you to save the downloaded file to a local drive under the specified name. If you want the downloaded file to be saved under the same name as in the URL, use the --remote-name or -O command-line option.
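As a quick illustration with a hypothetical URL, the two variants look like this:

$ curl -o local-name.zip https://example.com/archive.zip 
$ curl -O https://example.com/archive.zip   # saved as archive.zip 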


2 Answers

This works for me:

$ xargs -n 1 curl -O < urls.txt 

I'm on FreeBSD. Your xargs may work differently.

Note that this runs sequential curls, which you may view as unnecessarily heavy. If you'd like to save some of that overhead, the following may work in bash:

$ mapfile -t urls < urls.txt 
$ curl ${urls[@]/#/-O } 

This saves your URL list to an array, then expands the array with options to curl to cause the targets to be downloaded. The curl command can take multiple URLs and fetch all of them, recycling the existing connection (HTTP/1.1), but it needs the -O option before each one in order to download and save each target. Note that some characters within URLs may need to be escaped to avoid interacting with your shell.
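If you want to see what that expansion produces before running it, here is a small sketch with two hypothetical URLs; echo is used only to print the resulting command line:

$ urls=(https://example.com/a.txt https://example.com/b.txt) 
$ echo curl ${urls[@]/#/-O } 
curl -O https://example.com/a.txt -O https://example.com/b.txt 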

Or if you are using a POSIX shell rather than bash:

$ curl $(printf ' -O %s' $(cat urls.txt)) 

This relies on printf's behaviour of repeating the format pattern to exhaust the list of data arguments; not all stand-alone printfs will do this.
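For example (with hypothetical URLs), the single format string is reused for every argument:

$ printf ' -O %s' https://example.com/a.txt https://example.com/b.txt 
 -O https://example.com/a.txt -O https://example.com/b.txt 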

Note that this non-xargs method also may bump up against system limits for very large lists of URLs. Research ARG_MAX and MAX_ARG_STRLEN if this is a concern.
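On most systems you can check the argument-length limit with getconf; the value it prints (in bytes) varies by platform:

$ getconf ARG_MAX 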

ghoti answered Sep 21 '22


A very simple solution would be the following: If you have a file 'file.txt' like

url="http://www.google.de" url="http://www.yahoo.de" url="http://www.bing.de" 

Then you can use curl and simply do

curl -K file.txt 

And curl will fetch all URLs contained in your file.txt!

So if you have control over your input-file-format, maybe this is the simplest solution for you!
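One caveat, in case it matters for your use: with only url lines in the config file, curl writes each response to stdout rather than saving it to disk. Assuming your curl accepts long option names in config files (it normally does), you can pair each url line with a remote-name line to get the same effect as -O on the command line:

url="http://www.google.de" 
remote-name 
url="http://www.yahoo.de" 
remote-name 
url="http://www.bing.de" 
remote-name 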

Dirk answered Sep 20 '22