I have a text document that contains a bunch of URLs in this format:
URL = "sitehere.com"
What I'm looking to do is to run curl -K myfile.txt and save the response cURL returns to a file.
How can I do this?
For those of you who want to copy the cURL output to the clipboard instead of writing it to a file, you can pipe to pbcopy (on macOS) after the cURL command. Example: curl https://www.google.com/robots.txt | pbcopy. This will copy all the content from the given URL to your clipboard.
We can save the result of the curl command to a file by using the -o/-O options. With -o, the page gettext.html will be saved in a file named 'mygettext.html'.
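For instance (a sketch; the GNU gettext manual URL here is illustrative, not from the original post):

curl -o mygettext.html http://www.gnu.org/software/gettext/manual/gettext.html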
Consequently, the file will be saved in the current working directory. If you want the file saved in a different directory, make sure you change the current working directory before you invoke curl with the -O, --remote-name flag! There is no URL decoding done on the file name.
By default, curl doesn't print the response headers. It only prints the response body. To print the response headers, too, use the -i command line argument.
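A minimal illustration (example.com is a placeholder):

curl -i https://www.example.com/

This prints the status line and response headers first, followed by the response body.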
curl -K myconfig.txt -o output.txt
Writes the first output received to the file you specify (overwriting it if an old one exists).
curl -K myconfig.txt >> output.txt
Appends all output you receive to the specified file.
Note: The -K myconfig.txt part is optional; -o and >> work with any curl invocation.
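For reference, here is a sketch of what a config file like myconfig.txt might contain (the URLs are placeholders). Each line of a -K config file holds one command line option, and url = "..." names a target to fetch:

url = "https://www.example.com/"
url = "https://www.example.org/robots.txt"

Running curl -K myconfig.txt >> output.txt then appends both responses to output.txt.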
For a single file you can use -O instead of -o filename to use the last segment of the URL path as the filename. Example:
curl http://example.com/folder/big-file.iso -O
will save the result to a new file named big-file.iso in the current folder. In this way it works similarly to wget but allows you to specify other curl options that are not available when using wget.
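As a sketch of that flexibility (the URL and rate limit are illustrative), -O combines freely with other curl options, for example following redirects with -L and limiting bandwidth with --limit-rate:

curl -L --limit-rate 1M -O http://example.com/folder/big-file.iso

This downloads big-file.iso while following redirects and capping the transfer speed at 1 megabyte per second.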