I am trying to download the file from the following link:
http://www.ncbi.nlm.nih.gov/sviewer/viewer.cgi?tool=portal&sendto=on&log$=seqview&db=nuccore&dopt=gilist&sort=&query_key=1&qty=12654729&filter=all
When I paste the above link into the address bar of a web browser (Chrome), it lets me save the file as "sequence.gi.txt".
But when I try that in a terminal, I get the following error:
curl -o test.txt http://www.ncbi.nlm.nih.gov/sviewer/viewer.cgi?tool=portal&sendto=on&log$=seqview&db=nuccore&dopt=gilist&sort=&query_key=1&qty=12654729&filter=all
[1] 30036
[2] 30037
[3] 30038
[4] 30039
[5] 30040
[6] 30041
[7] 30042
[8] 30043
-bash: log$=seqview: command not found
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101  7297    0  7297    0     0  59633      0 --:--:-- --:--:-- --:--:-- 79315
[1] Done curl -L -o test.txt http://www.ncbi.nlm.nih.gov/sviewer/viewer.cgi?tool=portal
[2] Done sendto=on
[3] Exit 127 log$=seqview
[4] Done db=nuccore
[5] Done dopt=gilist
[6] Done sort=
[7]- Done query_key=1
[8]+ Done qty=12654729
How do I download the file in the command line?
The & in the URL tells bash that everything before it is a command to run in the background. Everything after each & is likewise interpreted as a new command to run in the background, which is why you see a bunch of bogus jobs get started when you run your command. Put the URL in single quotes ('http://...') so that bash does not treat the $ and & characters as special:
curl -o test.txt 'http://www.ncbi.nlm.nih.gov/sviewer/viewer.cgi?tool=portal&sendto=on&log$=seqview&db=nuccore&dopt=gilist&sort=&query_key=1&qty=12654729&filter=all'
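If you prefer not to quote, you can instead escape each special character with a backslash. Here is a minimal sketch of the same download with the ?, &, and $ characters escaped individually (the ? is escaped too, since it is a glob character that could match a file name in the current directory); this works but is easier to get wrong than quoting:
curl -o test.txt http://www.ncbi.nlm.nih.gov/sviewer/viewer.cgi\?tool=portal\&sendto=on\&log\$=seqview\&db=nuccore\&dopt=gilist\&sort=\&query_key=1\&qty=12654729\&filter=all
Note that double quotes would also protect the & characters, but $ remains special inside double quotes, so single quotes are the safer habit for URLs that contain $.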