How can I parse CSV files on the Linux command line?
To do things like:
csvparse -c 2,5,6 filename
to extract fields from columns 2, 5 and 6 from all rows.
It should be able to handle the CSV file format described in RFC 4180 (https://www.rfc-editor.org/rfc/rfc4180), which means quoting fields and escaping inner quotes as appropriate. For example, given a row with 3 fields:
field1,"field, number ""2"", has inner quotes and a comma",field3
then if I request field 2 for the row above I should get:
field, number "2", has inner quotes and a comma
I appreciate that there are numerous solutions to this problem (Perl, Awk, etc.), but I would like a native bash command-line tool that does not require me to invoke some other scripting environment or write any additional code(!).
Each line of the file is a data record. You can use a while shell loop to read a comma-separated CSV file: setting the IFS variable to , (comma) makes the read command split each line it reads into the individual fields. Note that this does not handle RFC 4180 quoting, as the sketch below shows.
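A minimal sketch of that loop, assuming a file named data.csv with (at least) three columns; because it splits on every comma, it will break quoted fields like the example row in the question:
# naive approach: IFS=, splits on every comma, quoted fields are not respected
while IFS=, read -r col1 col2 col3
do
    echo "second column: $col2"
done < data.csv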
To split a large CSV (Comma-Separated Values) file into smaller files on Linux/Ubuntu, use the split command with the required arguments: split -d -l 10000 source.csv tempfile.
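For example, assuming a file named source.csv, the trailing dot in the prefix gives output names tempfile.00, tempfile.01, and so on:
# split into 10000-line chunks with numeric suffixes
split -d -l 10000 source.csv tempfile.
ls tempfile.*
Keep in mind that split counts physical lines, so a quoted field containing an embedded newline could end up cut across two output files.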
So, just in case you are using a text-only browser or need to cut and paste, Lukas's answer is to go to your command line, navigate to the right directory and type the command: cut -d, -f3 report_source.csv | sed 's/"//g' > report_links.txt
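Note, however, that cut splits on every comma, so it cannot cope with the quoting the question asks for; using the example row from above:
# the commas inside the quoted field are treated as delimiters
echo 'field1,"field, number ""2"", has inner quotes and a comma",field3' | cut -d, -f2
# prints: "field   (not the full second field)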
csvtool is really good. Available in Debian / Ubuntu (apt-get install csvtool). Example:
csvtool namedcol Account,Cost input.csv > output.csv
See the CSVTool manual page for usage tips.
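For the column-number request in the original question, csvtool's col mode should do the job (filename.csv is just a placeholder here); it handles the RFC 4180 quoting from the example row, and its output is itself valid CSV:
# extract columns 2, 5 and 6; quoted fields are parsed correctly
csvtool col 2,5,6 filename.csv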