Given a file (e.g. a stores.dat file) with data like this:
sid|storeNo|latitude|longitude
2|1|-28.03720000|153.42921670
9|2|-33.85090000|151.03274200
What command would output the number of columns?
In the example above it would be 4 (the number of pipe characters in the first line, plus 1).
I was thinking something like:
awk '{ FS = "|" } ; { print NF}' stores.dat
but it prints a count for every line instead of just the first, and for the first line it prints 1 instead of 4.
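(Aside: the assignment FS = "|" inside the main rule takes effect only after the first line has already been split with the default whitespace separator, which is why the header comes back as a single field. A minimal fix is to set FS in a BEGIN block, before any input is read, and exit after one count:

$ awk 'BEGIN { FS = "|" } { print NF; exit }' stores.dat
4
)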
If the column count can vary per line, print NF for every line and sort the counts: head -n 1 gives the lowest column count, tail -n 1 the highest (see the sketch below). Rows: cat file | wc -l, or wc -l < file for the UUOC crowd. Alternatively, to count columns, count the separators between columns and add 1.
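A sketch of that idea against the sample stores.dat, printing every line's field count and sorting the counts:

$ awk -F'|' '{ print NF }' stores.dat | sort -n | head -n 1   # lowest column count
$ awk -F'|' '{ print NF }' stores.dat | sort -n | tail -n 1   # highest column count
$ wc -l < stores.dat                                          # number of rows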
awk with the NF (number of fields) variable: NF is a built-in awk variable that holds the number of fields in the current input line.
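For example, printing each line number alongside its field count for the sample file (every line here has 4 fields):

$ awk -F'|' '{ print NR ": " NF }' stores.dat
1: 4
2: 4
3: 4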
To count the number of records (rows) across several CSV files, wc can be used in conjunction with pipes. Suppose there are five CSV files and the requirement is to find the total number of records in all five: this can be achieved by piping the output of cat to wc, as sketched below.
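A sketch with five hypothetical files named data1.csv through data5.csv (the names are placeholders; note every line is counted, so if each file has a header row, subtract one per file):

$ cat data1.csv data2.csv data3.csv data4.csv data5.csv | wc -l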
awk -F'|' '{print NF; exit}' stores.dat
Just quit right after the first line.
This is a workaround (for me: I don't use awk very often):
Display the first row of the file, replace all pipes with newlines, and then count the lines:
$ head -1 stores.dat | tr '|' '\n' | wc -l
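Run against the sample file this prints 4. A close variant of the separator-counting idea from above: grep -o prints each matched pipe on its own line, so wc -l counts the pipes, and adding 1 gives the column count:

$ echo $(( $(head -1 stores.dat | grep -o '|' | wc -l) + 1 ))
4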