I want to combine these two commands and invoke them as a single command.
In the first command I store the 4th column of x.CSV (separator: comma) into z.csv:
awk -F, '{print $4}' x.CSV > z.csv
In the second command, I find the unique first-column values of z.csv (separator: space):
awk -F' ' '{print $1}' z.csv | sort | uniq
I want to combine these two commands into a single command. How can I do that?
Assuming that the content of z.csv is actually wanted, rather than just an artefact of the way you're currently implementing your program, then you can use:
awk -F, '{ print $4 > "z.csv"
split($4, f, " ")
f4[f[1]] = 1
}
END { for (i in f4) print i }' x.CSV
The split function breaks field 4 on spaces, and the (associative) array f4 records each key value. The loop at the end prints out the distinct values, unsorted. If you need them sorted, you can either use GNU awk's built-in sort functions, write your own sort in awk (if your awk has no built-in sort functions), or pipe the output to sort.
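For example, piping to sort might look like this (a sketch that simply appends sort to the command above):
awk -F, '{
    print $4 > "z.csv"
    split($4, f, " ")
    f4[f[1]] = 1
}
END { for (i in f4) print i }' x.CSV | sort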
With GNU awk, you can replace the END block with:
END { n = asorti(f4); for (i = 1; i <= n; i++) print f4[i] }
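Putting it together, a sketch of the full GNU awk version (same x.CSV input, still writing z.csv as before) might be:
awk -F, '{
    print $4 > "z.csv"
    split($4, f, " ")
    f4[f[1]] = 1
}
END { n = asorti(f4); for (i = 1; i <= n; i++) print f4[i] }' x.CSV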
If you don't want the z.csv file, then (a) you could have used a pipe in the first place, and (b) you can simply remove the print $4 > "z.csv" line.
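For instance, the version without the intermediate file might look like this (a sketch, output still unsorted):
awk -F, '{ split($4, f, " "); f4[f[1]] = 1 }
END { for (i in f4) print i }' x.CSV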
Pipe the output of the first awk to the second awk:
awk -F, '{print $4}' x.CSV | awk -F' ' '{print $1}' | sort | uniq
or, as Avinash Raj suggested,
awk -F, '{print $4}' x.CSV | awk -F' ' '{print $1}' | sort -u