I have a file with the following structure:
1486113768 3656
1486113768 6280
1486113769 530912
1486113769 5629824
1486113770 5122176
1486113772 3565920
1486113772 530912
1486113773 9229920
1486113774 4020960
1486113774 4547928
My goal is to get rid of the duplicate values in the first column, sum the corresponding values in the second column, and replace each group of rows with a single row holding the new sum. A working output, from the input above, would be:
1486113768 9936 # 3656 + 6280
1486113769 6160736 # 530912 + 5629824
1486113770 5122176 # ...
1486113772 4096832
1486113773 9229920
1486113774 8568888
I know cut and uniq: so far I have managed to find the duplicate values in the first column with:
cut -d " " -f 1 file.log | uniq -d
1486113768
1486113769
1486113772
1486113774
Is there an "awk way" to achieve my goal? I know it is a very powerful and terse tool: I used it earlier with
awk '{print $2 " " $3 >> $1".log"}' log.txt
to scan all the rows in log.txt and create a .log file named after $1, filled with the $2 and $3 values, all in one bash line (to hell with the read loop!). Is there a way to find the first-column duplicates, sum their second-column values, and rewrite the rows so that the duplicates are removed and the resulting sum of the second column is printed?
Use awk as below:
awk '{ seen[$1] += $2 } END { for (i in seen) print i, seen[i] }' file1
1486113768 9936
1486113769 6160736
1486113770 5122176
1486113772 4096832
1486113773 9229920
1486113774 8568888
{ seen[$1] += $2 } builds an associative array (hash map) keyed by $1: for every input line, the value of $2 is added to the running total stored under that key, so all lines sharing the same first column are collapsed into a single sum. The END block then prints each unique key together with its accumulated total.
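Note that for (i in seen) does not guarantee any particular output order, so the keys may not come out sorted as in the sample above. A minimal sketch (assuming GNU sort is available) that pipes the result through a numeric sort on the first column:
awk '{ seen[$1] += $2 } END { for (i in seen) print i, seen[i] }' file1 | sort -n
With GNU awk 4.0 or later you could instead set PROCINFO["sorted_in"] = "@ind_num_asc" in a BEGIN block so that the for loop itself traverses the keys in ascending numeric order.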