 

Finding a uniq -c substitute for big files

I have a large file (50 GB) and I would like to count the number of occurrences of the different lines in it. Normally I'd use

sort bigfile | uniq -c

but the file is large enough that sorting takes a prohibitive amount of time and memory. I could do

grep -cFx 'one possible line' bigfile

for each unique line in the file, but that means a separate pass over the file for every possible line, which (although much more memory-friendly) takes even longer than the original approach.

Any ideas?


A related question asks about a way to find unique lines in a big file, but I'm looking for a way to count the number of instances of each -- I already know what the possible lines are.

Asked Sep 02 '15 by Charles


2 Answers

Use awk

awk '{c[$0]++} END {for (line in c) print c[line], line}' bigfile.txt

This is O(n) in time and O(number of unique lines) in space.
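For illustration, here is how this behaves on a small hypothetical sample file (call it sample.txt) containing the lines a, a, b, a, c. The order of the output is arbitrary, since awk's for (line in c) loop iterates over the hash in no particular order:

$ awk '{c[$0]++} END {for (line in c) print c[line], line}' sample.txt
3 a
1 b
1 c

If a deterministic order is needed, piping the output through sort -rn lists the most frequent lines first.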

Answered Oct 05 '22 by Barmar


Here is a solution using jq 1.5. It is essentially the same as the awk solution, both in approach and performance characteristics, but the output is a JSON object representing the hash. (The program can be trivially modified to produce output in an alternative format.)

Invocation:

$ jq -nR 'reduce inputs as $line ({}; .[$line] += 1)' bigfile.txt

If bigfile.txt consisted of these lines:

a
a
b
a
c

then the output would be:

{
  "a": 3,
  "b": 1,
  "c": 1
}
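
For example, one possible such modification produces uniq -c-style count/line output by unwrapping the object with to_entries and using raw output (-r):

$ jq -nRr 'reduce inputs as $line ({}; .[$line] += 1) | to_entries[] | "\(.value) \(.key)"' bigfile.txt
3 a
1 b
1 c
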
Answered Oct 05 '22 by peak