 

Why does "uniq" count identical words as different?

I want to calculate the frequency of the words in a file, with one word per line. The file is really big (about 300k lines in this example), so I thought that might be the problem.

I do this command:

cat .temp_occ | uniq -c | sort -k1,1nr -k2 > distribution.txt

and the problem is that it counts identical words as if they were different.

For example, the first entries are:

306 continua 
278 apertura 
211 eventi 
189 murah 
182 giochi 
167 giochi 

with giochi repeated twice, as you can see.

At the bottom of the file it becomes even worse and it looks like this:

  1 win 
  1 win 
  1 win 
  1 win 
  1 win 
  1 win 
  1 win 
  1 win 
  1 win 
  1 winchester 
  1 wind 
  1 wind 

for all the words.

What am I doing wrong?

asked Aug 08 '12 by Epi

3 Answers

Try to sort first:

cat .temp_occ | sort | uniq -c | sort -k1,1nr -k2 > distribution.txt
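
As a side note, the leading cat isn't strictly needed; sort can read the file directly. A roughly equivalent pipeline, assuming the word list is still in .temp_occ:

sort .temp_occ | uniq -c | sort -k1,1nr -k2 > distribution.txt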
answered Oct 26 '22 by kofemann


Or use "sort -u", which also eliminates duplicates.
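
Note that sort -u only removes duplicates; it does not produce counts. If you just need the list of distinct words, a minimal sketch (the output filename is only illustrative) would be:

sort -u .temp_occ > unique_words.txt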

answered Oct 26 '22 by rollstuhlfahrer


The size of the file has nothing to do with what you're seeing. From the man page of uniq(1):

Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without 'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'.

So running uniq on

a
b
a

will return:

a
b
a
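
Sorting first puts identical lines next to each other, so uniq -c can merge and count them. A quick demonstration you can run in any POSIX shell (the sample words are made up):

printf 'a\nb\na\n' | uniq -c
      1 a
      1 b
      1 a

printf 'a\nb\na\n' | sort | uniq -c
      2 a
      1 b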
answered Oct 26 '22 by DJohnson