I have 2 files, the first contains the following:
...
John Allen Smith II 16 555-555-5555 10/24/2010
John Allen Smith II 3 555-555-5555 10/24/2010
John Allen Smith II 17 555-555-5555 10/24/2010
John Doe 16 555-555-5555 10/24/2010
Jane Smith 16 555-555-5555 9/16/2010
Jane Smith 00 555-555-5555 10/24/2010
...
and the second file is a list of names so...
...
John Allen Smith II
John Doe
Jane Smith
...
Is it possible to use awk (or another bash command) to print the lines in the first file that match any name in the second file? (The names can repeat in the first file.)
Bonus: is there an easy way to remove those repeated/duplicate lines from the first file?
Thanks very much,
Tomek
An awk approach:
#! /bin/bash
# Read the name list first, then scan the data file and print each
# matching line only once, in its original order.
awk 'FNR==NR { names[$0]; next }          # file2: collect the names
     {
       for (n in names)
         if (index($0, n) == 1 &&         # line starts with one of the names...
             !seen[$0]++) {               # ...and is not an exact duplicate
           print
           break
         }
     }' file2 file1
Expanding on codaddict's answer:
grep -f file2 file1 | sort | uniq
This will remove lines that are exactly the same, but the side effect (which may be unwanted) is that your data file will now be sorted.
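If the sorting is a problem, a common alternative (a sketch, assuming the same file1/file2 names as above) is to let awk drop exact duplicates while keeping the original order, since awk prints a line only the first time it sees it:

grep -f file2 file1 | awk '!seen[$0]++'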
The sort | uniq approach also requires the lines to be exactly the same, which is not the case in your example data: the names are the same, but the data after those names differs. uniq can skip a leading field or character count, but that won't work on your data because your names have variable length and a variable number of fields. If you know your data fields are always the last 3 fields on a line, then you can do this:
grep -f file2 file1 | sort | rev | uniq -f 3 | rev
Your output will contain only one line per name, but which one? The lexicographically lowest, because the input was sorted (sort is needed for uniq to work right, since uniq only collapses adjacent duplicates). If you don't want to sort it first, or need to be careful about which of the lines are dropped, then an awk, Perl, Ruby, or Python solution using associative arrays will probably work best, as in the sketch below.
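A minimal awk sketch of that idea, assuming the file1/file2 names from the question: it reads the names into an array, then keeps only the first line seen for each name, preserving file1's original order.

awk 'FNR==NR { names[$0]; next }          # file2: remember each name
     {
       for (n in names)
         if (index($0, n) == 1 &&         # line starts with one of the names
             !kept[n]++) {                # keep only the first line per name
           print
           break
         }
     }' file2 file1

Change !kept[n]++ to !kept[$0]++ if you want one copy of each distinct line rather than one line per name.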