Using grep and awk together

Tags: grep, bash, awk

I have a file (A.txt) with 4 columns of numbers and another file with 3 columns of numbers (B.txt). I need to solve the following problems:

  1. Find all lines in A.txt whose 3rd column contains a number that appears anywhere in the 3rd column of B.txt.

  2. Assume that I have many files like A.txt in a directory and I need to run this for every file in that directory.

How do I do this?

asked Apr 04 '14 by duli


2 Answers

You should never see someone using grep and awk together because whatever grep can do, you can also do in awk:

Using Grep and Awk:

grep "foo" file.txt | awk '{print $1}'

Using Only Awk:

awk '/foo/ {print $1}' file.txt

I had to get that off my chest. Now to your problem...

Awk is a programming language built around a single loop over all the lines in a set of files, and that isn't quite what you want here. Instead, you want to treat B.txt as a special file and loop through your other files. That normally calls for something like Python or Perl. (Older versions of Bash, before 4.0, don't have associative arrays, so they won't work for this.) However, it looks like slitvinov found an awk answer below.
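
For what it's worth, here is a rough sketch of how a newer Bash (4.0 or later, which does have associative arrays) might handle A.txt against B.txt; treat it as an illustration of that point rather than a polished script:

#!/usr/bin/env bash
# Sketch only: associative arrays need Bash 4.0 or later.
declare -A seen

# Remember every value that appears in column 3 of B.txt
while read -r _ _ third _; do
    seen[$third]=1
done < B.txt

# Print the lines of A.txt whose column 3 was seen above
while read -r line; do
    set -- $line                              # split into positional fields
    [[ -n ${seen[$3]+x} ]] && printf '%s\n' "$line"
done < A.txt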

Here's a Perl solution anyway:

use strict;
use warnings;
use feature qw(say);
use autodie;

my $b_file = shift;
open my $b_fh, "<", $b_file;

#
# This tracks the values in "B"
#
my %valid_lines;
while ( my $line = <$b_fh> ) {
    chomp $line;
    my @array = split /\s+/, $line;
    $valid_lines{$array[2]} = 1;   # Third column
}
close $b_fh;

#
# This handles the rest of the files
#
while ( my $line = <> ) {  # The rest of the files
    chomp $line;
    my @array = split /\s+/, $line;
    next unless exists $valid_lines{$array[2]};  # Skip unless field #3 was in B.txt too
    say $line;
}
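
Assuming you save the script as something like filter.pl (the name is just for illustration), you would pass B.txt as the first argument followed by the files to check, since the script shifts B.txt off the argument list and reads everything else through <>:

perl filter.pl B.txt A*.txt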
answered Nov 02 '22 by David W.

Here is an example. Create the following files and run:

awk -f c.awk B.txt A*.txt 

c.awk

# While reading the first file (B.txt), record every value in column 3
FNR==NR {
    s[$3]
    next
}

# For the remaining files (A*.txt), print lines whose column 3 was seen in B.txt
$3 in s {
    print FILENAME, $0
}

A1.txt

1 2 3
1 2 6
1 2 5

A2.txt

1 2 3
1 2 6
1 2 5

B.txt

1 2 3
1 2 5
2 1 8

The output should be:

A1.txt 1 2 3
A1.txt 1 2 5
A2.txt 1 2 3
A2.txt 1 2 5
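
If you would rather get a separate result for each file in the directory (part 2 of the question), one option is a small shell loop around the same awk script; the .matched suffix below is just an arbitrary example:

for f in A*.txt; do
    awk -f c.awk B.txt "$f" > "$f.matched"
done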
answered Nov 02 '22 by slitvinov