Remove redundant strings without looping

Is there a way to remove both duplicates and redundant substrings from a list, using shell tools? By "redundant", I mean a string that is contained within another string, so "foo" is redundant with "foobar" and "barfoo". For example, take this list:

abcd
abc
abd
abcd
bcd

and return:

abcd
abd

uniq, sort -u and awk '!seen[$0]++' remove duplicates effectively, but not redundant strings (see: How to delete duplicate lines in a file without sorting it in Unix?; Remove duplicate lines without sorting).

I can loop through each line with grep (sketched below), but this is quite slow for large files. (I have about 10^8 lines to process.) There's an approach using a loop in Python here: Remove redundant strings based on partial strings, and in Bash here: How to check if a string contains a substring in Bash, but I'm trying to avoid loops. Edit: I mean nested loops here, thanks for the clarification @shellter
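
For reference, the kind of looped approach I'm trying to avoid looks roughly like this (a sketch rather than my exact script; "strings.txt" is a placeholder filename):

# For every string, keep it only if no *other* line contains it as a substring,
# then drop exact duplicates. One full grep pass per input line, hence very slow.
while IFS= read -r s; do
    if ! grep -F -- "$s" strings.txt | grep -qvxF -- "$s"; then
        printf '%s\n' "$s"
    fi
done < strings.txt | awk '!seen[$0]++'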

Is there a way to use awk's match() function with an array index? That approach would build the array progressively, so it never has to search the whole file and should be faster for large files. Or am I missing some other simple solution?

An ideal solution would allow matching of a specified column, as for the methods above.

EDIT

Both of the answers below work, thanks very much for the help. I tested both approaches on the same real input file, which has 430,000 lines, of which 417,000 are non-redundant. For reference, my original looped grep approach took 7h30m on this file.
Update:
James Brown's original solution took 3h15m and Ed Morton's took 8h59m. On a smaller dataset, James's updated version was 7m versus the original's 20m. Thank you both, this is really helpful.

The strings I'm working with are around 110 characters long, with typically hundreds of thousands of lines per file. The way in which these strings (which are antibody protein sequences) are created can lead to characters being lost from one or both ends of a string. Hence, "bcd" is likely to be a fragment of "abcde".

asked Oct 08 '20 by garethmorgan


2 Answers

An awk script that on the first pass over the file extracts and stores all strings and their substrings in two arrays, strs and subs, and then filters on the second pass:

$ awk '
NR==FNR {                                    # first run 
    if(($0 in strs)||($0 in subs))           # process only unseen strings
        next
    len=length()-1                           # initial substring length
    strs[$0]                                 # hash the complete strings
    while(len>=1) {                          
        for(i=1;i+len-1<=length();i++) {     # get all substrings of current len
            asub=substr($0,i,len)            # "sub" is a reserved name, hence asub
            if(asub in strs)                 # if the substring is in strs
                delete strs[asub]            # we do not want it there
            subs[asub]                       # hash all substrings too
        }
        len--                                
    }
    next
}
($0 in strs) && ++strs[$0]==1                # 2nd run: print surviving strings once
' file file

Output:

abcd
abd

I tested the script with about 30 M records of 1-20 character ACGT strings. The script ran in 3m27s and used about 20 % of my 16 GB. With strings of length 1-100 it OOM'd in a few minutes (I tried again with about 400 k records of length 50-100; that used about 200 GB and ran for about an hour). 20 M records of 1-30 chars ran in 7m10s and used 80 % of the memory.

So if your data records are short or you have unlimited memory, my solution is fast; otherwise it will run out of memory and crash.

Edit:

Another version that tries to preserve memory. On the first pass it finds the min and max string lengths, and on the second pass it does not store substrings shorter than the global min. For about 400 k records of length 50-100 it used around 40 GB and ran in 7 mins. My random data didn't have any redundancy, so input==output, but it did remove redundancy from other datasets (2 M records of 1-20 char strings):

$ awk '
BEGIN {
    while((getline < ARGV[1])>0)            # 1st run, check min and max lengths
        if(length()<min||min=="")           # TODO: test for length()>0, too
            min=length()
        else if(length()>max||max=="")
            max=length()
#       print min,max > "/dev/stderr"       # debug
    close(ARGV[1])                          # runs once the 1st-run loop is done

    while((getline < ARGV[1])>0) {          # 2nd run, hash strings and substrings
#       if(++nr%10000==0)                   # debug
#           print nr > "/dev/stderr"        # debug
        if(($0 in strs)||($0 in subs))
            continue
        len=length()-1
        strs[$0]
        while(len>=min) {
            for(i=1;i+len-1<=length();i++) {
                asub=substr($0,i,len)
                if(asub in strs)
                    delete strs[asub]
                subs[asub]
            }
            len--
        }
    }
    close(ARGV[1])

    while((getline < ARGV[1])>0)             # 3rd run, output 
        if(($0 in strs)&&!strs[$0]++)
            print
}' file
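
To gauge up front whether this version (or the first one) is likely to fit in memory, a quick check of the length distribution of your file may help (a generic sketch, not part of the original script):

awk '{ n++; s += length()
       if (min == "" || length() < min) min = length()
       if (length() > max) max = length() }
     END { if (n) printf "lines=%d min=%d max=%d mean=%.1f\n", n, min, max, s/n }' file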
answered Sep 27 '22 by James Brown


$ awk '{print length($0), $0}' file |
    sort -k1,1rn -k2 -u |
    awk '!index(str,$2){str = str FS $2; print $2}'
abcd
abd

The above assumes the set of unique values will fit in memory.
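
As I read the pipeline: the first awk prefixes each line with its length, sort puts the longest strings first and drops exact duplicates, and the final awk keeps a string only if it is not already contained in one of the strings kept so far.

The question also asks about matching on a specified column. A hedged sketch of one way to adapt the same idea, assuming a tab-separated file whose key column (column 2 here) contains no whitespace; note that lines sharing an identical key collapse to a single line:

awk -F'\t' -v c=2 '{print length($c), $c, $0}' file |
    sort -k1,1rn -k2,2 -u |
    awk '!index(keys, $2) {
        keys = keys FS $2                  # remember this key
        line = $0
        sub(/^[^ ]+ [^ ]+ /, "", line)     # strip the "length key " prefix
        print line                         # emit the original, whole line
    }'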

answered Sep 27 '22 by Ed Morton