I have two files. The first file contains a list of row IDs of tuples in a database table, and the second file contains SQL queries with these row IDs in the WHERE clause.
For example:
File 1
1610657303 1610658464 1610659169 1610668135 1610668350 1610670407 1610671066
File 2
update TABLE_X set ATTRIBUTE_A=87 where ri=1610668350;
update TABLE_X set ATTRIBUTE_A=87 where ri=1610672154;
update TABLE_X set ATTRIBUTE_A=87 where ri=1610668135;
update TABLE_X set ATTRIBUTE_A=87 where ri=1610672153;
I have to read File 1, search File 2 for all the SQL commands that match the row IDs from File 1, and dump those queries into a third file.
File 1 has 100,000 entries, and File 2 contains 10 times as many, i.e. 1,000,000.
I used grep -f File_1 File_2 > File_3, but this is extremely slow: it processes only about 1,000 entries per hour.
Is there any faster way to do this?
You don't need regexps here, so tell grep to treat the patterns as fixed strings with -F: grep -F -f File_1 File_2 > File_3. Without -F, grep interprets every line of File_1 as a regular expression and tries each of the 100,000 patterns against every line of File_2, which is where the slowness comes from; with -F, it can match all the patterns as literal strings in a single pass.
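A minimal sketch, assuming GNU grep and keeping the file names from the question; the -w flag is an optional safeguard in case one row ID happens to be a prefix of another (e.g. 1610668350 inside a longer number):

# Match each line of File_1 as a literal string, not a regex
grep -F -f File_1 File_2 > File_3

# Optionally require whole-word matches so a shorter ID
# cannot match inside a longer one
grep -F -w -f File_1 File_2 > File_3

For ID lists of this size, the difference between regex matching and fixed-string matching is typically several orders of magnitude, so this alone should bring the runtime down from hours to seconds or minutes.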