I have a directory with over 100,000 files. I want to know if the string "str1"
exists as part of the content of any of these files.
The command:
grep -l 'str1' *
takes too long as it reads all of the files.
How can I ask grep
to stop reading any further files if it finds a match? Any one-liner?
Note: I have tried grep -l 'str1' * | head
but the command takes just as much time as the previous one.
Passing 100,000 file names as command arguments is going to cause a problem: it will probably exceed the maximum length of a shell command line.
But you don't have to name all the files: use the recursive option and give only the name of the directory the files are in (which is .
if you want to search the files in the current directory):
grep -l -r 'str1' . | head -1
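This pipeline can stop early because head -1 exits after the first line of output; the next time grep writes to the broken pipe it receives SIGPIPE and terminates, so (output buffering aside) it does not have to read all of the remaining files. If you would rather not descend into subdirectories, a rough sketch of the same idea with find and xargs (assuming GNU find and xargs, and the same pattern and directory as above) is:
# list only regular files directly in the current directory, batch them onto
# grep without hitting the argument-length limit, and let head stop the
# pipeline soon after the first matching file name appears
find . -maxdepth 1 -type f -print0 | xargs -0 grep -l 'str1' | head -1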
Use grep -m 1
so that grep stops reading a file as soon as it finds the first match in it. This is very efficient when the individual files are large.
grep -m 1 str1 * /dev/null | head -1
If * expands to just one file, the extra /dev/null argument ensures that grep still prints the file name in its output (with a single file operand it would otherwise print only the matching line).
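As a small, self-contained illustration of the /dev/null trick (sample.txt is just a made-up file name):
printf 'str1 is in here\n' > sample.txt
grep -m 1 str1 sample.txt              # prints only the matching line
grep -m 1 str1 sample.txt /dev/null    # prints sample.txt:str1 is in here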
If you want to stop after finding the first match in any file:
for file in *; do
    if grep -q -m 1 str1 "$file"; then
        echo "$file"
        break
    fi
done
The for
loop also avoids the "too many arguments"
issue when a directory contains a very large number of files, because the expanded glob is never passed to an external command.
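If you prefer a one-liner that stops as soon as any file matches, one possible sketch uses GNU find's -quit action (this assumes your find supports -quit; like the loop above, it starts one grep per file):
# grep -l prints the file name and exits 0 on a match, and -quit then makes
# find exit immediately instead of examining the remaining files
find . -maxdepth 1 -type f -exec grep -l -m 1 'str1' {} \; -quit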