
How do you handle the "Too many files" problem when working in Bash?

Tags: bash, shell, unix

I often have to work with directories containing hundreds of thousands of files, doing text matching, replacement, and so on. If I go the standard route of, say

grep foo *

I get an "Argument list too long" error, so I end up doing

for i in *; do grep foo "$i"; done

or

find ../path/ | xargs -I{} grep foo "{}"

But these are suboptimal (they create a new grep process for each file).

This looks more like a limitation on the size of the argument list programs can receive, because the * in the for loop works fine. But, in any case, what's the proper way to handle this?
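For reference, the limit being hit is the kernel's cap on the total size of the argument list passed to exec(), not a cap on the number of files as such; on systems that provide getconf, you can inspect the value directly:

getconf ARG_MAX   # maximum combined size, in bytes, of arguments plus environment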

PS: Don't tell me to use grep -r instead; I know about that. I'm thinking about tools that do not have a recursive option.

asked Oct 09 '08 by Vinko Vrsalovic

2 Answers

In newer versions of findutils, find can do the work of xargs itself (including the batching behavior, so that only as many grep processes as needed are used):

find ../path -exec grep foo '{}' +

The use of + rather than ; as the last argument triggers this behavior.
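If you want to see the batching in action, one quick sketch is to substitute echo for grep so each invocation becomes visible (the path is illustrative):

find ../path -exec echo '{}' ';'   # one echo process per file found
find ../path -exec echo '{}' +     # as few echo processes as possible, each given a batch of files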

answered by Charles Duffy

If there is a risk of filenames containing spaces, you should remember to use the -print0 flag to find together with the -0 flag to xargs:

find . -print0 | xargs -0 grep -H foo
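The same null-separated pipeline carries over to the "replacing" part of the question; a sketch assuming GNU sed with its -i (in-place) option:

find . -type f -print0 | xargs -0 sed -i 's/foo/bar/g'   # replace foo with bar in every regular file under the current directory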

answered by JesperE