I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).
When I try to delete all PDFs together using the following command:
rm -f *.pdf
I get the following error:
/bin/rm: cannot execute [Argument list too long]
What is the solution to this error?
Does this error occur for the mv and cp commands as well? If yes, how can it be solved for these commands?
If there are a large number of files in a single directory, the traditional rm command cannot delete them all and fails with the error message Argument list too long. To resolve this issue and delete all the files, use the xargs command-line utility together with the find command.
To remove a directory and all its contents, including any subdirectories and files, use the rm command with the recursive option, -r . Directories that are removed with the rmdir command cannot be recovered, nor can directories and their contents removed with the rm -r command.
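For illustration, a minimal sketch (the directory name here is hypothetical):

rm -r old_reports/   # deletes old_reports/ and everything inside it, permanently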
The xargs command is used in a UNIX shell to convert input read from standard input into arguments to a command. In other words, through xargs the output of one command becomes the arguments of another command.
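As a minimal illustration of that behavior (the file names here are hypothetical):

printf '%s\n' one.pdf two.pdf | xargs rm -f
# equivalent to running: rm -f one.pdf two.pdf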
The reason this occurs is that bash expands the asterisk to every matching file, producing a very long command line.
Try this:
find . -name "*.pdf" -print0 | xargs -0 rm
Warning: this is a recursive search and will find (and delete) files in subdirectories as well. Tack -f onto the rm command only if you are sure you don't want confirmation.
You can do the following to make the command non-recursive:
find . -maxdepth 1 -name "*.pdf" -print0 | xargs -0 rm
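As for the mv and cp part of the question: they are subject to the same limit, and the same find pattern works around it. A sketch, assuming GNU mv with the -t option and a hypothetical backup/ destination directory:

find . -maxdepth 1 -name '*.pdf' -exec mv -t backup/ {} +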
Another option is to use find's -delete action:
find . -name "*.pdf" -delete
It's a kernel limitation on the size of the command-line arguments. Use a for loop instead.
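A minimal sketch of that approach (the -- guards against file names that start with a dash):

for f in *.pdf; do rm -- "$f"; done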
This is a system issue, related to execve and the ARG_MAX constant. There is plenty of documentation about that (see man execve, Debian's wiki, ARG_MAX details).
Basically, the expansion produces a command (with its parameters) that exceeds the ARG_MAX limit.
On kernel 2.6.23, the limit was set at 128 kB. This constant has since been increased, and you can get its value by executing:
getconf ARG_MAX
# 2097152 # on 3.5.0-40-generic
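As a rough sanity check, you can measure how many bytes a glob would expand to and compare it with that limit (printf is a bash builtin here, so it is not itself subject to ARG_MAX):

printf '%s ' *.pdf | wc -c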
for loop
Use a for loop, as recommended in BashFAQ/095; there is no limit except RAM/memory space:
Dry run to ascertain it will delete what you expect:
for f in *.pdf; do echo rm "$f"; done
And execute it:
for f in *.pdf; do rm "$f"; done
Also, this is a portable approach, as globs have strong and consistent behavior among shells (they are part of the POSIX spec).
Note: as noted in several comments, this is indeed slower but more maintainable, as it can adapt to more complex scenarios, e.g. where one wants to do more than just one action.
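For instance, a sketch of such a scenario (the archive/ directory is hypothetical):

for f in *.pdf; do mv -- "$f" archive/ && echo "archived: $f"; done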
find
If you insist, you can use find, but really don't use xargs, as it "is dangerous (broken, exploitable, etc.) when reading non-NUL-delimited input":
find . -maxdepth 1 -name '*.pdf' -delete
Using -maxdepth 1 ... -delete instead of -exec rm {} + allows find to simply execute the required system calls itself without spawning an external process, and is hence faster (thanks to @chepner's comment).
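For comparison, the external-process variant looks like this ({} + batches as many file names into each rm invocation as the argument-size limit allows):

find . -maxdepth 1 -name '*.pdf' -exec rm {} +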
find has a -delete action:
find . -maxdepth 1 -name '*.pdf' -delete