I have browsed through many posts on Stack Overflow, as well as a few related communities, regarding the "argument list too long" topic, and I can't clearly figure out whether the length restriction applies to shell builtins or not.
Let's say I want to pass a very long string to a command through standard input:
string="a very long list of words ..."
Can I say:
# not using double quotes around $string is deliberate
printf '%s\n' $string | cmd ...
or
cmd <<< $string
Or even pipe it to xargs:
printf '%s\n' $string | xargs cmd ...
Can someone please clarify this?
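For context, here is a quick sketch (assuming a Linux-like system where /bin/echo is an external binary, while echo is a bash builtin) that reproduces the error with an external command but not with a builtin:

```shell
# Build a ~4 MiB string by repeated doubling (2^22 bytes).
long=x
for i in {1..22}; do long=$long$long; done

# External binary: the string must be passed through execve(), so it
# hits the OS limit and fails with "Argument list too long" (E2BIG).
/bin/echo "$long" >/dev/null 2>&1 || echo 'external echo: Argument list too long'

# Builtin: no execve() involved, so no OS limit applies.
echo "$long" >/dev/null && echo 'builtin echo: ok'
```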
As an aside: if your receiving application reads lines of input from its stdin, its stdin is a terminal device, and the application neither implements its own line editor (like bash does) nor changes the terminal input mode, then you won't be able to enter lines longer than 4096 bytes interactively (including the terminating newline character).
In bash, the OS-enforced limitation on command-line length which causes the "argument list too long" error is not applied to shell builtins. This error is triggered when the execve() syscall returns the error code E2BIG. There is no execve() call involved when invoking a builtin, so the error cannot take place.
Thus, both of your proposed operations are safe: cmd <<< "$string" writes $string to a temporary file, which does not require that it be passed as an argv element (or as an environment variable, which is stored in the same pool of reserved space); and printf '%s\n' "$string" takes place internal to the shell, unless the shell's configuration has been modified (as with enable -n printf) to use an external printf implementation.
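To make that concrete, here is a small sketch (using wc -c as a stand-in for the receiving command): since the string travels over stdin rather than argv, its size is not constrained by the execve() limit:

```shell
# Build a 4 MiB string (2^22 bytes) by repeated doubling.
big=x
for i in {1..22}; do big=$big$big; done

# Builtin printf: the string never appears on any external argv.
printf '%s' "$big" | wc -c     # prints 4194304

# Herestring: bash stages the data outside argv, appending one
# trailing newline, hence one extra byte.
wc -c <<< "$big"               # prints 4194305
```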
I don't seem to figure out whether the length restriction applies to shell builtins or not.
Probably not, but you should check the source code of your particular version of bash (since it is free software). However, there obviously is some (hopefully larger) limitation, in particular because some malloc done inside bash could fail, but then you'll get a different error message or behavior.
AFAIK, the "argument list too long" error is given by execve(2) failing with E2BIG, and builtin functions of bash don't fork then execve (as commands invoking external programs do).
In practice, E2BIG might appear with a few hundred thousand bytes (the exact limit depends upon the kernel and system), but I guess that builtins could handle several dozen megabytes (on today's desktops). But YMMV, since you could use ulimit to have your shell do some setrlimit(2). I wouldn't recommend handling gigabytes of data through shell builtins.
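On Linux specifically (a sketch; the exact numbers vary by system and kernel version), the total space for execve() arguments plus environment is roughly a quarter of the stack soft limit, never less than the POSIX ARG_MAX floor of 4096 bytes, and each single argument is additionally capped (128 KiB with 4 KiB pages). You can inspect the relevant numbers without triggering the error:

```shell
# Total budget for argv + environ passed to execve(), in bytes.
getconf ARG_MAX     # commonly 2097152 with the default 8 MiB stack limit

# Stack soft limit (in KiB); on Linux, ARG_MAX tracks a quarter of this.
ulimit -s
```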
BTW, xargs(1) can be helpful, and you could even raise the limit (for E2BIG) by recompiling your kernel (and also through other means, in recent kernels). A few years ago that was a strong motivation for me to recompile kernels.
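For illustration, a minimal xargs sketch: it packs the words read from stdin into argv batches that fit under a size limit (here an artificially small 4096 bytes, via the standard -s option) and invokes the command once per batch:

```shell
# 100000 numbers on stdin; xargs runs /bin/echo repeatedly, each
# invocation receiving as many numbers as fit in ~4096 bytes of argv.
seq 1 100000 | xargs -s 4096 /bin/echo | wc -l   # one output line per invocation

# No numbers are lost across the batches:
seq 1 100000 | xargs -s 4096 /bin/echo | wc -w   # prints 100000
```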