I have to pass 256 KB of text as an argument to the "aws sqs" command but am running into a limit on the command line at around 140 KB. It has been discussed in many places that this was solved in the Linux kernel as of 2.6.23.
But I cannot get it to work. I am using kernel 3.14.48-33.39.amzn1.x86_64.
Here's a simple example to test:
#!/bin/bash
SIZE=1000
while [ $SIZE -lt 300000 ]
do
    echo "$SIZE"
    # Build a string of SIZE 'a' characters.
    VAR="`head -c $SIZE < /dev/zero | tr '\0' 'a'`"
    ./foo "$VAR"
    # Grow SIZE by roughly 5% each iteration.
    let SIZE="( $SIZE * 20 ) / 19"
done
And the foo script is just:
#!/bin/bash
echo -n "$1" | wc -c
And the output for me is:
117037
123196
123196
129680
129680
136505
./testCL: line 11: ./foo: Argument list too long
143689
./testCL: line 11: ./foo: Argument list too long
151251
./testCL: line 11: ./foo: Argument list too long
159211
So, the question is: how do I modify the testCL script so it can pass 256 KB of data? Btw, I have tried adding ulimit -s 65536 to the script and it didn't help.
And if this is plain impossible I can deal with that, but can you shed light on this quote from my link above:
"While Linux is not Plan 9, in 2.6.23 Linux is adding variable argument length. Theoretically you shouldn't hit frequently "argument list too long" errors again, but this patch also limits the maximum argument length to 25% of the maximum stack limit (ulimit -s)."
edit:
I was finally able to pass <= 256 KB as a single command line argument (see edit (4) at the bottom). However, please read carefully how I did it and decide for yourself if this is a way you want to go. At least you should be able to understand from what I found out why you are otherwise 'stuck'.
With the coupling of ARG_MAX to ulimit -s / 4 came the introduction of MAX_ARG_STRLEN as the maximum length of a single argument:
/*
 * linux/fs/exec.c
 *
 * Copyright (C) 1991, 1992 Linus Torvalds
 */
...
#ifdef CONFIG_MMU
/*
 * The nascent bprm->mm is not visible until exec_mmap() but it can
 * use a lot of memory, account these pages in current->mm temporary
 * for oom_badness()->get_mm_rss(). Once exec succeeds or fails, we
 * change the counter back via acct_arg_size(0).
 */
...
static bool valid_arg_len(struct linux_binprm *bprm, long len)
{
        return len <= MAX_ARG_STRLEN;
}
...
#else
...
static bool valid_arg_len(struct linux_binprm *bprm, long len)
{
        return len <= bprm->p;
}
#endif /* CONFIG_MMU */
...
static int copy_strings(int argc, struct user_arg_ptr argv,
                        struct linux_binprm *bprm)
{
        ...
                str = get_user_arg_ptr(argv, argc);
        ...
                len = strnlen_user(str, MAX_ARG_STRLEN);
                if (!len)
                        goto out;
                ret = -E2BIG;
                if (!valid_arg_len(bprm, len))
                        goto out;
        ...
}
...
MAX_ARG_STRLEN is defined as 32 times the page size in linux/include/uapi/linux/binfmts.h:
...
/*
* These are the maximum length and maximum number of strings passed to the
* execve() system call. MAX_ARG_STRLEN is essentially random but serves to
* prevent the kernel from being unduly impacted by misaddressed pointers.
* MAX_ARG_STRINGS is chosen to fit in a signed 32-bit integer.
*/
#define MAX_ARG_STRLEN (PAGE_SIZE * 32)
#define MAX_ARG_STRINGS 0x7FFFFFFF
...
The default page size is 4 KB, so you cannot pass arguments longer than 128 KB.
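As a quick sanity check, you can compute the resulting per-argument cap on your own system from the shell (a sketch assuming the stock PAGE_SIZE * 32 definition shown above):

page_size=$(getconf PAGESIZE)
echo "MAX_ARG_STRLEN = $(( page_size * 32 )) bytes"   # 131072 (128 KB) with 4 KB pages

This matches the failures starting at 136505 bytes in your output.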
I can't try it now, but switching to huge page mode (page size 4 MB), if that is possible on your system, might solve this problem.
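On many systems you can check what huge page support is already there before recompiling anything (a sketch; the sysfs path exists only on kernels built with CONFIG_TRANSPARENT_HUGEPAGE):

cat /sys/kernel/mm/transparent_hugepage/enabled   # e.g. "always [madvise] never"
grep Hugepagesize /proc/meminfo                   # typically 2048 kB on x86_64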
For more detailed information and references see this answer to a similar question on Unix & Linux SE.
edits:
(1)
According to this answer one can change the page size of x86_64 Linux to 1 MB by enabling CONFIG_TRANSPARENT_HUGEPAGE and setting CONFIG_TRANSPARENT_HUGEPAGE_MADVISE to n in the kernel config.
(2)
After recompiling my kernel with the above configuration changes, getconf PAGESIZE still returns 4096. According to this answer CONFIG_HUGETLB_PAGE is also needed, which I could pull in via CONFIG_HUGETLBFS. I am recompiling now and will test again.
(3)
I recompiled my kernel with CONFIG_HUGETLBFS enabled and now /proc/meminfo contains the HugePages_* entries mentioned in the corresponding section of the kernel documentation. However, the page size according to getconf PAGESIZE is still unchanged. So while I should now be able to request huge pages via mmap calls, the kernel's default page size, which determines MAX_ARG_STRLEN, is still fixed at 4 KB.
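In shell terms, what I observed after this rebuild looks roughly like this (a sketch; the zero counts are the defaults when no huge pages have been reserved):

$ grep HugePages_ /proc/meminfo
HugePages_Total:       0
HugePages_Free:        0
$ getconf PAGESIZE
4096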
(4)
I modified linux/include/uapi/linux/binfmts.h to #define MAX_ARG_STRLEN (PAGE_SIZE * 64), recompiled my kernel and now your code produces:
...
117037
123196
123196
129680
129680
136505
143689
151251
159211
...
227982
227982
239981
239981
252611
252611
265906
./testCL: line 11: ./foo: Argument list too long
279901
./testCL: line 11: ./foo: Argument list too long
294632
./testCL: line 11: ./foo: Argument list too long
So the limit has now moved from 128 KB to 256 KB as expected. I don't know about potential side effects, though. As far as I can tell, my system seems to run just fine.
Just put the arguments into some file, and modify your program to accept "arguments" from a file. A common convention (notably used by GCC and several other GNU programs) is that an argument like @/tmp/arglist.txt asks your program to read arguments from the file /tmp/arglist.txt, often one line per argument.
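For illustration, here is a minimal @file-aware variant of the foo script from the question (a sketch; the '@' handling implements the convention described above, it is not something bash provides by itself):

#!/bin/bash
# foo: if the argument starts with '@', read the real payload from the
# named file instead of the command line. Shell variables are not
# subject to the execve() per-argument limit.
if [[ $1 == @* ]]; then
    arg=$(cat "${1#@}")
else
    arg=$1
fi
echo -n "$arg" | wc -c

You would then call ./foo @/tmp/payload.txt. Incidentally, the AWS CLI from the question has an analogous convention of its own: many parameter values can be read from a file with the file:// prefix, e.g. aws sqs send-message --queue-url "$URL" --message-body file://payload.txt, so the payload never goes through execve() at all.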
You might perhaps pass some data through long environment variables, but they are also limited (and what the kernel actually limits is the size of main's initial stack, which contains both the program arguments and the environment).
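You can see that environment strings hit the same per-string cap (a sketch; the kernel counts each NAME=value pair as one string against MAX_ARG_STRLEN):

BIG=$(head -c 200000 /dev/zero | tr '\0' 'a')
BIG="$BIG" /bin/true   # "Argument list too long" on a stock 4 KB-page kernel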
Alternatively, modify your program to be configurable through some configuration file which would contain the information you want to pass through arguments.
(If you can recompile your kernel, you might try to increase ARG_MAX, which is #define-d in linux-4.*/include/uapi/linux/limits.h, to a bigger power of two that is still much smaller than your available RAM, e.g. to 2097152.)
Otherwise, there is no way to circumvent that limitation (see the execve(2) man page and its "Limits on size of arguments and environment" section) once you have raised your stack limit (using setrlimit(2) with RLIMIT_STACK, generally via the ulimit builtin in the parent shell). You need to deal with it in some other way.