As a general rule (i.e. in vanilla kernels), fork/clone failures with ENOMEM occur specifically because of either an honest-to-God out-of-memory condition (dup_mm, dup_task_struct, alloc_pid, mpol_dup, mm_init etc. croak), or because security_vm_enough_memory_mm failed you while enforcing the overcommit policy.
Start by checking the vmsize of the process that failed to fork, at the time of the fork attempt, and then compare it to the amount of free memory (physical and swap) as it relates to the overcommit policy (plug the numbers in).
In your particular case, note that Virtuozzo has additional checks in overcommit enforcement. Moreover, I'm not sure how much control you truly have, from within your container, over swap and overcommit configuration (in order to influence the outcome of the enforcement).
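For example, here is a rough, purely illustrative way to pull those numbers out of procfs from Python; the PID is a placeholder, and CommitLimit is only really meaningful when vm.overcommit_memory=2:
def kb_field(path, field):
    # Pull a "Field:   12345 kB"-style value (in kB) out of a procfs file.
    with open(path) as f:
        for line in f:
            if line.startswith(field + ':'):
                return int(line.split()[1])
    return None

pid = 1234  # hypothetical: PID of the process that fails to fork
vmsize = kb_field('/proc/%d/status' % pid, 'VmSize')
commit_limit = kb_field('/proc/meminfo', 'CommitLimit')   # meaningful under overcommit_memory=2
committed = kb_field('/proc/meminfo', 'Committed_AS')

print('VmSize of parent: %s kB' % vmsize)
print('CommitLimit:      %s kB' % commit_limit)
print('Committed_AS:     %s kB' % committed)
print('Headroom:         %s kB' % (commit_limit - committed))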
Now, in order to actually move forward I'd say you're left with two options:
NOTE that the coding effort may be all for naught if it turns out that it's not you, but some other guy collocated in a different instance on the same server as you, running amok.
Memory-wise, we already know that subprocess.Popen uses fork/clone under the hood, meaning that every time you call it you're requesting once more as much memory as Python is already eating up, i.e. in the hundreds of additional MB, all in order to then exec a puny 10kB executable such as free or ps. In the case of an unfavourable overcommit policy, you'll soon see ENOMEM.
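As a rough illustration (assuming a strict vm.overcommit_memory=2 policy and little swap headroom -- your mileage will vary), something like this may already fail:
import subprocess

# Hypothetical reproduction sketch: inflate the Python process, then fork.
# Under a strict policy (vm.overcommit_memory=2) with little swap, the fork
# behind Popen must be able to commit a copy of this entire address space
# just to exec a tiny binary, and may fail with errno 12.
ballast = bytearray(1024 * 1024 * 1024)  # ~1 GB of anonymous memory

try:
    subprocess.Popen(['free', '-m']).wait()
except OSError as e:
    print('Popen failed: %s' % e)   # OSError: [Errno 12] Cannot allocate memory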
Alternatives to fork that do not have this parent page tables etc. copy problem are vfork and posix_spawn. But if you do not feel like rewriting chunks of subprocess.Popen in terms of vfork/posix_spawn, consider using subprocess.Popen only once, at the beginning of your script (when Python's memory footprint is minimal), to spawn a shell script that then runs free/ps/sleep and whatever else in a loop parallel to your script; poll the script's output or read it synchronously, possibly from a separate thread if you have other stuff to take care of asynchronously -- do your data crunching in Python but leave the forking to the subordinate process (see the sketch below).
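Here's a minimal sketch of that arrangement; the one-second interval and the particular commands in the helper loop are just placeholders:
import subprocess

# Spawn the helper exactly once, early, while Python's footprint is still small.
# The shell loop keeps emitting stats forever; we never need to fork again.
helper = subprocess.Popen(
    ['sh', '-c', 'while true; do free -m; ps aux; sleep 1; done'],
    stdout=subprocess.PIPE)

while True:
    line = helper.stdout.readline()   # synchronous read; move to a thread if needed
    if not line:
        break                         # helper exited
    # ... do your data crunching on `line` in Python here ...
    print(line.rstrip())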
HOWEVER, in your particular case you can skip invoking ps and free altogether; that information is readily available to you in Python directly from procfs, whether you choose to access it yourself or via existing libraries and/or packages. If ps and free were the only utilities you were running, then you can do away with subprocess.Popen completely.
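For example, everything free prints comes straight out of /proc/meminfo, so something along these lines (field names as on a typical Linux kernel) gets you the numbers without a single fork:
def meminfo():
    # Parse /proc/meminfo into a dict of kB values -- no fork, no exec.
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, _, rest = line.partition(':')
            info[key] = int(rest.split()[0])   # first token is the value in kB
    return info

mem = meminfo()
print('total: %d MB, free: %d MB, swap free: %d MB' %
      (mem['MemTotal'] // 1024, mem['MemFree'] // 1024, mem['SwapFree'] // 1024))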
Finally, whatever you do as far as subprocess.Popen is concerned, if your script leaks memory you will still hit the wall eventually. Keep an eye on it, and check for memory leaks.
Looking at the output of free -m, it seems to me that you actually do not have any swap space available. I am not sure whether Linux will always make swap available automatically on demand, but I was having the same problem and none of the answers here really helped me. Adding some swap space, however, fixed the problem in my case, so since this might help other people facing the same problem, I am posting my answer on how to add 1GB of swap (on Ubuntu 12.04, but it should work similarly for other distributions).
You can first check whether any swap space is enabled:
$ sudo swapon -s
If it is empty, it means you don't have any swap enabled. To add 1GB of swap:
$ sudo dd if=/dev/zero of=/swapfile bs=1024 count=1024k
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
Add the following line to /etc/fstab to make the swap permanent.
$ sudo vim /etc/fstab
/swapfile none swap sw 0 0
Source and more information can be found here.
For an easy fix, you could
echo 1 > /proc/sys/vm/overcommit_memory
if you're sure that your system has enough memory. See the Linux overcommit heuristic.
Swap may not be the red herring previously suggested. How big is the Python process in question just before the ENOMEM?
Under kernel 2.6, /proc/sys/vm/swappiness controls how aggressively the kernel will turn to swap, and the overcommit* files control how much and how precisely the kernel may apportion memory with a wink and a nod. Like your Facebook relationship status, it's complicated.
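If you want to answer the "how big" question concretely, you could dump the interpreter's own VmSize together with those tunables right before the call that fails; a quick diagnostic sketch (the exact wiring into your script is up to you):
def read_first_line(path):
    with open(path) as f:
        return f.readline().strip()

def vm_snapshot():
    # This process's size plus the kernel knobs governing swap and overcommit.
    snap = {
        'swappiness': read_first_line('/proc/sys/vm/swappiness'),
        'overcommit_memory': read_first_line('/proc/sys/vm/overcommit_memory'),
        'overcommit_ratio': read_first_line('/proc/sys/vm/overcommit_ratio'),
    }
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmSize:'):
                snap['VmSize'] = line.split()[1] + ' kB'
    return snap

print(vm_snapshot())   # call this right before the Popen that dies with ENOMEM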
...but swap is actually available on demand (according to the web host)...
but not according to the output of your free(1) command, which shows no swap space recognized by your server instance. Now, your web host may certainly know much more than I about this topic, but virtual RHEL/CentOS systems I've used have reported swap available to the guest OS.
Adapting Red Hat KB Article 15252:
A Red Hat Enterprise Linux 5 system will run just fine with no swap space at all as long as the sum of anonymous memory and system V shared memory is less than about 3/4 the amount of RAM. .... Systems with 4GB of ram or less [are recommended to have] a minimum of 2GB of swap space.
Compare your /proc/sys/vm settings to a plain CentOS 5.3 installation. Add a swap file. Ratchet down swappiness and see if you live any longer.
I continue to suspect that your customer/user has some kernel module or driver loaded which is interfering with the clone() system call (perhaps some obscure security enhancement, something like LIDS but more obscure?) or is somehow filling up some of the kernel data structures that are necessary for fork()/clone() to operate (process table, page tables, file descriptor tables, etc).
Here's the relevant portion of the fork(2) man page:
ERRORS
EAGAIN  fork() cannot allocate sufficient memory to copy the parent's page tables and allocate a task structure for the child.
EAGAIN  It was not possible to create a new process because the caller's RLIMIT_NPROC resource limit was encountered. To exceed this limit, the process must have either the CAP_SYS_ADMIN or the CAP_SYS_RESOURCE capability.
ENOMEM  fork() failed to allocate the necessary kernel structures because memory is tight.
I suggest having the user try this after booting into a stock, generic kernel and with only a minimal set of modules and drivers loaded (minimum necessary to run your application/script). From there, assuming it works in that configuration, they can perform a binary search between that and the configuration which exhibits the issue. This is standard sysadmin troubleshooting 101.
The relevant line in your strace is:
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7f12708) = -1 ENOMEM (Cannot allocate memory)
... I know others have talked about swap and memory availability (and I would recommend that you set up at least a small swap partition, ironically even if it's on a RAM disk ... the code paths through the Linux kernel when it has even a tiny bit of swap available have been exercised far more extensively than those (exception handling) paths in which there is zero swap available).
However, I suspect that this is still a red herring.
The fact that free is reporting 0 (ZERO) memory in use by the cache and buffers is very disturbing. I suspect that the free output ... and possibly your application issue here, are caused by some proprietary kernel module which is interfering with the memory allocation in some way.
According to the man pages for fork()/clone(), the fork() system call should return EAGAIN if your call would cause a resource limit violation (RLIMIT_NPROC) ... however, it doesn't say whether EAGAIN is to be returned for other RLIMIT* violations. In any event, if your target/host has some sort of weird Vormetric or other security settings (or even if your process is running under some weird SELinux policy), then it might be causing this -ENOMEM failure.
It's pretty unlikely to be a normal run-of-the-mill Linux/UNIX issue. You've got something non-standard going on there.
Have you tried using:
(status,output) = commands.getstatusoutput("ps aux")
I thought this had fixed the exact same problem for me. But then my process ended up getting killed instead of failing to spawn, which is even worse.
After some testing I found that this only occurred on older versions of Python: it happens with 2.6.5 but not with 2.7.2.
My search had led me here: python-close_fds-issue, but unsetting close_fds had not solved the issue. It is still well worth a read.
I found that Python was leaking file descriptors just by keeping an eye on it:
watch "ls /proc/$PYTHONPID/fd | wc -l"
Like you, I do want to capture the command's output, and I do want to avoid OOM errors... but it looks like the only way is for people to use a less buggy version of Python. Not ideal...