I have to run Python in a resource-constrained environment with only a few GB of virtual memory. Worse yet, the application design requires forking children from my main process, each of which receives a copy-on-write view of this same amount of virtual memory on fork. The result is that after forking only one or two children, the process group hits the ceiling and everything shuts down. Finally, I am not able to remove numpy as a dependency; it is a strict requirement.
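To make the failure mode concrete, here is a minimal sketch (my own illustration, Linux-only) that forks once after importing numpy and prints each process's VmSize; each copy counts separately against the limit:

import os

def vmsize():
    # Read this process's VmSize line from /proc/self/status (Linux-specific).
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmSize:'):
                return line.strip()

import numpy  # the import alone inflates VmSize, as shown below

pid = os.fork()
if pid == 0:
    # Child inherits the full address space copy-on-write.
    print('child: ', vmsize())
    os._exit(0)
os.waitpid(pid, 0)
print('parent:', vmsize())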
Any advice on how I can bring this initial memory allocation down?
Details:
Red Hat Enterprise Linux Server release 6.9 (Santiago)
Python 3.6.2
numpy>=1.13.3
Bare Interpreter:
import os
os.system('cat "/proc/{}/status"'.format(os.getpid()))
# ... VmRSS: 7300 kB
# ... VmData: 4348 kB
# ... VmSize: 129160 kB
import numpy
os.system('cat "/proc/{}/status"'.format(os.getpid()))
# ... VmRSS: 21020 kB
# ... VmData: 1003220 kB
# ... VmSize: 1247088 kB
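To see where that address space goes, you can sum the regions in /proc/self/maps. A sketch (Linux-specific; the assumption, supported by the linked issue, is that the largest anonymous regions come from per-thread buffers in NumPy's BLAS backend):

import numpy  # trigger NumPy's allocations first

def mapping_sizes():
    # Size in bytes of every region listed in /proc/self/maps.
    sizes = []
    with open('/proc/self/maps') as f:
        for line in f:
            start, end = line.split()[0].split('-')
            sizes.append(int(end, 16) - int(start, 16))
    return sizes

sizes = sorted(mapping_sizes(), reverse=True)
print('total mapped: {:.0f} MB'.format(sum(sizes) / 2**20))
print('five largest (MB):', [round(s / 2**20) for s in sizes[:5]])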
Virtual memory is a process-specific address space, essentially numbers from 0 to 2^64 - 1, where the process can read or write bytes. In a C program you might use APIs like malloc() or mmap() to do so; in Python you just create objects, and the Python interpreter will call malloc() or mmap() when necessary.
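The distinction matters here: address space can be reserved without being backed by physical pages, which is why VmSize can balloon while VmRSS stays small. A quick sketch using Python's mmap module (anonymous mapping, Linux):

import mmap

def status_line(field):
    # Pull one field out of /proc/self/status (Linux-specific).
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith(field + ':'):
                return line.strip()

print(status_line('VmSize'), '|', status_line('VmRSS'))
m = mmap.mmap(-1, 512 * 2**20)  # reserve 512 MB of address space
print(status_line('VmSize'), '|', status_line('VmRSS'))  # VmSize jumps, VmRSS barely moves
m[0] = 0  # only touching pages turns reserved address space into resident memory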
Thank you, skullgoblet1089, for raising questions on SO and at https://github.com/numpy/numpy/issues/10455, and for answering them. Citing your 2018-01-24 post:

Reducing threads with export OMP_NUM_THREADS=4 will bring down VM allocation.
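For a programmatic variant, the variable has to be in the environment before NumPy (and its BLAS backend) is first imported, or the thread pool is already sized; a sketch, assuming an OpenMP-based BLAS build like the one discussed in the linked issue:

import os
os.environ['OMP_NUM_THREADS'] = '4'  # must be set before the first numpy import

import numpy
os.system('grep Vm "/proc/{}/status"'.format(os.getpid()))  # compare against the figures above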