I sometimes write Python programs for which it is very difficult to determine, before execution, how much memory they will use. As a result, I occasionally invoke a Python program that tries to allocate massive amounts of RAM, causing the kernel to swap heavily and degrade the performance of other running processes.
Because of this, I want to restrict how large the Python heap can grow. When the limit is reached, the program can simply crash. What's the best way to do this?
If it matters, much of the code is written in Cython, so the limit should account for memory allocated there. I'm not married to a pure Python solution (it doesn't need to be portable), so anything that works on Linux is fine.
Check out resource.setrlimit(). It only works on Unix systems, but it seems like what you're looking for: you can set a maximum heap size for your process and its children with the resource.RLIMIT_DATA parameter.
EDIT: Adding an example:
import resource

rsrc = resource.RLIMIT_DATA
soft, hard = resource.getrlimit(rsrc)
print('Soft limit starts as:', soft)

resource.setrlimit(rsrc, (1024, hard))  # limit the data segment to one kilobyte

soft, hard = resource.getrlimit(rsrc)
print('Soft limit changed to:', soft)
I'm not sure exactly what your use case is, but it's possible you need to limit the size of the stack instead, with resource.RLIMIT_STACK. Exceeding that limit sends a SIGSEGV signal to your process, and handling it requires an alternate signal stack, as described in the setrlimit Linux man page. I'm not sure whether sigaltstack is exposed in Python, though, so recovering from crossing that boundary could prove difficult.
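One caveat worth knowing: on Linux, RLIMIT_DATA has historically covered only brk()-based allocations, and allocators often use mmap() for large blocks, so resource.RLIMIT_AS, which caps the whole virtual address space, may be a more reliable choice. It also covers memory allocated from Cython via malloc. A minimal sketch, assuming the 2 GiB cap is just an illustrative figure:

import resource

# Cap the total virtual address space at 2 GiB (illustrative value).
# Allocations beyond the cap make malloc/mmap fail, which surfaces in
# Python as a MemoryError instead of triggering heavy swapping.
limit_bytes = 2 * 1024 ** 3
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))

try:
    big = bytearray(4 * 1024 ** 3)  # attempt a 4 GiB allocation
except MemoryError:
    print('Allocation refused by RLIMIT_AS; exiting cleanly')

With this in place the program fails fast with a MemoryError (or an allocation failure in Cython code) rather than pushing the machine into swap.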
Have a look at ulimit. It lets you set resource quotas from the shell, which then apply to the shell and any processes it launches. You may need appropriate kernel settings as well.
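If you'd rather not set the limit from the shell, you can achieve the same effect from a small Python launcher that constrains only the child process. A rough sketch, where the 2 GiB value and the script name worker.py are placeholders for illustration:

import resource
import subprocess

LIMIT_BYTES = 2 * 1024 ** 3  # illustrative 2 GiB cap

def apply_limit():
    # Runs in the child after fork() and before exec(), so the
    # limit affects only the child, not the launching process.
    resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

# 'worker.py' stands in for the memory-hungry program.
subprocess.run(['python', 'worker.py'], preexec_fn=apply_limit)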