I wrote the following code:
from hurry.filesize import size
from pysize import get_size
import os
import psutil
def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    objects = make_a_call()
    print "total size of objects is " + size(get_size(objects))
    print "process consumes " + size(process.memory_info().rss)
    print "exit method"
def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)
get_size() returns the memory consumption of the objects using this code.
I get the following output:
process consumes 21M
start method
total size of objects is 20M
process consumes 29M
exit method
process consumes 29M
When you create a list, the list object by itself takes roughly 64 bytes of memory, and each item adds about 8 bytes to the size of the list, because the list stores references (pointers) to the item objects rather than the objects themselves.
A list of a million small integers illustrates the overhead: each of those numbers easily fits in a 64-bit integer, so one would hope Python could store the million integers in no more than ~8MB: a million 8-byte values. In fact, Python uses more like 35MB of RAM to store them. Why? Because Python integers are full objects, and objects carry a lot of memory overhead.
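A quick check with sys.getsizeof makes that per-item and per-object overhead visible; the numbers below are typical for 64-bit CPython and vary between versions, so treat them as illustrative:
import sys

print sys.getsizeof([])            # the empty list object itself, around 64-72 bytes
print sys.getsizeof([1, 2, 3])     # grows by ~8 bytes per item - each slot is just a pointer
print sys.getsizeof(1)             # a single integer object is ~24 bytes, not 8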
The function psutil.virtual_memory() returns a named tuple describing system memory usage. The third field in the tuple, percent, represents the percentage of memory (RAM) in use. It is calculated as (total - available) / total * 100.
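A short sketch of reading it (the field names are as documented by psutil):
import psutil

mem = psutil.virtual_memory()
print mem.percent                                        # percentage of RAM in use
print (mem.total - mem.available) * 100.0 / mem.total    # the same value computed by hand (psutil rounds percent to one decimal)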
Use the sys.getsizeof() function to get the memory usage of a single object, e.g. sys.getsizeof(['a', 'b', 'c']). It takes an object argument and returns the size of that object in bytes; note that it only counts the object itself, not the objects it references.
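That shallow behaviour is why the code above uses pysize's get_size() for a recursive total. A small comparison, with illustrative sizes (the data variable here is just an example):
import sys
from pysize import get_size

data = ['a' * 1000, 'b' * 1000]
print sys.getsizeof(data)   # only the list object itself, well under a hundred bytes
print get_size(data)        # the list plus the two 1000-character strings it references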
Here's a fully working (Python 2.7) example that has the same problem (I've slightly updated the original code for simplicity's sake):
from hurry.filesize import size
from pysize import get_size
import os
import psutil
def make_a_call():
    return range(1000000)
def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes ", size(process.memory_info().rss)
    objects = make_a_call()
    # FIXME
    print "total size of objects is ", size(get_size(objects))
    print "process consumes ", size(process.memory_info().rss)
    print "exit method"
def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)
main()
Here's the output:
process consumes 7M
start method
process consumes 7M
total size of objects is 30M
process consumes 124M
exit method
process consumes 124M
The difference is ~100MB.
And here's the fixed version of the code:
from hurry.filesize import size
from pysize import get_size
import os
import psutil
def make_a_call():
    return range(1000000)
def load_objects():
    process = psutil.Process(os.getpid())
    print "start method"
    process = psutil.Process(os.getpid())
    print "process consumes ", size(process.memory_info().rss)
    objects = make_a_call()
    print "process consumes ", size(process.memory_info().rss)
    print "total size of objects is ", size(get_size(objects))
    print "exit method"
def main():
    process = psutil.Process(os.getpid())
    print "process consumes " + size(process.memory_info().rss)
    load_objects()
    print "process consumes " + size(process.memory_info().rss)
main()
And here is the updated output:
process consumes 7M
start method
process consumes 7M
process consumes 38M
total size of objects is 30M
exit method
process consumes 124M
Did you spot the difference? You're calculating the object sizes before measuring the final process size, and that measurement itself leads to additional memory consumption. Let's check why that might be happening - here's the source of https://github.com/bosswissam/pysize/blob/master/pysize.py:
import sys
import inspect
def get_size(obj, seen=None):
    """Recursively finds size of objects in bytes"""
    size = sys.getsizeof(obj)
    if seen is None:
        seen = set()
    obj_id = id(obj)
    if obj_id in seen:
        return 0
    # Important: mark as seen *before* entering recursion to gracefully handle
    # self-referential objects
    seen.add(obj_id)
    if hasattr(obj, '__dict__'):
        for cls in obj.__class__.__mro__:
            if '__dict__' in cls.__dict__:
                d = cls.__dict__['__dict__']
                if inspect.isgetsetdescriptor(d) or inspect.ismemberdescriptor(d):
                    size += get_size(obj.__dict__, seen)
                break
    if isinstance(obj, dict):
        size += sum((get_size(v, seen) for v in obj.values()))
        size += sum((get_size(k, seen) for k in obj.keys()))
    elif hasattr(obj, '__iter__') and not isinstance(obj, (str, bytes, bytearray)):
        size += sum((get_size(i, seen) for i in obj))
    return size
Lots of things are happening here! The most notable one is that it records the id of every object it has seen in a set in order to resolve circular references. If you remove that bookkeeping, it won't eat that much memory in either case.
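As a rough sketch of that effect, using the same psutil and hurry.filesize helpers as above: building a set of a million ids, which is roughly what get_size() does internally while walking the list, allocates tens of megabytes that CPython may not hand back to the operating system even after the set is discarded.
import os
import psutil
from hurry.filesize import size

process = psutil.Process(os.getpid())
print "before:", size(process.memory_info().rss)

objects = range(1000000)                      # the same million-integer list as above
seen = set(id(obj) for obj in objects)        # roughly what get_size() builds internally
print "with the seen set:", size(process.memory_info().rss)

del seen
print "after deleting it:", size(process.memory_info().rss)   # often barely drops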
Here's a good article on the subject, quoting:
If you create a large object and delete it again, Python has probably released the memory, but the memory allocators involved don’t necessarily return the memory to the operating system, so it may look as if the Python process uses a lot more virtual memory than it actually uses.
Objects are never explicitly destroyed; however, when they become unreachable they may be garbage-collected. An implementation is allowed to postpone garbage collection or omit it altogether — it is a matter of implementation quality how garbage collection is implemented, as long as no objects are collected that are still reachable.
CPython implementation detail: CPython currently uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage, which collects most objects as soon as they become unreachable, but is not guaranteed to collect garbage containing circular references. See the documentation of the gc module for information on controlling the collection of cyclic garbage. Other implementations act differently and CPython may change. Do not depend on immediate finalization of objects when they become unreachable (so you should always close files explicitly).
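For completeness, a minimal sketch of what the gc module offers for cyclic garbage; the Node class and its attribute are made up here purely for illustration:
import gc

class Node(object):
    pass

a = Node()
b = Node()
a.other = b
b.other = a          # a reference cycle that reference counting alone cannot free
del a, b

print "unreachable objects found:", gc.collect()   # explicitly run the cyclic collector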