Is there any method I can override that will let me use print statements / pdb / etc. to keep track of every time an instance of my class is allocated? While unpickling some objects I seem to get some that never have either `__setstate__` or `__init__` called on them. I tried overriding `__new__` and printing the id of every object I make in `__new__`, but I am still encountering objects with ids that were never printed.

Edit: here is the code I use for altering (instrumenting) `__new__` of my class and all of its superclasses except for `object` itself:
```python
class Allocator:
    def __init__(self, my_class):
        self.my_class = my_class
        self.old_new = my_class.__new__

    def new(self, *args, **kargs):
        rval = self.old_new(*args, **kargs)
        # rval = super(self.my_class, cls).__new__(cls)
        print 'Made ' + str(self.my_class) + ' with id ' + str(id(rval))
        return rval

def replace_allocator(cls):
    if cls == object:
        return
    setattr(cls, '__new__', Allocator(cls).new)
    print cls.__base__
    try:
        for parent in cls.__base__:
            replace_allocator(parent)
    except:
        replace_allocator(cls.__base__)
```
I call `replace_allocator` on my class's parent class as soon as it is imported in the main script. My class has a custom `__new__` to begin with, which also prints out the id.
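For reference, here is a minimal sketch (in Python 3 syntax) of one way to do this kind of instrumentation by wrapping the existing `__new__` in a plain function and assigning it back to the class. The names `instrument_new`, `logging_new`, and `Widget` are made up for illustration, not taken from the question:

```python
def instrument_new(cls):
    """Wrap cls.__new__ so every allocation is reported (sketch only)."""
    old_new = cls.__new__

    def logging_new(klass, *args, **kwargs):
        # object.__new__ rejects extra arguments when only __init__ is
        # overridden, so call it with the class alone in that case
        if old_new is object.__new__:
            obj = old_new(klass)
        else:
            obj = old_new(klass, *args, **kwargs)
        print('Made %s with id %d' % (klass.__name__, id(obj)))
        return obj

    # In Python 3, a plain function assigned to __new__ is looked up and
    # called with the class as its first argument, matching the implicit
    # staticmethod behavior __new__ normally has
    cls.__new__ = logging_new
    return cls

class Widget:
    def __init__(self, x=0):
        self.x = x

instrument_new(Widget)
w = Widget(3)  # prints a "Made Widget with id ..." line
```

This avoids holding the class in a separate `Allocator` object, but as the answer below suggests, it still cannot catch paths that deliberately bypass the class's `__new__`.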
(This is more of a comment than an answer.)
Quoting Guido's Unifying types and classes in Python 2.2:

> There are situations where a new instance is created without calling `__init__` (for example when the instance is loaded from a pickle). There is no way to create a new instance without calling `__new__` (although in some cases you can get away with calling a base class's `__new__`).

If you are using new-style classes (descendants of `object`), `__new__()` should always be called. I don't think the obscure cases where "you can get away with calling a base class's `__new__`" will happen accidentally, though I don't know what those cases actually are.
And just to add an example:
```python
In [1]: class A(object):
   ...:     def __new__(cls):
   ...:         print "A"
   ...:         return object.__new__(cls)
   ...:

In [2]: A()
A
Out[2]: <__main__.A object at 0xa3a95cc>

In [4]: object.__new__(A)
Out[4]: <__main__.A object at 0xa3a974c>
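As for when pickle itself takes the `object.__new__(A)` path: this seems to depend on the pickle protocol. A sketch in Python 3 syntax (the `calls` list is my own bookkeeping, not part of the question): protocol 2 and later use the NEWOBJ opcode, which calls `cls.__new__(cls)`, while protocol 0 goes through `copyreg._reconstructor`, which calls `object.__new__(cls)` directly — plausibly the "base class's `__new__`" case from the quote, and one way to end up with instances whose instrumented `__new__` never ran:

```python
import pickle

calls = []  # records which hooks actually run

class A:
    def __new__(cls):
        calls.append('__new__')
        return object.__new__(cls)

    def __init__(self):
        calls.append('__init__')

a = A()
print(calls)  # ['__new__', '__init__']

calls.clear()
b = pickle.loads(pickle.dumps(a))  # default protocol (>= 2): NEWOBJ opcode
print(calls)  # ['__new__'] -- __new__ ran, __init__ did not

calls.clear()
c = pickle.loads(pickle.dumps(a, protocol=0))  # copyreg._reconstructor path
print(calls)  # [] -- object.__new__(A) was called, bypassing A.__new__
```

So if the objects in question were pickled with protocol 0 or 1, neither `__init__` nor a custom `__new__` would fire on load.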