I have a large, long-running server whose memory usage climbs steadily over weeks. As pointed out below, it's unlikely that leaks are my problem; however, I don't have much to go on, so I want to check whether there are any leaks anyway.
Getting at console output is tricky, so I'm not running with gc.set_debug(). This is not a big problem, though, as I have easily added an API that runs gc.collect(), then iterates through gc.garbage and sends the results back to me over HTTP.
My problem is that when I run it locally for a short time, gc.garbage is always empty, so I can't test the code that lists the leaks before I deploy it.
Is there a trivial recipe for creating an uncollectable bit of garbage so I can test my code that lists the garbage?
In Python 2 (and in Python 3 before 3.4), any cycle of finalizable objects (that is, objects with a __del__ method) is uncollectable, because the garbage collector does not know which order to run the finalizers in:
>>> class Finalizable:
...     def __del__(self): pass
...
>>> a = Finalizable()
>>> b = Finalizable()
>>> a.x = b
>>> b.x = a
>>> del a
>>> del b
>>> import gc
>>> gc.collect()
4
>>> gc.garbage
[<__main__.Finalizable instance at 0x1004e0b48>,
<__main__.Finalizable instance at 0x1004e73f8>]
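Note that since Python 3.4 (PEP 442), cycles of objects with __del__ methods are collected normally, so the recipe above leaves gc.garbage empty on modern interpreters. If you just need gc.garbage to be non-empty so you can test your listing code, gc.set_debug(gc.DEBUG_SAVEALL) tells the collector to append everything it would have freed to gc.garbage instead. A minimal sketch (the Node class is just an illustrative stand-in):

```python
import gc

class Node:
    """A plain object that can take part in a reference cycle."""
    pass

# With DEBUG_SAVEALL, every unreachable object the collector finds is
# appended to gc.garbage instead of being freed, so gc.garbage is
# guaranteed to be non-empty after a collection that frees anything.
gc.set_debug(gc.DEBUG_SAVEALL)

a = Node()
b = Node()
a.partner = b          # build a reference cycle: a -> b -> a
b.partner = a
del a, b

gc.collect()
saved = len(gc.garbage)
print(saved)           # > 0: the cycle's objects were saved, not freed

# Restore normal behaviour so later collections free objects again.
gc.set_debug(0)
gc.garbage.clear()
```

Remember to reset the debug flags afterwards, or every subsequent collection will keep objects alive in gc.garbage and your process really will leak.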
But as a general point, it seems unlikely to me that your problem is due to uncollectable garbage, unless you are in the habit of using finalizers. It's more likely due to the accumulation of live objects, or to fragmentation of memory (since Python uses a non-moving collector).
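If accumulating live objects is the real culprit, a type census over gc.get_objects() can reveal which types grow between two snapshots; you could expose it through the same HTTP endpoint. A sketch, using a hypothetical type_census helper:

```python
import gc
from collections import Counter

def type_census():
    """Count live, gc-tracked objects by type name (hypothetical helper)."""
    return Counter(type(o).__name__ for o in gc.get_objects())

before = type_census()
leaked = [[] for _ in range(1000)]   # simulate live objects accumulating
after = type_census()

# Counter subtraction keeps only positive differences, i.e. types
# whose population grew between the two snapshots.
growth = after - before
print(growth.most_common(5))
```

Diffing two censuses taken a few hours apart on the real server tends to be far more informative than a single absolute snapshot, since the interpreter's own baseline object counts are large and noisy.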