I've been trying to use tornado-redis (which is basically a fork of brükva, slightly modified to work with the tornado.gen interface instead of adisp) in order to deliver events using Redis pub/sub.
So I wrote a little script, inspired by this example, to test things out.
import os
from tornado import ioloop, gen
import tornadoredis

print os.getpid()

def on_message(msg):
    print msg

@gen.engine
def listen():
    c = tornadoredis.Client()
    c.connect()
    yield gen.Task(c.subscribe, 'channel')
    c.listen(on_message)

listen()
ioloop.IOLoop.instance().start()
Unfortunately, as I PUBLISHed messages through redis-cli, memory usage kept rising.
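For reference, here's roughly how the test messages were produced; this is a minimal sketch using the synchronous redis-py client (an assumption on my part; during testing I simply ran PUBLISH from redis-cli):

import redis

# Publish a few test messages to the 'channel' the script above subscribes to.
# Assumes the redis-py package is installed and Redis is running on localhost.
r = redis.StrictRedis(host='localhost', port=6379)
for i in range(10):
    r.publish('channel', 'test message %d' % i)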
To profile memory usage I first tried guppy-pe, but it wouldn't work under Python 2.7 (yes, I even tried trunk), so I fell back to pympler.
import os
from pympler import tracker
from tornado import ioloop, gen
import tornadoredis

print os.getpid()

class MessageHandler(object):
    def __init__(self):
        self.memory_tracker = tracker.SummaryTracker()

    def on_message(self, msg):
        # Print a summary of objects allocated (and not freed) since the last call.
        self.memory_tracker.print_diff()

@gen.engine
def listen():
    c = tornadoredis.Client()
    c.connect()
    yield gen.Task(c.subscribe, 'channel')
    c.listen(MessageHandler().on_message)

listen()
ioloop.IOLoop.instance().start()
Now each time I PUBLISHed I could see that some objects were never released:
types | # objects | total size
===================================================== | =========== | ============
dict | 32 | 14.75 KB
tuple | 41 | 3.66 KB
set | 8 | 1.81 KB
instancemethod | 16 | 1.25 KB
cell | 22 | 1.20 KB
function (handle_exception) | 8 | 960 B
function (inner) | 7 | 840 B
generator | 8 | 640 B
<class 'tornado.gen.Task | 8 | 512 B
<class 'tornado.gen.Runner | 8 | 512 B
<class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
list | 3 | 504 B
str | 7 | 353 B
int | 7 | 168 B
builtin_function_or_method | 2 | 144 B
types | # objects | total size
===================================================== | =========== | ============
dict | 32 | 14.75 KB
tuple | 42 | 4.23 KB
set | 8 | 1.81 KB
cell | 24 | 1.31 KB
instancemethod | 16 | 1.25 KB
function (handle_exception) | 8 | 960 B
function (inner) | 8 | 960 B
generator | 8 | 640 B
<class 'tornado.gen.Task | 8 | 512 B
<class 'tornado.gen.Runner | 8 | 512 B
<class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
object | 8 | 128 B
str | 2 | 116 B
int | 1 | 24 B
types | # objects | total size
===================================================== | =========== | ============
dict | 32 | 14.75 KB
tuple | 42 | 4.73 KB
set | 8 | 1.81 KB
cell | 24 | 1.31 KB
instancemethod | 16 | 1.25 KB
function (handle_exception) | 8 | 960 B
function (inner) | 8 | 960 B
generator | 8 | 640 B
<class 'tornado.gen.Task | 8 | 512 B
<class 'tornado.gen.Runner | 8 | 512 B
<class 'tornado.stack_context.ExceptionStackContext | 8 | 512 B
list | 0 | 240 B
object | 8 | 128 B
int | -1 | -24 B
str | 0 | -34 B
Now that I know there's really a memory leak, how do I track where those objects are created? I guess I should start here?
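One thing I'm considering, as a rough sketch using only the standard library's gc module (no extra packages assumed), is grabbing live instances of one of the leaking types and printing what still refers to them, since that should point at whatever is keeping them alive:

import gc

# Collect live instances of one of the leaking types by class name,
# then print the objects that still hold references to the first one.
leaked = [o for o in gc.get_objects()
          if type(o).__name__ == 'ExceptionStackContext']
if leaked:
    for ref in gc.get_referrers(leaked[0]):
        print type(ref), repr(ref)[:120]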
Upgrading Tornado to version 2.3 should fix this issue.
I was having the same issue, where ExceptionStackContext objects were leaking very quickly. It was related to this bug report: https://github.com/facebook/tornado/issues/507 and was fixed in this commit: https://github.com/facebook/tornado/commit/57a3f83fc6b6fa4d9c207dc078a337260863ff99. Upgrading to 2.3 took care of the issue for me.
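If you're not sure which Tornado your process is actually picking up, a trivial check (tornado.version is just the version string) before and after upgrading:

import tornado

# Print the Tornado version the interpreter imports; anything below 2.3
# is affected by the ExceptionStackContext leak described above.
print tornado.version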