I have an AMQP server (RabbitMQ) that I would like to both publish to and read from in a Tornado web server. To do this, I figured I would use an asynchronous AMQP Python library; in particular Pika (a variant of it that supposedly supports Tornado).
I have written code that appears to successfully read from the queue, except that at the end of the request, I get an exception (the browser returns fine):
[E 101219 01:07:35 web:868] Uncaught exception GET / (127.0.0.1)
HTTPRequest(protocol='http', host='localhost:5000', method='GET', uri='/', version='HTTP/1.1', remote_ip='127.0.0.1', body='', headers={'Host': 'localhost:5000', 'Accept-Language': 'en-us,en;q=0.5', 'Accept-Encoding': 'gzip,deflate', 'Keep-Alive': '115', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101206 Ubuntu/10.10 (maverick) Firefox/3.6.13', 'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'Connection': 'keep-alive', 'Cache-Control': 'max-age=0', 'If-None-Match': '"58f554b64ed24495235171596351069588d0260e"'})
Traceback (most recent call last):
File "/home/dave/devel/lib/python2.6/site-packages/tornado/web.py", line 810, in _stack_context
yield
File "/home/dave/devel/lib/python2.6/site-packages/tornado/stack_context.py", line 77, in StackContext
yield
File "/usr/lib/python2.6/contextlib.py", line 113, in nested
yield vars
File "/home/dave/lib/python2.6/site-packages/tornado/stack_context.py", line 126, in wrapped
callback(*args, **kwargs)
File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 42, in _handle_events
self._handle_read()
File "/home/dave/devel/src/pika/pika/tornado_adapter.py", line 66, in _handle_read
self.on_data_available(chunk)
File "/home/dave/devel/src/pika/pika/connection.py", line 521, in on_data_available
self.channels[frame.channel_number].frame_handler(frame)
KeyError: 1
I'm not entirely sure I am using this library correctly, so I might be doing something blatantly wrong. The basic flow of my code is:
I have a few questions:
I will try to have some sample code up a little later, but the steps I described above lay out the consuming side of things fairly completely. I am having issues with the publishing side as well, but the consuming of queues is more pressing.
It would help to see some source code, but I use this same tornado-supporting pika module without issue in more than one production project.
You don't want to create a connection per request. Create a class that wraps all of your AMQP operations, and instantiate it as a singleton at the tornado Application level that can be used across requests (and across request handlers). I do this in a 'runapp()' function that does some stuff like this and then starts the main tornado ioloop.
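The application-level singleton wiring described here can be sketched without any AMQP code at all. The names below (`EventsWrapper`, `get_events`, `_events`) are illustrative stand-ins, not from the answer:

```python
# Minimal sketch of the pattern: the wrapper object is created once,
# before the ioloop starts, and every request handler reuses the same
# instance instead of opening a connection per request.

_events = None  # module-level slot, like 'events = None' in common.py


class EventsWrapper(object):
    """Stand-in for a class wrapping all AMQP operations."""

    def __init__(self, config):
        # In the real class this is where the single, long-lived
        # broker connection would be opened.
        self.config = config


def get_events(config=None):
    """Create the wrapper on first call; return the same object after."""
    global _events
    if _events is None:
        _events = EventsWrapper(config)
    return _events
```

Because the first call happens in `runapp()` before the ioloop starts, every request sees the same open connection, which avoids the per-request channel churn that produces errors like the `KeyError: 1` above.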
Here's a class called 'Event'. It's a partial implementation (specifically, I don't define 'self.handle_event' here; that's up to you).
    import logging

    import pika
    from pika import tornado_adapter


    class Event(object):

        def __init__(self, config):
            # Hard-coded here for brevity; a real implementation would
            # pull these values from `config`.
            self.host = 'localhost'
            self.port = 5672  # int, not str: ConnectionParameters and
                              # the %i format below both expect a number
            self.vhost = '/'
            self.user = 'foo'
            self.exchange = 'myx'
            self.queue = 'myq'
            self.recv_routing_key = 'msgs4me'
            self.passwd = 'bar'
            self.connected = False
            self.connect()

        def connect(self):
            credentials = pika.PlainCredentials(self.user, self.passwd)
            parameters = pika.ConnectionParameters(host=self.host,
                                                   port=self.port,
                                                   virtual_host=self.vhost,
                                                   credentials=credentials)
            srs = pika.connection.SimpleReconnectionStrategy()
            logging.debug('Events: Connecting to AMQP Broker: %s:%i'
                          % (self.host, self.port))
            self.connection = tornado_adapter.TornadoConnection(
                parameters,
                wait_for_open=False,
                reconnection_strategy=srs,
                callback=self.on_connected)

        def on_connected(self):
            # Open the channel
            logging.debug("Events: Opening a channel")
            self.channel = self.connection.channel()

            # Declare our exchange
            logging.debug("Events: Declaring the %s exchange" % self.exchange)
            self.channel.exchange_declare(exchange=self.exchange,
                                          type="fanout",
                                          auto_delete=False,
                                          durable=True)

            # Declare our queue for this process
            logging.debug("Events: Declaring the %s queue" % self.queue)
            self.channel.queue_declare(queue=self.queue,
                                       auto_delete=False,
                                       exclusive=False,
                                       durable=True)

            # Bind to the exchange
            self.channel.queue_bind(exchange=self.exchange,
                                    queue=self.queue,
                                    routing_key=self.recv_routing_key)

            self.channel.basic_consume(consumer=self.handle_event,
                                       queue=self.queue,
                                       no_ack=True)

            # We should be connected if we made it this far
            self.connected = True
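The question also mentions trouble on the publishing side, and the class above only consumes. A hypothetical publish helper (not part of the answer's code) could simply reuse the already-open channel; with an asynchronous adapter, `basic_publish` just queues outgoing frames for the ioloop to flush, so it does not block:

```python
def publish_event(channel, exchange, routing_key, body):
    """Publish `body` on an already-open pika channel.

    `channel` is assumed to be the channel opened in Event.on_connected;
    this sketch takes it as a parameter so the helper stays decoupled
    from the connection setup.
    """
    channel.basic_publish(exchange=exchange,
                          routing_key=routing_key,
                          body=body)
```

In the Event class this would be a method calling `self.channel.basic_publish(...)`, guarded by `self.connected`, since publishing before `on_connected` has fired would fail.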
And then I put that in a file called 'events.py'. My RequestHandlers and any back end code all utilize a 'common.py' module that wraps code that's useful to both (my RequestHandlers don't call any amqp module methods directly -- same for db, cache, etc as well), so I define 'events=None' at the module level in common.py, and I instantiate the Event object kinda like this:
    import logging

    import tornado.httpserver
    import tornado.ioloop

    import myapp.common
    import myapp.events


    def runapp(config):
        if myapp.common.events is None:
            myapp.common.events = myapp.events.Event(config)
        logging.debug("MYAPP.COMMON.EVENTS: %s", myapp.common.events)
        # 'app' (the tornado.web.Application) and 'port' are set up
        # elsewhere in the real runapp().
        http_server = tornado.httpserver.HTTPServer(
            app,
            xheaders=config['HTTPServer']['xheaders'],
            no_keep_alive=config['HTTPServer']['no_keep_alive'])
        http_server.listen(port)
        main_loop = tornado.ioloop.IOLoop.instance()
        logging.debug("MAIN IOLOOP: %s", main_loop)
        main_loop.start()
Happy new year :-D