Run multiple Tornado processes

Tags: python, tornado

I've read various articles and tutorials on how to run N Tornado processes, where N = the number of cores. My code was working, running on all 16 cores, but I somehow managed to screw it up and I need fresh eyes on this.

import tornado.ioloop
import tornado.web
import tornado.httpserver

from core import settings
from core import coreService
import services

from tornado.options import define, options, parse_command_line

define("port", default=settings.SERVER_PORT, help="run on the given port", type=int)



app = tornado.web.Application([
    (r'/upload', coreService.Upload)
])

if __name__ == "__main__":
    tornado.options.parse_command_line()
    server = tornado.httpserver.HTTPServer(app, max_buffer_size=1024*1024*201)
    server.bind(options.port)
    # autodetect cpu cores and fork one process per core
    server.start(0)
    try:        
        print 'running on port %s' % options.port
        tornado.ioloop.IOLoop.instance().start()

    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()

This code throws the following error:

File "/opt/tornado/tornado/process.py", line 112, in fork_processes
    raise RuntimeError("Cannot run in multiple processes: IOLoop instance "
RuntimeError: Cannot run in multiple processes: IOLoop instance has already been initialized. You cannot call IOLoop.instance() before calling start_processes()

I just don't see it. Thank you.

EDIT:

As Ben said, one of my methods was giving me trouble. This is the code of that method; someone may benefit from it:

from tornado import gen
import motor

# Connecting at import time: open_sync() runs the IOLoop to connect,
# so the IOLoop singleton is already initialized in the parent process
# before server.start(0) tries to fork.
db = motor.MotorClient().open_sync().proba
class Upload(BaseHandler):
    @gen.engine
    def post(self):
        fs = yield motor.Op(motor.MotorGridFS(db).open)

        gridin = yield motor.Op(fs.new_file)
        yield motor.Op(gridin.write, 'First part\n')
        yield motor.Op(gridin.write, 'Second part')
        yield motor.Op(gridin.close)

        print gridin._id
        self.write(str(gridin._id))
        self.finish()

EDIT 2

I've found the final solution to my problem. As pointed out by Ben, the method above was giving me trouble. The correct way to use Motor with a Tornado application is documented in the Motor documentation. Here is an excerpt that is working for me:

if __name__ == "__main__":
    tornado.options.parse_command_line()        
    try:
        server = tornado.httpserver.HTTPServer(app, max_buffer_size=1024*1024*201)
    server.bind(options.port)
        server.start(0) # autodetect cpu cores and fork one process per core
        db = motor.MotorClient().open_sync().proba
        print 'running on port %s' % options.port
        # Delayed initialization of settings
        app.settings['db'] = db # from this point on db is available as self.settings['db']
        tornado.ioloop.IOLoop.instance().start()

    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
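
For completeness, a handler can then read the connection back out of the settings. A minimal sketch (the handler class, collection name and query are made up for illustration, not part of my application):

import motor
import tornado.web
from tornado import gen

class Status(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @gen.engine
    def get(self):
        db = self.settings['db']  # the MotorClient created after the fork
        # 'statuses' and the query are only examples
        doc = yield motor.Op(db.statuses.find_one, {'name': 'ping'})
        self.write(str(doc))
        self.finish()
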
asked Mar 25 '14 by ivica
2 Answers

This exception is raised when debug mode is enabled on tornado.web.Application: debug=True turns on the autoreloader, which starts using the IOLoop as soon as the application is constructed, so the singleton already exists before the fork (autoreload is in any case incompatible with multi-process mode).

application = tornado.web.Application([
    (r"/", hello),
],
debug=False)

Set debug to False to fix this problem.

You can then pre-fork several worker processes that all accept connections on the single port bound before the fork (start(0), as in the question, forks one process per CPU core):

server = tornado.httpserver.HTTPServer(application)
server.bind(1234)  # bind the shared port before forking
server.start(4)    # fork four worker processes
tornado.ioloop.IOLoop.instance().start()
answered by user1941407


It works as expected if I comment out the "core" and "services" imports. Something in one of those modules must be initializing the singleton event loop (perhaps indirectly, e.g. by creating a global AsyncHTTPClient instance). This warning is protecting you from the fact that objects created in the parent process won't work in the children. You'll have to find the places where these objects are being created (unfortunately there aren't good tools for this) and move them after the fork so they are created in the child processes instead, as in the sketch below.
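
A minimal sketch of that advice, assuming the culprit is a module-level AsyncHTTPClient (the settings key and port here are illustrative, not taken from the question's code):

import tornado.httpclient
import tornado.httpserver
import tornado.ioloop
import tornado.web

# Problematic: creating this at import time initializes the IOLoop
# singleton in the parent process, before fork_processes() runs.
# client = tornado.httpclient.AsyncHTTPClient()

if __name__ == "__main__":
    app = tornado.web.Application([])  # routes omitted
    server = tornado.httpserver.HTTPServer(app)
    server.bind(8888)
    server.start(0)  # the fork happens here
    # Safe: each child process creates its own client after the fork
    app.settings['http_client'] = tornado.httpclient.AsyncHTTPClient()
    tornado.ioloop.IOLoop.instance().start()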

answered by Ben Darnell