I ran a benchmark comparing cherrypy (using web.py as the framework) and Tornado, each retrieving a web page from the internet. I have three test cases, using siege to send requests to each server (-c is the number of concurrent users; -t is the test duration). The code is below the test results.
web.py (cherrypy):
siege ip -c20 -t100s    server handled 2747 requests
siege ip -c200 -t30s    server handled 1361 requests
siege ip -c500 -t30s    server handled 170 requests

tornado synchronous:
siege ip -c20 -t100s    server handled 600 requests
siege ip -c200 -t30s    server handled 200 requests
siege ip -c500 -t30s    server handled 116 requests

tornado asynchronous:
siege ip -c20 -t100s    server handled 3022 requests
siege ip -c200 -t30s    server handled 2259 requests
siege ip -c500 -t30s    server handled 471 requests
tornado synchronous < web.py (cherrypy) < tornado asynchronous
I know that an asynchronous architecture can improve the performance of a web server dramatically.
I'm curious about the difference between Tornado's asynchronous architecture and web.py (cherrypy).
I think Tornado's synchronous mode handles requests one by one, but how does cherrypy work? With multiple threads? I didn't see a large increase in memory, so cherrypy seems to handle multiple requests concurrently. How does it avoid blocking the program?
Can I improve the performance of Tornado's synchronous mode without using asynchronous techniques? I think Tornado can do better.
# web.py (cherrypy) version
import web
import tornado.httpclient

urls = (
    '/(.*)', 'hello'
)
app = web.application(urls, globals())

class hello:
    def GET(self, name):
        client = tornado.httpclient.HTTPClient()
        response = client.fetch("http://www.baidu.com/")
        return response.body

if __name__ == "__main__":
    app.run()
# tornado synchronous version
import tornado.httpserver  # needed for HTTPServer below
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpclient
from tornado.options import define, options

define("port", default=8000, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        client = tornado.httpclient.HTTPClient()
        response = client.fetch("http://www.baidu.com/")
        self.write(response.body)

if __name__ == '__main__':
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r'/', IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
# tornado asynchronous version
import tornado.httpserver
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpclient
from tornado.options import define, options

define("port", default=8001, help="run on the given port", type=int)

class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        client = tornado.httpclient.AsyncHTTPClient()
        client.fetch("http://www.baidu.com/", callback=self.on_response)

    def on_response(self, response):
        self.write(response.body)
        self.finish()

if __name__ == '__main__':
    tornado.options.parse_command_line()
    app = tornado.web.Application(handlers=[(r'/', IndexHandler)])
    http_server = tornado.httpserver.HTTPServer(app)
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()
To answer question 1...
Tornado is single threaded. If you block the main thread, as you do in your synchronous example, then that single thread cannot do anything until the blocking call returns. This limits the synchronous example to one request at a time.
I am not particularly familiar with web.py, but looking at the source for its HTTP server it appears to be using a threading mixin, which suggests that it is not limited to handling one request at a time. When the first request comes in, it is handled by a single thread. That thread will block until the HTTP client call returns, but other threads are free to handle further incoming requests. This allows for more requests to be processed at once.
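You don't need web.py to see this effect. The standard library's `ThreadingMixIn` illustrates the same pattern: a sketch of the principle (not web.py's actual server code), where each request blocks for 0.5 s but four concurrent requests overlap instead of taking ~2 s serially.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """One worker thread per request, like a threading-mixin server."""
    daemon_threads = True

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.5)  # stand-in for a blocking client.fetch(...)
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = ThreadedHTTPServer(("127.0.0.1", 0), SlowHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

results = []
def fetch():
    results.append(urllib.request.urlopen("http://127.0.0.1:%d/" % port).read())

start = time.monotonic()
workers = [threading.Thread(target=fetch) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - start
server.shutdown()
print("4 concurrent 0.5s requests took %.2fs" % elapsed)
```

Because each blocked thread only holds a small stack while waiting, memory grows modestly, which matches what you observed with cherrypy.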
I suspect that if you emulated this with Tornado, e.g. by handing off the HTTP client requests to a thread pool, you'd see similar throughput.