
Why use nginx with Catalyst/Plack/Starman?

I am trying to deploy my little Catalyst web app using Plack/Starman. All the documentation seems to suggest I want to use this in combination with nginx. What are the benefits of this? Why not use Starman straight up on port 80?

asked May 30 '10 by Eric Johnson


3 Answers

It doesn't have to be nginx in particular, but you want some kind of frontend server proxying to your application server for a few reasons:

  1. So that you can run the Catalyst server on a high port, as an ordinary user, while running the frontend server on port 80.

  2. To serve static files (ordinary resources like images, JS, and CSS, as well as any sort of downloads you might want to use X-Sendfile or X-Accel-Redirect with) without tying up a Perl process for the duration of the download. (A sample nginx config follows this list.)

  3. It makes things easier if you want to move on to a more complicated config involving e.g. Edge Side Includes, or having the webserver serve directly from memcached or mogilefs (both things that nginx can do), or a load-balancing / HA config.
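To make points 1 and 2 concrete, here is a minimal sketch of such a frontend config for nginx; the port, server name, and paths are assumptions for illustration, not anything from the original setup:

    # Starman started as an ordinary user on a high port, e.g.:
    #   starman --listen 127.0.0.1:5000 --workers 5 myapp.psgi
    server {
        listen 80;
        server_name example.com;

        # Serve static assets directly, without touching a Perl worker
        location /static/ {
            root /home/myuser/myapp/root;   # assumed document root
            expires 30d;
        }

        # Everything else is proxied to the Starman backend
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }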

answered by hobbs


I asked this question on #plack and got the following response from @nothingmuch (I added formatting):

With nginx you can set up load-balancing/failover type stuff. If the site is small/simple it might be overkill.

I don't know of any disadvantages Starman might have. Perhaps if you get many hits on static files, nginx would use less CPU/memory to handle them, but it's unlikely to be significant in a typical web app. Big downloads might tie up Starman workers, though. (Perhaps not, with sendfile.) That's about all I can think of.

...A failover setup can be nice if you want to do upgrades with no downtime. ("Fail" the old version.)
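To illustrate the sendfile point: with nginx in front, the application can hand off big downloads by returning an X-Accel-Redirect header, and nginx streams the file itself so the Starman worker is freed immediately. A hedged sketch, where the /protected/ location and the download directory are assumptions of my own:

    # The Catalyst/PSGI app responds with a header such as
    #   X-Accel-Redirect: /protected/report.pdf
    # and nginx serves the file from this internal location,
    # so no Perl worker is tied up for the transfer.
    location /protected/ {
        internal;                               # not reachable directly by clients
        alias /home/myuser/myapp/downloads/;    # assumed file location
    }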

answered by Eric Johnson


Another reason is that a lightweight frontend server (even Apache is fine) consumes far less memory per connection than a typical Starman process: a couple of MB versus tens or even hundreds of MB. Since a connection stays open for some time, especially if you want to use keep-alive connections, the frontend lets you support a large number of simultaneous connections with much less RAM. Just make sure the proxying frontend's buffers are large enough to absorb a typical HTTP response from the backend at once; the backend is then immediately free to process the next request.
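A sketch of the buffering part, with made-up sizes that simply need to be big enough to hold a typical response from the backend:

    # nginx buffers the backend response and trickles it out to slow or
    # keep-alive clients, so the Starman worker is freed right away.
    location / {
        proxy_pass http://127.0.0.1:5000;  # assumed Starman address
        proxy_buffering on;                # the default, shown for clarity
        proxy_buffer_size 8k;              # response headers + start of body
        proxy_buffers 16 32k;              # up to ~512k of body held in memory
    }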

answered by nwellnhof