What is the optimal --max-requests setting for Starman?

I am running a Dancer (v1.3202) app with Starman (v0.4014) and nginx as a front-end proxy. I am noticing a huge latency spike in my load balancer every couple of hours and wonder if it's the workers reaching their request limit and restarting. The latency goes from a 30ms average to 1000ms or more. I checked MongoDB and there are no long-running queries. What does --max-requests actually do regarding the workers, and what happens when a worker reaches this limit?

asked Oct 28 '16 by MadHacker

1 Answer

What does the --max-requests setting do?

From starman --help:

--max-requests Number of the requests to process per one worker process. Defaults to 1000.

What this means is that each worker will exit after it processes that many requests. The master process will then launch a brand new worker for each worker that exits, maintaining the number of workers according to the --workers setting.
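
For illustration, a typical invocation might look like this (the app filename and the numbers are placeholders; --listen, --workers, and --max-requests are real Starman options):

    # Run 5 workers; each worker exits and is replaced by the master
    # after it has served 1000 requests
    starman --listen :5000 --workers 5 --max-requests 1000 myapp.psgi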

Using --max-requests is usually a good thing, especially if your app isn't the only thing running on the box, because perl (notoriously) does not return memory to the operating system once it has allocated it. Recycling worker processes is the way starman can give memory back for other processes to use. If your app actually leaks memory, this can also help keep it running with good performance, as opposed to the app eventually consuming all the memory and being killed by the OS.
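
If you suspect memory is involved, a quick sanity check (a sketch; adjust the grep pattern if your process titles differ) is to watch the worker PIDs and their resident memory over time, and see whether fresh PIDs with a smaller RSS show up around the time of your latency spikes:

    # List starman processes with resident memory (KB) and elapsed uptime
    ps -eo pid,rss,etime,args | grep '[s]tarman'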

What is the optimal value for the --max-requests setting?

You should leave it at its default value of 1,000 unless you have a good reason to change it. If your app is the only thing running on the box and you're sure it doesn't leak, you could try a higher value to recycle workers less often. If you know your app is leaky, you may want a lower value to recycle workers more often. In general, though, this setting should have very little impact on performance.

That said, recycling workers could be responsible for intermittent slow requests if your workers cache things in memory, because a freshly started worker has to spend time rebuilding those caches. There are many other possible explanations, though, so you'll need to do some profiling to find out what's really causing the specific slowness you're seeing.
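
As a starting point, one way to observe the spikes from outside the app (a sketch; the URL and sampling interval are assumptions) is to sample response times continuously and compare the timestamps of the spikes with the worker turnover seen in the ps check above:

    # Print the total response time (in seconds) of one request per second
    while true; do
      curl -s -o /dev/null -w '%{time_total}\n' http://localhost:5000/
      sleep 1
    done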

answered Oct 27 '22 by ccm