We have the following configuration file for Unicorn. We're on Rails 3.2.12 and Mongoid 3.1.16. How should we determine how many worker processes to use? Are there other options we could include to boost performance?
Thanks!
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 25
preload_app true

before_fork do |server, worker|
  # A TERM signal indicates the Heroku dyno is shutting down
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end
end

after_fork do |server, worker|
  # A TERM signal indicates the Heroku dyno is shutting down
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end
end
The currently accepted answer has some incorrect information in it.
According to the Heroku docs on dyno size, 2X dynos have a 2X CPU share, not just double the memory.
Regardless, since the number of Unicorn workers you can run is bound by memory footprint, you'll most likely get more performance out of a 2X dyno, where you can make better use of the available memory.
For example, you might be able to run one 2X dyno with 7 unicorn workers vs. two 1X dynos with 3 unicorn workers (6 total). There is also likely some routing overhead when using more dynos.
Via the Heroku docs on optimizing dyno usage: while highly app dependent, there are some rough rules of thumb for how many Unicorn workers can be run on each dyno size.
With that in mind, the most important thing to know for optimizing usage is your total memory footprint. You can enable Heroku's log-runtime-metrics to have actual memory and CPU usage information printed to the Heroku logs. It'll look something like this:
source=web.1 dyno=heroku.2808254.d97d0ea7-cf3d-411b-b453-d2943a50b456 sample#load_avg_1m=2.46 sample#load_avg_5m=1.06 sample#load_avg_15m=0.99
source=web.1 dyno=heroku.2808254.d97d0ea7-cf3d-411b-b453-d2943a50b456 sample#memory_total=21.00MB sample#memory_rss=21.22MB sample#memory_cache=0.00MB sample#memory_swap=0.00MB sample#memory_pgpgin=348836pages sample#memory_pgpgout=343403pages
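If you just want a rough read on peak memory before reaching for an add-on, you can pull the memory_total samples back out of the logs yourself. Here's a minimal sketch (it assumes the log-runtime-metrics labs feature is already enabled; the script name is made up):

# parse_memory_samples.rb -- rough helper, not an official Heroku tool.
# Reads log lines on STDIN and reports the peak sample#memory_total seen,
# since that total is what bounds how many Unicorn workers you can run.
peak = 0.0
STDIN.each_line do |line|
  peak = [$1.to_f, peak].max if line =~ /sample#memory_total=([\d.]+)MB/
end
puts "peak memory_total: #{peak} MB"

You could feed it recent log output with something like heroku logs -n 1500 | ruby parse_memory_samples.rb.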
As mentioned in the article linked above, you can use that in conjunction with the Librato add-on to graph usage over time and get a better sense of your app's peak usage requirements.
The New Relic add-on can be used for a similar purpose.
Hope that helps.
There are two resources you need to run a Rails unicorn worker process: memory and CPU.
Most likely, you will run out of memory before you exhaust the CPU resources on a Heroku dyno. So measure the loaded, in-memory size of your app per Unicorn worker, and you get a rough count of how many workers you can fit with some headroom.
For example, if your app needs about 110 MB per worker (a common footprint for Rails 3.2), you can fit about 4 workers on a single 1X dyno (512 MB).
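To make that arithmetic explicit, here's a back-of-the-envelope sketch. The 512 MB and 1024 MB figures are the 1X and 2X dyno memory limits; the per-worker footprint and headroom are assumptions you'd replace with your own measurements:

# Rough worker-count estimate per dyno size -- a sketch, not a rule.
DYNO_RAM_MB   = { "1X" => 512, "2X" => 1024 }
per_worker_mb = 110  # measured footprint of one loaded worker (example value)
headroom_mb   = 60   # room for the Unicorn master and memory growth

DYNO_RAM_MB.each do |size, ram_mb|
  workers = (ram_mb - headroom_mb) / per_worker_mb  # integer division
  puts "#{size} dyno (#{ram_mb} MB): ~#{workers} workers"
end
# => 1X dyno (512 MB): ~4 workers
# => 2X dyno (1024 MB): ~8 workers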
Heroku provides 2X dynos with more memory and CPU. I do not recommend 2X dynos because they have not delivered 2x performance in our benchmarks.
You can spin up a terminal on a dyno to manually run unicorn and measure the memory usage via:
> heroku run bash
> unicorn -c config/unicorn.rb & # Run unicorn in the background
> ps euf # Read the RSS value for each worker, in KB (e.g. 116040 ≈ 116 MB)
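If you'd rather not eyeball the ps output, a small Ruby one-off run in that same heroku run bash session can average the worker RSS for you (the filename is made up; it assumes the workers show up in ps with "worker" in their unicorn process title):

# worker_rss.rb -- quick sketch for measuring Unicorn worker memory.
rss_kb = `ps -eo rss=,command=`.lines.
  select { |l| l =~ /unicorn.*worker/ }.
  map    { |l| l.split.first.to_i }

if rss_kb.empty?
  puts "no unicorn workers found"
else
  avg_mb = rss_kb.inject(0, :+) / rss_kb.size / 1024
  puts "#{rss_kb.size} workers, ~#{avg_mb} MB RSS each"
end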
You can view your application configuration using:
> heroku config
> heroku config | grep WEB_CONCURRENCY # Filter config output to WEB_CONCURRENCY
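Since the unicorn.rb above reads WEB_CONCURRENCY, you can also change the worker count without a deploy; for example (the value 4 is just an illustration):
> heroku config:set WEB_CONCURRENCY=4 # Updating a config var restarts the dynos with the new value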
EDIT: Heroku posted updated information about dyno sizing three months after I originally answered this.