I have a Puma server running a Ruby on Rails app on an AWS EC2 instance. It worked fine for a while, but a few hours later I found it responding with 502 errors. The app is deployed with Capistrano.
A simple restart of Puma fixed the problem temporarily, but I want to prevent it from happening again, and I'm not sure what to try first.
Here's my Capistrano Puma config:
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock"
set :puma_conf, "#{shared_path}/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma.access.log"
set :puma_error_log, "#{shared_path}/log/puma.error.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'production'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, true
set :puma_preload_app, false
set :bundle_gemfile, -> { release_path.join('Gemfile') }
Puma error log doesn't show any crashes.
The Nginx error log shows (client IP xx'd out):
2016/08/09 06:25:52 [error] 1081#0: *348 connect() to unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: example.com, request: "POST /mypath HTTP/1.1", upstream: "http://unix:///home/deploy/myapp/shared/tmp/sockets/puma.sock:/mypath", host: "example.com"
The "Connection refused" means nothing was accepting connections on the socket at that moment, which points to the Puma process being down rather than merely overloaded.
An answer from this issue on GitHub:
Ok, thanks for the config. That all looks fine, so my guess is you're getting a process crash due to a bad native extension. Since you're in production, I'd suggest setting :puma_workers to at least 2. That will shield you from crashes somewhat, because the surviving worker can keep handling traffic while the crashed one is automatically restarted.
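For reference, a minimal sketch of that change in the Capistrano Puma settings above; the worker count of 2 and the preload setting are illustrative assumptions to tune for your instance's CPU and RAM, not values from the original answer:

# Cluster mode: the Puma master process supervises workers and respawns any that die.
set :puma_workers, 2            # at least 2, so one crashed worker doesn't take the site down
set :puma_preload_app, true     # assumption: preload the app so forked workers share memory
                                # (note: preloading disables Puma's phased restarts)

With :puma_workers at 0, as in the config above, Puma runs as a single process in single mode, so any crash leaves Nginx with nothing to connect to until something restarts Puma, which matches the "Connection refused" errors.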