
How do I make Rails.cache (in-memory cache) work with Puma?

I'm using Rails 5.1. I have application-wide memory_store caching set up with Rails, in my config/environments/development.rb file:

  # Enable/disable caching. By default caching is disabled.
  if Rails.root.join('tmp/caching-dev.txt').exist?
    config.action_controller.perform_caching = true

    config.cache_store = :memory_store
    config.public_file_server.headers = {
      'Cache-Control' => 'public, max-age=172800'
    }
  else
    config.action_controller.perform_caching = true
    config.cache_store = :memory_store
  end

This allows me to do things like

      Rails.cache.fetch(cache_key) do
        msg_data
      end

in one part of my application (a web socket) and access that data in another part of my application (a controller). However, what I'm noticing is that if I start my Rails server with Puma running (i.e. with the below file at config/puma.rb) ...

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

# Specifies the `port` that Puma will listen on to receive requests, default is 3000.
#
port        ENV.fetch("PORT") { 3000 }

# Specifies the number of `workers` to boot in clustered mode.
# Workers are forked webserver processes. If using threads and workers together
# the concurrency of the application would be max `threads` * `workers`.
# Workers do not work on JRuby or Windows (both of which do not support
# processes).
#
workers ENV.fetch("WEB_CONCURRENCY") { 4 }

app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"

# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env

# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"

# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true

# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app

# Use the `preload_app!` method when specifying a `workers` number.
# This directive tells Puma to first boot the application and load code
# before forking the application. This takes advantage of Copy On Write
# process behavior so workers use less memory. If you use this option
# you need to make sure to reconnect any threads in the `on_worker_boot`
# block.
#
# preload_app!

# The code in the `on_worker_boot` will be called if you are using
# clustered mode by specifying a number of `workers`. After each worker
# process is booted this block will be run, if you are using `preload_app!`
# option you will want to use this block to reconnect to any threads
# or connections that may have been created at application boot, Ruby
# cannot share connections between processes.
#
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
end

# Allow puma to be restarted by `rails restart` command.
plugin :tmp_restart

In-memory caching no longer works. In other words,

Rails.cache.fetch(cache_key)

always returns nil. I would like to have a multi-threaded Puma environment (eventually) to gracefully handle many requests, but I'd also like my cache to work. How can I get them both to play together?

Asked Jun 01 '18 by Dave

1 Answer

You can't use memory_store with Puma running in clustered mode (i.e. with multiple workers). It says so right here in the Rails guide. Separate processes can't share memory, so this stands to reason: each worker ends up with its own private cache, and a value written by the web socket's worker simply isn't there when the controller's request lands on a different worker.
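A minimal sketch of why (plain Ruby, no Rails required): each Puma worker is a forked process, and a forked child gets its own copy-on-write copy of memory, so writes in one process are invisible to the others. Here a plain Hash stands in for memory_store:

```ruby
# A Hash standing in for an in-process cache such as memory_store.
cache = {}

# Simulate a Puma worker: fork a child process and write to "its" cache.
pid = fork do
  cache["greeting"] = "hello from the worker"
end
Process.wait(pid)

# The parent's copy is untouched: the write happened in the child's
# separate address space.
puts cache.key?("greeting")  # => false
```

This is exactly the situation in clustered Puma: the fetch that populated the cache and the fetch that reads it run in different forked processes.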

If reducing Puma's workers to 1 is not an option, then consider using Redis or Memcached instead. The documentation in the Rails guide is quite complete in this regard: you'll need to add a gem or two to your Gemfile and update config.cache_store. You will need to install the relevant service on the box, or alternatively there are plenty of hosted service providers that will manage it for you (Heroku Redis, Redis To Go, MemCachier, etc.).
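As a sketch of the Redis route (the URL and env var here are assumptions to adapt): Rails 5.2+ ships a built-in :redis_cache_store, while on the asker's Rails 5.1 you'd pull in the redis-rails gem and use :redis_store instead.

```ruby
# Gemfile
gem "redis"         # Rails 5.2+: enables the built-in :redis_cache_store
# gem "redis-rails" # Rails 5.1: provides :redis_store instead

# config/environments/development.rb
# (assumes Redis on the default local port; point REDIS_URL at a hosted
# provider in production)
# Rails 5.2+:
config.cache_store = :redis_cache_store,
                     { url: ENV.fetch("REDIS_URL", "redis://localhost:6379/1") }
# Rails 5.1 with redis-rails:
# config.cache_store = :redis_store, ENV.fetch("REDIS_URL", "redis://localhost:6379/1")
```

Because Redis is a separate server process, every Puma worker (and every thread within each worker) talks to the same store, so Rails.cache.fetch behaves consistently across the web socket and the controller.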

Answered Sep 28 '22 by gwcodes