I'm currently using a combination of Rails, Docker, and Nginx (both Rails and Nginx run as Docker images). I honestly don't know what's wrong in this case: Rails is serving old, non-existent JavaScript and CSS files in production. It's definitely a cache issue. How do I know? I loaded the previous Docker image (which was working), copied an old asset URL from it, pasted that URL into the latest image, and it worked, even though those files are no longer in the project at all!
I've done some research and haven't found the issue. This is what I've tried:

- docker system prune -a (removing all images and containers)
- Deleted the ./tmp/cache/assets folder from Rails
- RAILS_ENV='production' rails assets:precompile and rails assets:precompile
- RAILS_ENV='production' rails assets:clean and rails assets:clean
- rails assets:clobber and RAILS_ENV='production' rails assets:clobber
- Deleted the public/assets folder
- Set config.assets.cache_store = :null_store

And still nothing.
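Because Rails fingerprints a content digest into each compiled asset's filename, one quick way to confirm staleness is to diff the filenames of the assets Rails just compiled against the copies the web server is actually serving (for example, after pulling both sets out of the containers with docker cp). A minimal sketch; the helper name is hypothetical and the paths are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical helper: compare two asset directories by filename.
# Rails embeds a content digest in each compiled filename, so differing
# name listings mean the two directories hold different asset builds
# (e.g. a fresh precompile vs. whatever the Nginx container is serving).
compare_assets() {
  if diff <(ls "$1" | sort) <(ls "$2" | sort) > /dev/null; then
    echo "assets match"
  else
    echo "assets differ"
  fi
}
```

You would run this against the two extracted directories, e.g. docker cp app:/var/www/app/public/assets ./a and docker cp web:/var/www/app/public/assets ./b, then compare_assets ./a ./b.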
What do I have? Here's the portion of the Nginx config that serves the assets:
# We enable gzip as a compression mechanism.
location ~ ^/(assets|images|javascripts|stylesheets)/ {
  try_files $uri @rails;
  access_log off;
  gzip_static on; # to serve pre-gzipped versions
  expires max;
  add_header Cache-Control public;
  add_header Last-Modified "";
  add_header ETag "";
  break;
}
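As an aside, this aggressive caching (expires max with the ETag and Last-Modified headers stripped) is only safe because Rails fingerprints every asset filename, which means Nginx must always see the freshly compiled files. A hedged sketch of the same block reading from a directory shared with the Rails container; the /var/www/app/public path is an assumption, adjust to your layout:

```nginx
location ~ ^/assets/ {
  # Serve fingerprinted files straight from the directory shared with
  # Rails, falling back to the Rails app if a file is missing.
  root /var/www/app/public;
  try_files $uri @rails;
  gzip_static on;
  expires max;
  add_header Cache-Control public;
}
```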
Any ideas? I'm getting a 500 error on the new CSS and JavaScript files.
Edit: One more thing. Rails does render the correct URLs for the newer assets, but requesting them returns a 500 server error.
Edit x2 (added Docker Compose files). This one is used in development:
# WARNING!! Indentation is important! Be careful how you indent.
# All paths that point to the actual disk (not the Docker image)
# are relative to the location of *this* file!
# This is the development version of the file. The production one, the
# one that you need to upload is in ./docker-server/docker-compose.yml.
version: '3'

services:
  db:
    image: mariadb:10.3.5
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: "rootPassword"
      MYSQL_USER: "ruby"
      MYSQL_PASSWORD: "userPassword"
      MYSQL_DATABASE: "dev"
    ports:
      - "3306:3306"
    volumes:
      - db-data:/var/lib/mysql/data
      - ./db/rails_cprint.sql:/docker-entrypoint-initdb.d/rails_cprint.sql:ro
    networks:
      - db

  pma:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - db
    ports:
      - "4000:80"
    networks:
      - db

  app:
    build: .
    depends_on:
      - db
    environment:
      RAILS_ENV: development
      LOGSTASH_HOST: localhost
      SECRET_MYSQL_HOST: 'db'
      SECRET_MYSQL_DATABASE: 'dev'
      SECRET_MYSQL_USERNAME: 'ruby'
      SECRET_MYSQL_PASSWORD: 'userPassword'
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3001 -b '0.0.0.0'"
    stdin_open: true
    tty: true
    links:
      - db
    volumes:
      - "./:/var/www/cprint"
    ports:
      - "3001:3001"
      - "1234:1234"
    expose:
      - "3001"
    networks:
      - elk
      - db

  ipmask:
    build: ./reverse_proxy
    restart: always
    command: "npm run debug"
    ports:
      - "5050:5050"
      - "9229:9229"
    volumes:
      - "./reverse_proxy/:/var/www/cprint"
    networks:
      - db
      - elk
    # Only on development!!
    depends_on:
      - db

# Volumes are the recommended storage mechanism of Docker.
volumes:
  db-data:
    driver: local
  elasticsearch:
    driver: local

networks:
  elk:
    driver: bridge
  db:
    driver: bridge
This is the one used in production:
# This is the production docker-compose.yml file.
# This is a docker compose file that will pull from the private
# repo and will use all the images.
# This will be an equivalent for production.
# The version is super important.
version: '3.2'

services:
  app:
    image: # The private Rails image URL (rails:latest)
    restart: always
    environment:
      RAILS_ENV: production
      RAILS_PRODUCTION_FULL_DEBUG: 'true'
      RAILS_LOG_TO_STDOUT: 'true'
    # https://github.com/docker/compose/issues/1393
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    volumes:
      - /var/www/app
    ports:
      - "5000:5000"
    expose:
      - "5000"
    networks:
      - elk
    links:
      - logstash

  # Uses Nginx as a web server
  # https://stackoverflow.com/questions/30652299/having-docker-access-external-files
  web:
    image: # the private NGINX image URL
    # Runs it in debug
    # command: [nginx-debug, '-g', 'daemon off;']
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    # Maps the SSL at the same exact location in the server.
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      - type: bind
        source: /etc/ssl/certs
        target: /etc/ssl/certs
      - type: bind
        source: /etc/ssl/private/
        target: /etc/ssl/private
      - type: bind
        source: /etc/nginx/.htpasswd
        target: /etc/nginx/.htpasswd
      - type: bind
        source: /etc/letsencrypt/
        target: /etc/letsencrypt/
    ports:
      - "80:80"
      - "443:443"
    networks:
      - elk
      - nginx
    links:
      - elasticsearch
      - kibana

  # Defining the ELK stack!
  # If you're moving servers, check the vm.max_map_count issue.
  # https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.3
    restart: always
    container_name: elasticsearch
    networks:
      - elk
    # Default config from elastic.co
    environment:
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    # Default config from elastic.co
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"

  logstash:
    image: docker.elastic.co/logstash/logstash:6.2.3
    restart: always
    container_name: logstash
    volumes:
      - ./elk/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./elk/logstash/pipeline/logstash.conf:/etc/logstash/conf.d/logstash.conf
    command: logstash -f /etc/logstash/conf.d/logstash.conf
    ports:
      - "5228:5228"
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.3
    restart: always
    volumes:
      - ./elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    networks:
      - elk
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  ipmask:
    image: # the private image URL
    command: "npm start"
    restart: always
    environment:
      - "NODE_ENV=production"
    expose:
      - "5050"
    ports:
      - "5050:5050"
    links:
      - app
    networks:
      - nginx

# Volumes are the recommended storage mechanism of Docker.
volumes:
  elasticsearch:
    driver: local
  rails:
    driver: local

networks:
  elk:
    driver: bridge
  nginx:
    driver: bridge
Ruby Dockerfile:
# Main Dockerfile that contains the Rails application.
# https://docs.docker.com/compose/rails/#define-the-project
FROM ruby:2.5.0
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs vim
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
WORKDIR $RAILS_ROOT
COPY Gemfile ./
COPY Gemfile.lock ./
RUN bundle install
COPY . .
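One way to guarantee that an image always ships assets matching its code is to precompile them during the image build itself, so stale assets can never outlive a rebuild. A hedged sketch of a line you could append to the Dockerfile above; the dummy SECRET_KEY_BASE is an assumption to let the task boot without real production secrets, adjust to your credential setup:

```dockerfile
# Precompile assets at build time so the image always contains assets
# that match the code it was built from. SECRET_KEY_BASE=dummy is only
# there so the rake task can boot without real production secrets.
RUN RAILS_ENV=production SECRET_KEY_BASE=dummy bundle exec rails assets:precompile
```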
Note: I have stripped all sensitive information from it.
Edit x3: I found the problem and I'm looking into solutions. I dug into Nginx's Docker image and saw Rails' public folder listed there. I opened it and found that its assets are the old ones. I'll post back once I find the correct solution.
Edit x4: The error response is just the standard Nginx 500 error page.
Found the solution!
TL;DR
Rails was not the culprit, nor Docker... it was me (Figures 🙄). The problem was that I manually copied the public folder when building the Docker image for the Nginx container, but I never mapped it out as a shared volume between Rails and Nginx.
I forgot to post my Nginx Dockerfile. It contained a line that said:
# copy over static assets
COPY public public/
This copied Rails' public folder into the Nginx Docker image. The caveat is that it only ran when I rebuilt that image, and since no changes had been made to Nginx itself, the image was never rebuilt and kept serving the stale assets frozen inside it!
The fix was to create a shared volume in the docker-compose.yml
between Rails and Nginx:
# Some lines are omitted
services:
  app:
    image: rails:latest
    restart: always
    environment:
      RAILS_ENV: production
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -e production -p 5000 -b '0.0.0.0'"
    volumes:
      - public-files:/var/www/app/public

  web:
    image: nginx:latest
    # Runs it in debug
    # command: [nginx-debug, '-g', 'daemon off;']
    depends_on:
      - elasticsearch
      - kibana
      - app
      - ipmask
    restart: always
    # Maps the SSL at the same exact location in the server.
    volumes:
      # https://stackoverflow.com/a/48800695/1057052
      # - "/etc/ssl/:/etc/ssl/"
      # We need to map this so Nginx can read the public files
      - public-files:/var/www/app/public:ro

# The named volume must also be declared at the top level:
volumes:
  public-files: