
Nginx Reload Configuration Best Practice

Tags:

nginx

I am currently setting up an nginx reverse proxy that load-balances a wide variety of domain names.

The nginx configuration files are programmatically generated and may change very often (i.e. adding or deleting http/https servers).

I am using:

nginx -s reload

To tell nginx to re-read the configuration.

The main nginx.conf file contains an include of all the generated configuration files, as such:

http {
  include /volumes/config/*/domain.conf;
}

An included configuration file might look like this:

server {
  listen 80;
  listen [::]:80;
  server_name mydomain.com;
  location / {
    try_files $uri /404.html /404.htm =404;
    root /volumes/sites/mydomain;
  }
}

My question:

Is it healthy or considered harmful to run:

nginx -s reload

multiple times per minute to notify nginx of modifications to the configuration? What kind of performance hit would that imply?

EDIT: I'd like to reformulate the question: How can we make it possible to dynamically change the configuration of nginx very often without a big performance hit?

asked Dec 08 '16 by Crappy

2 Answers

I would use inotifywatch with a timeout on the directory containing the generated conf files and reload nginx only if something was modified/created/deleted in said directory during that time:

-t, --timeout
Listen only for the specified amount of seconds. If not specified, inotifywatch will gather statistics until receiving an interrupt signal by (for example) pressing CONTROL-C at the console.

while true; do
    # inotifywatch blocks for up to 30 s; its output contains the word "filename"
    # (the header of its statistics table) only if at least one modify/create/delete
    # event occurred in the watched directory during that window.
    if [[ "$(inotifywatch -e modify,create,delete -t 30 /volumes/config/ 2>&1)" =~ filename ]]; then
        service nginx reload;
    fi;
done

This way you set up a minimum interval between reloads, and you don't miss any changes between successive inotifywatch calls.
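If you want this loop to run unattended and survive reboots, one option is to wrap it in a small systemd unit. A minimal sketch, assuming the loop above is saved as a script (the unit name and script path below are hypothetical):

# Hypothetical unit: /etc/systemd/system/nginx-conf-watch.service
[Unit]
Description=Reload nginx when generated config files change
After=nginx.service

[Service]
# The script is assumed to contain the inotifywatch loop shown above
ExecStart=/usr/local/bin/nginx-conf-watch.sh
Restart=always

[Install]
WantedBy=multi-user.target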

answered Sep 24 '22 by alindt

If you

  1. Use a script similar to what's provided in this answer; let's call it check_nginx_confs.sh
  2. Change the ExecStart directive in nginx.service so nginx reads its configuration from /dev/shm/nginx/ instead of /etc/nginx/ (a drop-in sketch follows this list)
  3. Add a script to /etc/init.d/ to copy conf files to your temp dir: mkdir /dev/shm/nginx && cp /etc/nginx/* /dev/shm/nginx
  4. Use rsync (or another sync tool) to sync /dev/shm/nginx back to /etc/nginx, so you don't lose config files created in /dev/shm/nginx on reboot. Or simply manage both locations from within your app, for atomic checks as desired
  5. Set a cron job to run check_nginx_confs.sh about as often as files 'turn old' in check_nginx_confs.sh, so you know whether a change happened within the last time window while only checking once
  6. Only run systemctl reload nginx if check_nginx_confs.sh finds a new file, at most once per time period defined by $OLDTIME
  7. Rest
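
As a rough sketch of step 2 above (the drop-in path, nginx binary location, and -c flag are assumptions based on a stock systemd nginx package, not details from this answer):

# Hypothetical drop-in: /etc/systemd/system/nginx.service.d/ram-conf.conf
[Service]
# Clear the packaged ExecStart, then start nginx against the tmpfs copy of the config
ExecStart=
ExecStart=/usr/sbin/nginx -c /dev/shm/nginx/nginx.conf

A systemctl daemon-reload is needed after adding the drop-in.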

Now nginx will load those configs much, much faster, since they are read from RAM. It will only reload once every $OLDTIME seconds, and only if it needs to. Short of routing requests to a dynamic handler of your own, this is probably the fastest you can get nginx to reload frequently.

It's a good idea to reserve a certain disk quota for the temp directory you use, to ensure you don't run out of memory. There are various ways of accomplishing that; one is sketched below. You can also add a symlink to an empty on-disk directory in case you have to spill over, but that would be a lot of confs.
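One minimal way to cap the size of the tmpfs copy (the 64m value is illustrative, not from this answer):

# Give the config directory its own size-capped tmpfs
mkdir -p /dev/shm/nginx
mount -t tmpfs -o size=64m,mode=0755 tmpfs /dev/shm/nginx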

Script from other answer:

#!/bin/sh

# Directory to watch
TESTDIR=/dev/shm/nginx
# How many seconds before the dir is deemed "old"
# (75 = 60 s cron interval plus a little grace period, optional)
OLDTIME=75

# Get the current time and the directory's last-modification time
CURTIME=$(date +%s)
FILETIME=$(date -r $TESTDIR +%s)
TIMEDIFF=$(expr $CURTIME - $FILETIME)

# Reload only if the dir was updated within the last $OLDTIME seconds
if [ $OLDTIME -gt $TIMEDIFF ]; then
   systemctl reload nginx
fi

# Run me every 1 minute with cron
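For step 5, the cron entry might simply be the following (the script location is an assumption):

# Run the staleness check every minute
* * * * * /usr/local/bin/check_nginx_confs.sh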

Optionally, if you're feeling up to it, you can put the copy and sync commands in nginx.service's ExecStart with some && magic so they always happen together. You can also && a sort of 'destructor function' that does a final sync and frees /dev/shm/nginx on ExecStop. This would replace steps (3) and (4); a rough sketch follows.
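One way that could look, using ExecStartPre/ExecStopPost drop-in directives instead of literal && chains inside ExecStart (paths and file names are assumptions):

# Hypothetical drop-in: /etc/systemd/system/nginx.service.d/ram-conf-sync.conf
[Service]
# "Constructor": populate the tmpfs copy before nginx starts
ExecStartPre=/bin/sh -c 'mkdir -p /dev/shm/nginx && cp -a /etc/nginx/. /dev/shm/nginx/'
# "Destructor": sync back to disk and free the tmpfs copy after nginx stops
ExecStopPost=/bin/sh -c 'rsync -a /dev/shm/nginx/ /etc/nginx/ && rm -rf /dev/shm/nginx'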

As an alternative to cron, you can have a script running a loop in the background with a wait duration. If you do this, you can pass LastUpdateTime back and forth between the two scripts for greater accuracy, since LastUpdateTime + GracePeriod is more reliable. With this, I would still use cron to periodically make sure the loop is still running.
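A bare-bones version of such a loop, without the LastUpdateTime refinement (interval and script path are placeholders):

#!/bin/sh
# Background alternative to the cron entry: re-run the check at a fixed interval
while true; do
    /usr/local/bin/check_nginx_confs.sh
    sleep 60
done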

For reference, on my CentOS 7 images, nginx.service is at /usr/lib/systemd/system/nginx.service

answered Sep 25 '22 by Garet Claborn