I am trying to reverse proxy a Ruby project on GCP with NGINX. My /etc/nginx/sites-available/default file looks like this:
server {
    large_client_header_buffers 4 16k;

    listen 80 default_server;
    #server_name my-devops-staging.com
    listen [::]:80 default_server;
    #return 301 https://$host$request_uri;

    # SSL configuration
    #
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    ssl_certificate /etc/nginx/certificate.crt;
    ssl_certificate_key /etc/nginx/key.key;
    ssl off;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    location / {
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Fix the "It appears that your reverse proxy set up is broken" error.
        proxy_pass http://localhost:8080;
        # proxy_read_timeout 300;
        proxy_read_timeout 9000;

        proxy_request_buffering off;
        proxy_buffering off;
        proxy_redirect off;
    }
}
What could I be doing wrong? Whenever I run
$ sudo service nginx restart
I get these errors in the error.log:
2018/03/27 08:32:50 [error] 2959#2959: *64 upstream prematurely closed connection while reading response header from upstream, client: 130.211.2.175, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "my-devops-staging.com"
2018/03/27 08:32:53 [error] 2959#2959: *66 upstream prematurely closed connection while reading response header from upstream, client: 130.211.2.87, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "my-devops-staging.com"
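Side note: that error means nginx did connect to the upstream on port 8080, but the backend closed the connection before returning complete response headers. A quick sanity check (assuming the Rails app is supposed to be listening on 127.0.0.1:8080 on the same VM) is to bypass nginx and hit it directly:
$ curl -v http://127.0.0.1:8080/
$ sudo ss -tlnp | grep 8080
If the curl also hangs or fails, the problem is in the app server itself rather than in the nginx config.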
Make the communication between your proxy and backend more reliable by adding these parameters to your proxy's Nginx config file:
location / {
    proxy_http_version 1.1; # you need to set this in order to use the params below
    proxy_temp_file_write_size 64k;
    proxy_connect_timeout 10080s;
    proxy_send_timeout 10080;
    proxy_read_timeout 10080;
    proxy_buffer_size 64k;
    proxy_buffers 16 32k;
    proxy_busy_buffers_size 64k;
    proxy_redirect off;
    proxy_request_buffering off;
    proxy_buffering off;
    proxy_pass <whatever_here>;
}
The magic numbers are taken from a production environment that works for us. You may want to adjust them to fit your environment, number of connections, etc.
I hope this helps.
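After editing the config, it is worth validating it and reloading rather than restarting (assuming the Debian/Ubuntu layout used above):
$ sudo nginx -t
$ sudo service nginx reload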
You are actually timing out when connecting to your Rails application. You can lower that proxy_read_timeout back to 300 and raise your proxy_connect_timeout to match it, or higher if it keeps happening. Just add those lines in your location / {...} block:
location / {
    proxy_set_header Host $server_name;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    proxy_pass http://localhost:8080;
    proxy_read_timeout 300;    # Reducing this
    proxy_connect_timeout 300; # Adding this

    proxy_request_buffering off;
    proxy_buffering off;
    proxy_redirect off;
}
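To confirm the change actually helps, you can watch the error log while reproducing the request (paths assume the default Debian/Ubuntu log location):
$ sudo tail -f /var/log/nginx/error.log
and, in another shell on the same machine:
$ curl -v http://localhost/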