A 504 error means nginx has waited too long for a response from an upstream server and has timed out. There can be multiple reasons for the problem. Possible fixes include increasing the nginx proxy_read_timeout from its default of 60 seconds to something longer, for example 10 minutes.
proxy_connect_timeout: this defines a timeout for establishing a connection with the proxied server. The default value is 60 seconds, and this timeout typically cannot exceed 75 seconds.
Nginx is an open-source web server that can also serve as a reverse proxy. Apart from hosting websites, it's one of the most widely used reverse proxy and load-balancing solutions.
You can probably add a few more lines to increase the timeout period to the upstream. The example below sets the timeouts to 300 seconds:
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
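These four directives can live in the http, server, or location context. A minimal sketch of a server block, assuming a placeholder backend on 127.0.0.1:8080:

server {
    listen 80;

    location / {
        proxy_connect_timeout 300;  # connecting to the upstream
        proxy_send_timeout 300;     # sending the request to the upstream
        proxy_read_timeout 300;     # reading the response from the upstream
        send_timeout 300;           # transmitting the response to the client
        proxy_pass http://127.0.0.1:8080;
    }
}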
Increasing the timeout likely won't solve your issue since, as you say, the actual target web server is responding just fine.
I had this same issue and found it had to do with not using keep-alive on the connection. I can't actually say why, but clearing the Connection header solved the issue and the request was proxied just fine:
server {
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://localhost:5000;
    }
}
Have a look at these posts, which explain it in more detail:
nginx close upstream connection after request
Keep-alive header clarification
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
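For reference, the keepalive directive from the linked nginx docs pairs the same two proxy_* lines with an upstream block. A minimal sketch, where the upstream name backend_app and port 5000 are placeholders:

upstream backend_app {
    server 127.0.0.1:5000;
    keepalive 16;  # number of idle keep-alive connections to retain per worker process
}

server {
    location / {
        proxy_http_version 1.1;          # keep-alive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # clear the Connection header passed from the client
        proxy_pass http://backend_app;
    }
}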
user2540984, as well as many others, has pointed out that you can try increasing your timeout settings. I faced a similar issue and tried changing my timeout settings in the /etc/nginx/nginx.conf file, as almost everyone in these threads suggests. This, however, did not help me a single bit; there was no apparent change in NGINX's timeout behavior. After many hours of searching, I finally managed to solve my issue.
The solution lies in this forum thread, and what it says is that you should put your timeout settings in /etc/nginx/conf.d/timeout.conf (and if this file doesn't exist, you should create it). I used the same settings as suggested in the thread:
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
This might not be the solution to your particular problem, but if anyone else notices that the timeout changes in /etc/nginx/nginx.conf don't do anything, I hope this answer helps!
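For what it's worth, this likely works because the stock nginx.conf on most distributions pulls everything under conf.d into the http context with an include directive like this:

http {
    # ...
    include /etc/nginx/conf.d/*.conf;  # timeout.conf is loaded here
}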
If you want to increase or add a time limit for all sites, add the lines below to the http section of /usr/local/etc/nginx/nginx.conf or /etc/nginx/nginx.conf:
fastcgi_read_timeout 600;
proxy_read_timeout 600;
If the above lines don't already exist in the conf file, add them; otherwise, increase fastcgi_read_timeout and proxy_read_timeout to make sure that nginx and php-fpm do not time out.
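Placed in context (other directives elided), the http section would look roughly like this:

http {
    # ...
    fastcgi_read_timeout 600;  # timeout for reading a response from PHP-FPM
    proxy_read_timeout 600;    # timeout for reading a response from a proxied server
}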
To increase the time limit for only one site, edit its config file, e.g. with vim /etc/nginx/sites-available/example.com:
location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_read_timeout 300;
}
After adding these lines to the conf file, don't forget to reload PHP-FPM and nginx:
service php7-fpm reload
service nginx reload
Or, if you're using Valet, simply run valet restart.
You can also face this situation if your upstream server uses a domain name and its IP address changes (e.g., your upstream points to an AWS Elastic Load Balancer).
The problem is that nginx will resolve the IP address once, and keep it cached for subsequent requests until the configuration is reloaded.
You can tell nginx to use a name server to re-resolve the domain once the cached entry expires:
location /mylocation {
    # use Google DNS to re-resolve the host once the cached IP expires
    resolver 8.8.8.8;
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}
The docs on proxy_pass explain why this trick works:
Parameter value can contain variables. In this case, if an address is specified as a domain name, the name is searched among the described server groups, and, if not found, is determined using a resolver.
Kudos to "Nginx with dynamic upstreams" (tenzer.dk) for the detailed explanation, which also contains some relevant information on a caveat of this approach regarding forwarded URIs.
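As an aside, if you'd rather have nginx cache the DNS answer for a fixed period instead of honoring the record's TTL, the resolver directive accepts a valid= parameter. A sketch; the 30s value is an arbitrary choice:

location /mylocation {
    resolver 8.8.8.8 valid=30s;  # re-resolve at most every 30 seconds
    set $upstream_endpoint http://your.backend.server/;
    proxy_pass $upstream_endpoint;
}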
Had the same problem. It turned out to be caused by iptables connection tracking on the upstream server. After removing --state NEW,ESTABLISHED,RELATED from the firewall script and flushing the tracking table with conntrack -F, the problem was gone.
NGINX itself may not be the root cause.
If the "minimum ports per VM instance" setting on the NAT gateway that stands between your NGINX instance and the proxy_pass destination is too small for the number of concurrent requests, it has to be increased.
Solution: increase the number of available ports per VM on the NAT gateway.
Context: in my case, on Google Cloud, a reverse proxy NGINX was placed inside a subnet, behind a NAT gateway. The NGINX instance was forwarding requests to a domain associated with our backend API (upstream) through the NAT gateway.
This documentation from GCP will help you understand how NAT is relevant to the NGINX 504 timeout.