I'm having a hard time configuring nginx to act as a proxy for a public S3 endpoint. My use case requires altering the status code of the S3 response while preserving the response payload.
The status codes returned by S3 include 200 and 403. For my use case, I need to map both of those status codes to 503.
I have tried the following which does not work:
location ~* ^/.* {
[...]
proxy_intercept_errors on;
error_page 200 =503 $upstream_http_location;
}
Nginx outputs the following error:
nginx: [emerg] value "200" must be between 300 and 599 in /etc/nginx/nginx.conf:xx
Here's a more complete snippet:
server {
listen 80;
location ~* ^/.* {
proxy_http_version 1.1;
proxy_method GET;
proxy_pass http://my-s3-bucket-endpoint;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header Connection "";
proxy_set_header Host my-s3-bucket-endpoint;
proxy_set_header Authorization '';
proxy_hide_header x-amz-id-2;
proxy_hide_header x-amz-request-id;
proxy_hide_header Set-Cookie;
proxy_ignore_headers "Set-Cookie";
proxy_cache S3_CACHE;
proxy_cache_valid 200 403 503 1h;
proxy_cache_bypass $http_cache_purge;
add_header X-Cached $upstream_cache_status;
proxy_intercept_errors on;
error_page 200 =503 $upstream_http_location;
}
}
Is it possible to achieve what I need with nginx?
I found a more or less suitable solution. It's a bit hackish but it works.
The key was to set the index document of my S3 bucket to a non-existing filename. This causes requests to / on the S3 bucket endpoint to result in 403.
Since the nginx proxy maps all incoming requests to / on the S3 bucket endpoint, the result is always 403 which the nginx proxy can intercept. From there, the error_page directive tells it to respond by requesting a specific document (in this case error.json) in the S3 bucket endpoint and use 503 as the response status code.
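For reference, the bucket's index document can be pointed at a non-existing key through the S3 static website configuration. A sketch using the AWS CLI (the bucket name and key below are placeholders, not values from my setup):

```shell
# Point the bucket's website index document at a key that does not exist,
# so that GET / on the website endpoint returns 403 instead of 200.
# "my-s3-bucket" and "does-not-exist.html" are placeholder names.
aws s3 website s3://my-s3-bucket --index-document does-not-exist.html
```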
location ~* ^/.* {
proxy_intercept_errors on;
error_page 403 =503 /error.json;
}
This solution involves two requests being sent to the S3 bucket endpoint (/ and /error.json), but at least both responses appear to be cached by the proxy_cache configuration in the more complete snippet above.
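Putting the pieces together, here is a condensed sketch of the resulting proxy block (the bucket endpoint, cache zone name, and error.json key are carried over from the snippets above and should be adapted to your setup):

```nginx
server {
    listen 80;

    location ~* ^/.* {
        proxy_http_version 1.1;
        proxy_method GET;
        proxy_pass http://my-s3-bucket-endpoint;
        proxy_set_header Host my-s3-bucket-endpoint;

        # Cache both the initial 403 and the remapped 503 responses.
        proxy_cache S3_CACHE;
        proxy_cache_valid 200 403 503 1h;

        # Every request maps to / on the bucket, which returns 403
        # because the index document points at a non-existing key.
        # Intercept that 403, fetch /error.json from the bucket via an
        # internal redirect (it re-enters this location), and send it
        # to the client with status 503.
        proxy_intercept_errors on;
        error_page 403 =503 /error.json;
    }
}
```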