I'm trying to restrict direct access to elasticsearch on port 9200, but allow Nginx to proxy pass to it.
This is my config at the moment:
server {
    listen 80;
    return 301;
}

server {
    listen *:5001;

    location / {
        auth_basic "Restricted";
        auth_basic_user_file /var/data/nginx-elastic/.htpasswd;

        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
    }
}
This almost works as I want it to. I can access my server on port 5001 to hit elasticsearch and must enter credentials as expected.
However, I'm still able to hit :9200 directly and bypass the HTTP authentication, which defeats the point. How can I block access to this port without breaking the nginx proxy? I've tried this:
server {
    listen *:9200;
    return 404;
}
But I get:
nginx: [emerg] bind() to 0.0.0.0:9200 failed (98: Address already in use)
as it conflicts with elasticsearch.
There must be a way to do this! But I can't think of it.
EDIT:
I've edited based on a comment and summarised the question:
I want to lock down <serverip>:9200 and only allow access through port 5001 (which is behind HTTP auth). Port 5001 should proxy to 127.0.0.1:9200, so that elasticsearch is reachable only through 5001. All other access should 404 (or 301, etc.).
Add this to your ES config (elasticsearch.yml) to ensure it only binds to localhost:
network.host: 127.0.0.1
http.host: 127.0.0.1
Then ES is only accessible from localhost and not from the outside world.
Make sure this is really the case using your OS's tools, e.g. on Unix:
$ netstat -an | grep -i 9200
tcp4 0 0 127.0.0.1.9200 *.* LISTEN
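You can also confirm the lockdown end to end from another machine. A quick sketch (user:pass stands for whatever credentials are in your .htpasswd, and <serverip> is your server's public address):

$ curl http://<serverip>:9200/                 # should now fail to connect
$ curl -u user:pass http://<serverip>:5001/    # should return the ES banner JSON through nginx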
In any case, I would lock down the machine with the OS firewall so that only the ports you want are reachable, rather than relying on proper binding alone. Why is this important? Because ES also runs its cluster communication on another port (9300), and attackers might just connect there.
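For example, a minimal sketch with iptables (assuming iptables is your firewall; adapt for ufw or firewalld, and persist the rules with your distro's mechanism):

# allow loopback so nginx can still reach 127.0.0.1:9200
iptables -A INPUT -i lo -j ACCEPT
# block the ES HTTP and transport ports from everyone else
iptables -A INPUT -p tcp --dport 9200 -j DROP
iptables -A INPUT -p tcp --dport 9300 -j DROP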