Running numerous Docker containers right now on a new build for a homelab server and trying to make sure everything is locked down and secure. I use the server for a variety of things, some requiring access from the outside world (Nextcloud) and some that I will only access from my internal network (Plex). Of course the server is behind a router that limits open ports, but I'm looking for additional security - I would like to restrict the containers that I only want to access via the internal network to 192.168.0.0/24. That way, if a port somehow became open on my router, those services would not be exposed (am I being too paranoid?).
Currently docker-compose files are exposing ports via:
....
ports:
  - 8989:8989
....
This of course works fine, but it is accessible to the world should I open the port on my router. I know I can bind to localhost via
....
ports:
  - 127.0.0.1:8989:8989
....
But that doesn't help me when I'm trying to access the container from my internal network. I've read numerous articles about Docker networks and various flags, and also read about a possible iptables solution.
Any guidance is much appreciated.
Thanks,
You can expose a port through your Dockerfile or use --expose, and then publish it with the -P flag. This binds each exposed port to a random port on your Docker host (verify by running docker container ls). To pin the mapping instead, publish with an explicit -p flag such as -p 80:80.
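For instance, a minimal sketch (nginx is just a stand-in image here):

docker run -d --expose 8989 -P nginx   # -P maps every exposed port to a random host port
docker container ls                    # the PORTS column shows e.g. 0.0.0.0:49153->8989/tcp
docker run -d -p 80:80 nginx           # -p pins container port 80 to host port 80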
Published ports: to make a port available to services outside of Docker, or to Docker containers that are not connected to the container's network, use the --publish or -p flag. This creates a firewall rule that maps a container port to a port on the Docker host, making it available to the outside world. Here are some examples.
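For example (nginx again as a stand-in image):

docker run -d -p 8080:80 nginx             # host port 8080 -> container port 80, on all interfaces
docker run -d -p 127.0.0.1:8080:80 nginx   # same mapping, but only reachable from the host itself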
Some standard ports are 80 for webservers, 443 for webservers running over encryption (TLS/SSL), and 25 for mail. So when you want to allow applications to connect to your container, you need to expose one or more ports to the outside world.
Use --network="host" in your docker run command; then 127.0.0.1 in your Docker container will point to your Docker host. Note: this mode only works on Docker for Linux, per the documentation.
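For instance (stand-in image again):

docker run -d --network="host" nginx   # shares the host's network stack; -p/--publish is ignored in this mode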
Simply do not declare any ports in docker-compose; they are automatically visible between containers.
I use an Elasticsearch container in this way, and a separate Kibana container can connect to it by the service name declared in the yml.
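A minimal sketch of that setup (the image tags and environment variables here are illustrative, not from the original post):

services:
  elasticsearch:
    image: elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
  kibana:
    image: kibana:7.17.0
    environment:
      # Kibana reaches Elasticsearch by its compose service name; no ports: needed
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200

Nothing is published to the host or the LAN; the two containers talk over the default compose network, and only a ports: entry would expose anything beyond it.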
"if somehow a port became open on my router, it would not be exposed"
Using this procedure, the ports are never visible outside the Docker environment (where "outside" includes your local network). So if your concern is that the ports get published to your LAN, they do not.
You are actually very close with
ports:
  - 127.0.0.1:8989:8989
as with this it is accessible locally on your server. Funnily enough, your bind-to-localhost trick is exactly what I was looking for in my own setup xD
From this point there are actually a couple of ways to set it up so that you can access it from your local network.
The first one is the one I'm using in my own setup: SSH forwarding.
You can, if you haven't already, set up an ~/.ssh/config file to forward localhost ports to your computer. Taking your example into account, the syntax is as follows:
Host some-hostname
  HostName 192.168.x.x
  User user-of-server
  LocalForward 8989 127.0.0.1:8989
Here some-hostname is a short name you can choose, user-of-server is the actual user you set up to log in with, and 192.168.x.x is the actual local IP address of your server; you can also include an IdentityFile /path/to/ssh/key line. With this you can run ssh some-hostname to SSH into your server from any computer on your local network, and your server will be available at localhost:8989 on that specific computer.
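If you just want a one-off tunnel without editing the config file, the equivalent single command (same user and address assumptions as above) is:

ssh -L 8989:127.0.0.1:8989 user-of-server@192.168.x.x

While that session stays open, the service is reachable at localhost:8989 on the machine you ran it from.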
The second is a reverse proxy like nginx. This too can be run in a Docker container; you could bind it to any port, say for example 6443, and you can mount its config file into the container with
services:
  nginx:
    image: nginx
    volumes:
      - 'config:/etc/nginx/conf.d'
    ports:
      - '6443:443'

volumes:
  config:
    driver: local
    driver_opts:
      type: none
      o: bind
      # bind mounts via driver_opts need an absolute path, hence ${PWD}
      device: '${PWD}/config'
Then in ./config/default.conf you could set up something like
server {
    listen 443 ssl http2;
    server_name 192.168.x.x;

    ssl_certificate /etc/letsencrypt/signed_chain.crt;
    ssl_certificate_key /etc/letsencrypt/domain.key;
    include /etc/nginx/includes/ssl.conf;

    location / {
        ### force timeouts if one of the backends has died ##
        ### such died, many backend, very timeouts ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host:$server_port;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        proxy_buffering off;
        proxy_pass http://127.0.0.1:8989;
    }
}
Then it should be available on 192.168.x.x:6443, and (as long as your router does not forward that port) only from your local network.
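One caveat worth flagging: if nginx itself runs in a container on the default bridge network, then 127.0.0.1 inside that container is the nginx container, not your server, so the proxy_pass above would not reach the app. A sketch of one workaround, assuming Docker Engine 20.10+ (the host.docker.internal name is my addition, not from the original setup):

services:
  nginx:
    extra_hosts:
      # make the host reachable from inside the container
      - 'host.docker.internal:host-gateway'

and then point the proxy at it in default.conf:

proxy_pass http://host.docker.internal:8989;

Alternatively, run the nginx container with network_mode: host, in which case 127.0.0.1 really is the host (and the ports: mapping is no longer needed).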