How to configure Docker port mapping to use Nginx as an upstream proxy?

Tags: docker, nginx

Update II

It's now July 16th, 2015 and things have changed again. I've discovered this automagical container from Jason Wilder: https://github.com/jwilder/nginx-proxy and it solves this problem in about as long as it takes to docker run the container. This is now the solution I'm using to solve this problem.
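For reference, basic usage looks roughly like this (the api image/hostname below just reuse the examples from further down this post; check the nginx-proxy README for the authoritative invocation):

# Run the proxy, giving it read-only access to the Docker socket so it can
# watch containers start and stop and regenerate its nginx config:
sudo docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# Any container started with a VIRTUAL_HOST env var gets proxied automatically:
sudo docker run -d -e VIRTUAL_HOST=api.myapp.com mydockerhub/api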

Update

It's now July of 2015 and things have changed drastically with regard to networking Docker containers. There are now many different offerings that solve this problem (in a variety of ways).

You should use this post to gain a basic understanding of the docker --link approach to service discovery, which is about as basic as it gets, works very well, and actually requires less fancy-dancing than most of the other solutions. It is limited in that it's quite difficult to network containers on separate hosts in any given cluster, and containers cannot be restarted once networked, but it does offer a quick and relatively easy way to network containers on the same host. It's a good way to get an idea of what the software you'll likely be using to solve this problem is actually doing under the hood.
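If you want to see the --link mechanism in isolation before reading the full walkthrough below, here's a minimal sketch (the image and names are placeholders, not something from my actual setup):

# Start a backend container first; it must exist before anything can link to it:
sudo docker run -d --name backend mydockerhub/api

# Link it into a second container under the alias "backend" and look at what
# Docker injected; you should see a "backend" line pointing at the first
# container's IP:
sudo docker run --rm --link backend:backend ubuntu:14.04 cat /etc/hosts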

Additionally, you'll probably want to also check out Docker's nascent networking support, Hashicorp's Consul, Weaveworks' Weave, Jeff Lindsay's progrium/consul & gliderlabs/registrator, and Google's Kubernetes.

There are also the CoreOS offerings that utilize etcd, fleet, and flannel.

And if you really want to have a party you can spin up a cluster to run Mesosphere, or Deis, or Flynn.

If you're new to networking (like me) then you should get out your reading glasses, pop "Paint The Sky With Stars — The Best of Enya" on the Wi-Hi-Fi, and crack a beer — it's going to be a while before you really understand exactly what it is you're trying to do. Hint: You're trying to implement a Service Discovery Layer in your Cluster Control Plane. It's a very nice way to spend a Saturday night.

It's a lot of fun, but I wish I'd taken the time to educate myself better about networking in general before diving right in. I eventually found a couple posts from the benevolent Digital Ocean Tutorial gods: Introduction to Networking Terminology and Understanding ... Networking. I suggest reading those a few times first before diving in.

Have fun!

Original Post

I can't seem to grasp port mapping for Docker containers, specifically how to pass requests from Nginx to another container listening on another port on the same server.

I've got a Dockerfile for an Nginx container like so:

FROM ubuntu:14.04
MAINTAINER Me <[email protected]>

RUN apt-get update && apt-get install -y htop git nginx

ADD sites-enabled/api.myapp.com /etc/nginx/sites-enabled/api.myapp.com
ADD sites-enabled/app.myapp.com /etc/nginx/sites-enabled/app.myapp.com
ADD nginx.conf /etc/nginx/nginx.conf

RUN echo "daemon off;" >> /etc/nginx/nginx.conf

EXPOSE 80 443

CMD ["service", "nginx", "start"]



And then the api.myapp.com config file looks like so:

upstream api_upstream {
    server 0.0.0.0:3333;
}

server {
    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;
}

server {
    listen 443;
    server_name api.myapp.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }
}

And then another for app.myapp.com as well.

And then I run:

sudo docker run -p 80:80 -p 443:443 -d --name Nginx myusername/nginx 

And it all stands up just fine, but the requests are not getting passed through to the other containers/ports. When I ssh into the Nginx container and inspect the logs, I see no errors.

Any help?

Asked by AJB on Jan 13 '15



1 Answer

@T0xicCode's answer is correct, but I thought I would expand on the details since it actually took me about 20 hours to finally get a working solution implemented.

If you're looking to run Nginx in its own container and use it as a reverse proxy to load balance multiple applications on the same server instance, then these are the steps you need to follow:

Link Your Containers

When you docker run your containers, typically by inputting a shell script into User Data, you can declare links to any other running containers. This means that you need to start your containers up in order, and only later containers can link to earlier ones. Like so:

#!/bin/bash
sudo docker run -p 3000:3000 --name API mydockerhub/api
sudo docker run -p 3001:3001 --link API:API --name App mydockerhub/app
sudo docker run -p 80:80 -p 443:443 --link API:API --link App:App --name Nginx mydockerhub/nginx

So in this example, the API container isn't linked to any others, but the App container is linked to API and Nginx is linked to both API and App.

The result of this is changes to the env vars and the /etc/hosts files inside the containers that declared the links (here, the App and Nginx containers). The results look like so:

/etc/hosts

Running cat /etc/hosts within your Nginx container will produce the following:

172.17.0.5  0fd9a40ab5ec
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
172.17.0.3  App
172.17.0.2  API



ENV Vars

Running env within your Nginx container will produce the following:

API_PORT=tcp://172.17.0.2:3000
API_PORT_3000_TCP_PROTO=tcp
API_PORT_3000_TCP_PORT=3000
API_PORT_3000_TCP_ADDR=172.17.0.2

APP_PORT=tcp://172.17.0.3:3001
APP_PORT_3001_TCP_PROTO=tcp
APP_PORT_3001_TCP_PORT=3001
APP_PORT_3001_TCP_ADDR=172.17.0.3

I've truncated many of the actual vars, but the above are the key values you need to proxy traffic to your containers.

To obtain a shell to run the above commands within a running container, use the following:

sudo docker exec -i -t Nginx bash

You can see that you now have both /etc/hosts file entries and env vars that contain the local IP address for any of the containers that were linked. So far as I can tell, this is all that happens when you run containers with link options declared. But you can now use this information to configure nginx within your Nginx container.



Configuring Nginx

This is where it gets a little tricky, and there are a couple of options. You can choose to configure your sites to point to an entry in the /etc/hosts file that docker created, or you can utilize the ENV vars and run a string replacement (I used sed) on your nginx.conf and any other conf files that may be in your /etc/nginx/sites-enabled folder to insert the IP values.



OPTION A: Configure Nginx Using ENV Vars

This is the option that I went with because I couldn't get the /etc/hosts file option to work. I'll be trying Option B soon enough and will update this post with any findings.

The key difference from the /etc/hosts option is that your Dockerfile uses a shell script as the CMD argument, which in turn handles the string replacement that copies the IP values from the ENV vars into your conf file(s).

Here's the set of configuration files I ended up with:

Dockerfile

FROM ubuntu:14.04
MAINTAINER Your Name <[email protected]>

RUN apt-get update && apt-get install -y nano htop git nginx

ADD nginx.conf /etc/nginx/nginx.conf
ADD api.myapp.conf /etc/nginx/sites-enabled/api.myapp.conf
ADD app.myapp.conf /etc/nginx/sites-enabled/app.myapp.conf
ADD Nginx-Startup.sh /etc/nginx/Nginx-Startup.sh

EXPOSE 80 443

CMD ["/bin/bash", "/etc/nginx/Nginx-Startup.sh"]

nginx.conf

daemon off;
user www-data;
pid /var/run/nginx.pid;
worker_processes 1;

events {
    worker_connections 1024;
}

http {

    # Basic Settings

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 33;
    types_hash_max_size 2048;

    server_tokens off;
    server_names_hash_bucket_size 64;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging Settings

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip Settings

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 3;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/xml text/css application/x-javascript application/json;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    # Virtual Host Configs

    include /etc/nginx/sites-enabled/*;

    # Error Page Config
    # error_page 403 404 500 502 /srv/Splash;

}

NOTE: It's important to include daemon off; in your nginx.conf file to ensure that your container doesn't exit immediately after launching.
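An equivalent alternative, if you'd rather not edit nginx.conf, is to pass that directive on the command line from the Dockerfile instead; this isn't what I used here, just another common way to keep nginx in the foreground:

# Run nginx directly with the "daemon off;" directive supplied via -g:
CMD ["nginx", "-g", "daemon off;"]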

api.myapp.conf

upstream api_upstream {
    server APP_IP:3000;
}

server {
    listen 80;
    server_name api.myapp.com;
    return 301 https://api.myapp.com/$request_uri;
}

server {
    listen 443;
    server_name api.myapp.com;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_pass http://api_upstream;
    }
}

Nginx-Startup.sh

#!/bin/bash
sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf
sed -i 's/APP_IP/'"$APP_PORT_3001_TCP_ADDR"'/g' /etc/nginx/sites-enabled/app.myapp.conf

service nginx start

I'll leave it up to you to do your homework about most of the contents of nginx.conf and api.myapp.conf.

The magic happens in Nginx-Startup.sh where we use sed to do string replacement on the APP_IP placeholder that we've written into the upstream block of our api.myapp.conf and app.myapp.conf files.

This askubuntu.com question explains it very nicely: Find and replace text within a file using commands

GOTCHA: On OSX, sed handles options differently, specifically the -i flag. On Ubuntu, the -i flag will handle the replacement 'in place'; it will open the file, change the text, and then 'save over' the same file. On OSX, the -i flag requires a suffix for the backup file sed creates alongside the edit; if you don't want a backup file, you must pass '' (an empty string) as the value for the -i flag.

GOTCHA: To use ENV vars within the expression that sed uses to find the string you want to replace, you need to wrap the var in double quotes. So the correct, albeit wonky-looking, syntax is as above.
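To make both gotchas concrete, here are the Ubuntu and OSX variants of the same replacement side by side (same file and variable as in Nginx-Startup.sh above):

# Ubuntu: in-place edit; the env var is spliced in via double quotes while the
# rest of the expression stays single-quoted:
sed -i 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf

# OSX: -i requires a backup suffix; pass an empty string for no backup file:
sed -i '' 's/APP_IP/'"$API_PORT_3000_TCP_ADDR"'/g' /etc/nginx/sites-enabled/api.myapp.conf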

So docker has launched our container and triggered the Nginx-Startup.sh script, which used sed to replace the placeholder APP_IP with the value of the corresponding ENV variable named in each sed command. We now have conf files within our /etc/nginx/sites-enabled directory that contain the IP addresses docker set in the ENV vars when starting up the container. Within your api.myapp.conf file you'll see the upstream block has changed to this:

upstream api_upstream {
    server 172.17.0.2:3000;
}

The IP address you see may be different, but I've noticed that it's usually in the 172.17.0.x range.

You should now have everything routing appropriately.

GOTCHA: You cannot restart/rerun any containers once you've run the initial instance launch. Docker provides each container with a new IP upon launch and does not seem to re-use any that it has used before. So api.myapp.com will get 172.17.0.2 the first time, but then get 172.17.0.4 the next time. But Nginx will have already set the first IP into its conf files, or into its /etc/hosts file, so it won't be able to determine the new IP for api.myapp.com. The solution to this is likely to use CoreOS and its etcd service which, in my limited understanding, acts like a shared ENV for all machines registered into the same CoreOS cluster. This is the next toy I'm going to play with setting up.
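One way to see the problem for yourself is to ask Docker for a container's current IP after restarting it; the address printed will no longer match what was baked into the nginx conf files (container name as in the examples above):

sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' API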



OPTION B: Use /etc/hosts File Entries

This should be the quicker, easier way of doing it: ostensibly you just use the hostname from the /etc/hosts entry in your api.myapp.conf and app.myapp.conf files, but I couldn't get this method to work.

UPDATE: See @Wes Tod's answer for instructions on how to make this method work.

Here's the attempt that I made in api.myapp.conf:

upstream api_upstream {
    server API:3000;
}

Considering that there's an entry in my /etc/hosts file like so: 172.17.0.2 API, I figured it would just pull in the value, but it doesn't seem to.

I also had a couple of ancillary issues with my Elastic Load Balancer sourcing from all AZs, so that may have been the issue when I tried this route. Instead, I had to learn how to handle replacing strings in Linux, so that was fun. I'll give this a try in a while and see how it goes.

Answered by AJB, Sep 20 '22