
Docker Container Virtual Host SSL Configuration

I have the following setup and working properly (I am on Docker 1.6):

[Diagram: a single Apache proxy container routing requests to several application containers]

One Docker container acts as the virtual host proxy for the other web applications running in individual Docker containers. (I should add that I am not a whiz at configuring servers, or networks for that matter.)

I have been trying to add SSL to the setup, but with little success. Each container mounts the file directory on the host for the certificates. For example, to run a container once I use the following:

docker run -d -P --name build \
    -v /home/applications/src/ssl-cert:/etc/ssl/certs \
    -e "DBL=mysql:dbname=build;host=192.168.0.1;port=3306" \
    -e "DB_USER=foo" -e "DB_PASS=bar" \
    --link mysql56:mysql \
    --add-host dockerhost:`/sbin/ip addr | grep 'eth0' | grep 'inet' | cut -d'/' -f1 | awk '{print $2}'` \
    -p 8001:80 -p 4431:443 \
     repos/build:latest

If I attempt to connect to https://build.example.com I get certificate errors and cannot connect. The container's Apache configuration has the appropriate configuration in default-ssl.conf for the certificate files (which works if this is a stand-alone instance):

<VirtualHost _default_:443>
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www/html/

    # Enable/Disable SSL for this virtual host.
    SSLEngine on

    SSLProtocol all -SSLv2 -SSLv3
    SSLHonorCipherOrder On
    SSLCipherSuite ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS

    SSLCertificateFile /etc/ssl/certs/build.crt
    SSLCertificateKeyFile /etc/ssl/certs/build.key
    SSLCACertificateFile /etc/ssl/certs/digicert/digicertca.crt

    #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
    <FilesMatch "\.(cgi|shtml|phtml|php)$">
        SSLOptions +StdEnvVars
    </FilesMatch>
    <Directory /usr/lib/cgi-bin>
        SSLOptions +StdEnvVars
    </Directory>

    BrowserMatch "MSIE [2-6]" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
    # MSIE 7 and newer should be able to use keepalive
    BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

</VirtualHost>

Then I attempt to run the following for the proxy container:

docker run -it -d -P --name apache_proxy \
    -v /home/applications/src/ssl-cert:/etc/ssl/certs \
    -p 8000:80 -p 443:443 \
    repos/apache-proxy:latest

This container also contains the same default-ssl.conf.

I have tried running this in several different configurations:

  • running the SSL config in the Apache proxy container only
  • running the SSL config in the build application container only
  • running the SSL config in both containers

I feel as if I am missing something obvious, but I cannot put my finger on what it would be. Is there something I am missing when it comes to running SSL in a configuration like this?

asked Jan 03 '23 by Jay Blanchard

1 Answer

When we want to add SSL to hosts behind a reverse proxy, we can configure the hosts in one of three ways:

  • Edge: The reverse proxy decrypts incoming HTTPS traffic and communicates with the backend servers over plain-text HTTP.
  • Pass-through: The backend servers decrypt all traffic and the reverse proxy simply forwards HTTPS requests to them.
  • Hybrid: The reverse proxy decrypts HTTPS traffic and then re-encrypts traffic bound for the backend servers.

The first option is the easiest to set up—we only need to install certificates and configure SSL on the reverse proxy. The second, "pass-through" approach enables the backend servers to manage their SSL configurations independently, but the reverse proxy is now "blind" because it cannot read the encrypted traffic, which we may want to do for purposes such as logging. We'd use the third, hybrid configuration when the proxy must read traffic but we also do not trust the network between the proxy and the backend servers.

Based on the information in the question, the first option seems the most appropriate because we trust the internal Docker network between the reverse proxy and the backend servers. We can remove the SSL configuration from the backend servers and forward requests from the reverse proxy to their standard HTTP ports.

This setup requires two additional components:

  • Name-based virtual hosts configured on the proxy to forward requests for each backend service.
  • A single certificate that secures all of the backend domain names (as multiple subjects or as a wildcard like *.example.com).
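Before wiring up the proxy, it's worth confirming that the certificate really covers every backend hostname by inspecting its Subject Alternative Names. A minimal sketch—the self-signed certificate generated here is only a stand-in for the real DigiCert one, and `-addext` requires OpenSSL 1.1.1 or newer:

```shell
# Generate a throwaway self-signed certificate that lists both
# backend names as Subject Alternative Names (stand-in for the
# real certificate issued by the CA).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/build.key -out /tmp/build.crt \
    -subj "/CN=example.com" \
    -addext "subjectAltName=DNS:build.example.com,DNS:cicd.example.com"

# Print the SAN extension; every proxied hostname should appear here.
openssl x509 -in /tmp/build.crt -noout -ext subjectAltName
```

Running the same `openssl x509` inspection against the production certificate shows immediately whether a missing name explains the browser's certificate errors.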

Here's an example virtual host configuration section that we can build on:

<VirtualHost *:443>
    ServerName build.example.com
    ProxyRequests Off
    ProxyPreserveHost On 
    ProxyPass / http://build:80/
    ProxyPassReverse / http://build:80/
</VirtualHost>
<VirtualHost *:443>
    ServerName cicd.example.com
    ProxyRequests Off
    ProxyPreserveHost On 
    ProxyPass / http://cicd:80/
    ProxyPassReverse / http://cicd:80/
</VirtualHost>

...and remember to configure the SSL directives in the default virtual host block. If we link the containers or run them on the same Docker network, we can use their container names as hostnames in our httpd.conf as shown above.
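Putting the two pieces together, the proxy's default-ssl.conf might look roughly like the sketch below. The certificate paths are carried over from the question; mod_ssl, mod_proxy, and mod_proxy_http must all be enabled (for example, with `a2enmod ssl proxy proxy_http` on a Debian-based image):

```apache
<VirtualHost *:443>
    ServerName build.example.com

    # Terminate SSL at the proxy (the "edge" configuration).
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/build.crt
    SSLCertificateKeyFile   /etc/ssl/certs/build.key
    SSLCACertificateFile    /etc/ssl/certs/digicert/digicertca.crt

    # Forward decrypted traffic to the backend over plain HTTP.
    ProxyRequests Off
    ProxyPreserveHost On
    ProxyPass        / http://build:80/
    ProxyPassReverse / http://build:80/
</VirtualHost>
```

One such block per backend hostname lets Apache select the correct backend by SNI/Host while a single certificate covers all the names.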


Speaking of networking, the question seems to indicate that we need to take a closer look at Docker networking: I don't see any hints that the containers were configured to talk to each other (the 503 response status supports this assumption). The reverse proxy container must forward requests to each of the backend containers, but it cannot do so unless we link the containers (deprecated) or create an internal, user-defined network for the containers:

$ docker network create build_network
$ docker run --network build_network --name apache_proxy ...
$ docker run --network build_network --name build ...
$ docker run --network build_network --name cicd ...

When we run containers on the same user-defined network, they can resolve the IP addresses of other containers by container name through Docker's internal DNS resolver (or by an alternate hostname if we specify the --hostname argument to docker run). Note also that because each container represents a discrete host, we don't need to increment their port numbers (8001, 8002, etc.). We can use port 80 to serve HTTP traffic from each container on the internal network.
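Once the containers are attached to build_network, we can spot-check name resolution and connectivity from inside the proxy container. This is only a sketch—it assumes the containers from the commands above are running and that the proxy image ships `getent` and `curl`:

```shell
# Resolve the backend container's name via Docker's internal DNS.
docker exec apache_proxy getent hosts build

# Fetch the backend's front page over the internal network;
# a 200 here means the proxy can reach the backend on port 80.
docker exec apache_proxy curl -s -o /dev/null -w "%{http_code}\n" http://build/
```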

            +────────────────── Docker Host ─────────────────+
            │  +────────────── build_network ─────────────+  │
            │  │                                          │  │
  Client ────────────── apache_proxy (:443 → :443)        │  │
            │  │           ├── build (:80)                │  │
            │  │           └── cicd  (:80)                │  │
            │  +──────────────────────────────────────────+  │
            +────────────────────────────────────────────────+
answered Jan 13 '23 by Cy Rossignol