I have the following code that works great in my local environment. However, when I try to run the same code from a Docker container (via Boot2Docker), I simply cannot get to https://[boot2docker_ip]:4000
I tried updating the target value in the code below with all of these options, but none of them did the trick:
target: 'http://localhost:3000',
target: 'http://0.0.0.0:3000',
target: 'http://127.0.0.1:3000',
target: 'http://<boot2docker_ip>:3000',
var fs = require('fs');

// Terminate SSL on port 4000 and forward plain HTTP to the app on port 3000.
require('http-proxy').createProxyServer({
  ssl: {
    key: fs.readFileSync(__dirname + '/certs/ssl.key', 'utf8'),
    cert: fs.readFileSync(__dirname + '/certs/ssl.crt', 'utf8')
  },
  target: 'http://localhost:3000',
  ws: true,   // also proxy WebSocket connections
  xfwd: true  // add X-Forwarded-* headers
}).listen(4000);
I am using the node-http-proxy package from https://github.com/nodejitsu/node-http-proxy.
Here is a Git repo to try out this behavior; I have checked in fake SSL certificates for simplicity.
Dockerfile:
FROM readytalk/nodejs
ADD ./src /app
ADD ./ssl-proxy /proxy
COPY ./run.sh /run.sh
RUN chmod +x /run.sh
EXPOSE 3000
EXPOSE 4000
ENTRYPOINT ["/run.sh"]
run.sh:
#!/bin/sh
/nodejs/bin/node /app/main.js; /nodejs/bin/node /proxy/main.js
I just had a look at your Dockerfile and especially at the run.sh script that you use. This line is from your run.sh script:
/nodejs/bin/node /app/main.js; /nodejs/bin/node /proxy/main.js
What's important to know here is that each of these commands starts a long-running server process that (theoretically) runs forever. This means that the second process (/proxy/main.js) will never start, because the shell waits for the first process to finish. As a result, you cannot access your proxy server: it never starts.
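As a quick illustration of the shell semantics, backgrounding the first process with & would let both servers start; this is only a sketch to show the mechanics, since the approaches below are more robust:

#!/bin/sh
# "a; b" starts b only after a has exited; "a & b" runs a in the
# background so that b can start immediately.
/nodejs/bin/node /app/main.js &
exec /nodejs/bin/node /proxy/main.js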
There are basically two solutions to this that I can think of. Please note, though, that the idiomatic "Docker way" is to run only one process per container.
I'd recommend running your application and the proxy server in two separate containers. You can link those two containers together:
docker run --name app -p 3000 <your-image> /nodejs/bin/node /app/main.js
docker run --name proxy --link app:app -p 4000:4000 <your-image> /nodejs/bin/node /proxy/main.js
The flag --link app:app makes the app container available under the hostname app inside your proxy container (Docker does this by creating an /etc/hosts entry in the container). This means that, inside the proxy container, you can use http://app:3000 to reach your upstream application, as shown in the sketch below.
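With the link in place, the only change needed in your proxy's main.js is the target URL; the rest can stay as in your original code:

var fs = require('fs');
require('http-proxy').createProxyServer({
  ssl: {
    key: fs.readFileSync(__dirname + '/certs/ssl.key', 'utf8'),
    cert: fs.readFileSync(__dirname + '/certs/ssl.crt', 'utf8')
  },
  // "app" is the link alias, resolved via the generated /etc/hosts entry
  target: 'http://app:3000',
  ws: true,
  xfwd: true
}).listen(4000);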
An alternative solution is to use a process manager like Supervisord to run several long-running processes in parallel within the same container. There's a good article on that in the Docker documentation. It basically boils down to the following:
Install Supervisord in your image (e.g., with apt-get install supervisor on Ubuntu). Then create a configuration file (typically at /etc/supervisor/conf.d/yourapplication.conf) in which you configure all of the services that you need to run:
[supervisord]
; run in the foreground so the container keeps running
nodaemon=true

[program:application]
command=/nodejs/bin/node /app/main.js

[program:proxy]
command=/nodejs/bin/node /proxy/main.js
Then use supervisord as your start command, for example by using CMD ["/usr/bin/supervisord"] in your Dockerfile.
In this case, both of your processes run in the same container, and you can use http://localhost:3000 to reach your upstream application.
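For illustration, here is a rough sketch of the corresponding Dockerfile; it assumes a Debian-based image where apt-get is available (which may or may not hold for readytalk/nodejs) and that the Supervisord config above is saved locally as supervisord.conf:

FROM readytalk/nodejs
# Assumption: the base image provides apt-get; otherwise install
# Supervisord by whatever means your image supports.
RUN apt-get update && apt-get install -y supervisor
ADD ./src /app
ADD ./ssl-proxy /proxy
COPY ./supervisord.conf /etc/supervisor/conf.d/yourapplication.conf
EXPOSE 3000
EXPOSE 4000
CMD ["/usr/bin/supervisord"]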