I have implemented an API inside a Docker container and I want to deploy this container to a remote Ubuntu server. How exactly can I do that? My API uses a lot of resources and I have used the MLDB framework to implement it. So far I have found many guides for deploying the API on AWS and DigitalOcean, but since I already have access to a remote Ubuntu server, I don't need those, right? So how can I deploy my container so that someone else is able to test my API? If there is a better way to deploy my API (hopefully for free or at low cost), please let me know.
Thanks in advance.
Setting Up The Remote Host
Remote access requires a TCP socket. Run dockerd (the Docker daemon executable) with the -H flag to define the sockets you want it to bind to. For example, you can bind Docker to both the default Unix socket and port 2375 on the machine's loopback address.
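A minimal sketch of the command that paragraph is describing (the socket addresses are the ones used in the Docker documentation; adjust them to your setup):
sudo dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375
Note that binding the TCP socket to 127.0.0.1 only makes it reachable from the machine itself; for access from other machines you would need to bind a routable address and secure it (for example with TLS), or use the SSH approach described below.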
A remote Docker host is a machine, inside or outside your local network, that is running a Docker Engine and has ports exposed for querying the Engine API. Your application can be deployed on a remote host in several ways.
You can run both Linux and Windows programs and executables in Docker containers. The Docker platform runs natively on Linux (on x86-64, ARM and many other CPU architectures) and on Windows (x86-64). Docker Inc. builds products that let you build and run containers on Linux, Windows and macOS.
To install Docker Engine, you need the 64-bit version of one of these Ubuntu releases:
Ubuntu Jammy 22.04 (LTS)
Ubuntu Impish 21.10
Ubuntu Focal 20.04 (LTS)
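If Docker is not installed on the remote Ubuntu server yet, one quick way to set it up is Docker's convenience script (a sketch; the apt repository method from the official docs works just as well):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Optional: let your user run docker without sudo (log out and back in afterwards).
sudo usermod -aG docker $USER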
Since the release of Docker 18.09.0 this has become a whole lot easier. This release added support for the ssh protocol to the DOCKER_HOST environment variable and the -H argument to docker ... commands, respectively.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"

# Run a container. To prove that we are on remote-host, this will print its hostname.
docker run --rm --net host busybox hostname -f

# All docker commands here will be run on remote-host.

# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker -H "ssh://my-user@remote-host" run --rm --net host busybox hostname -f
Note that this is not yet supported in docker-compose v1.23.1 (the latest version as of writing) and below; however, it will be part of the next release.
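Once you are on a docker-compose version that does support it, the same environment variable should work there as well, along these lines:
export DOCKER_HOST="ssh://my-user@remote-host"
docker-compose up -d   # services are created and started on remote-host
unset DOCKER_HOST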
Set up passwordless SSH on the target machine
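A minimal sketch of that step, reusing the docker user and IP address from the command below (an RSA key is assumed here, but any key type works):
# Generate a key pair locally if you don't have one yet.
ssh-keygen -t rsa -b 4096
# Copy the public key to the target machine.
ssh-copy-id docker@10.123.2.74
# Verify that login now works without a password prompt.
ssh docker@10.123.2.74 'echo ok'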
Run the following command to remotely manage Docker on the target VM (also installs Docker if needed):
docker-machine create --driver generic --generic-ip-address=10.123.2.74 --generic-ssh-user=docker --generic-ssh-key ~/.ssh/id_rsa some_name
You can find more information about the generic driver in the docker-machine documentation.
Then point your local Docker client at the new machine and check that it responds:
eval $(docker-machine env some_name)
docker ps
Now you can run your docker containers exactly as you would locally.
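For example, to make your API testable by others (my-mldb-api and port 8080 are placeholders for your actual image name and the port your API listens on; build or pull the image against this environment first):
# Run the API container on the remote VM and publish its port.
docker run -d --restart unless-stopped -p 80:8080 my-mldb-api
# Anyone can now hit the API at the VM's address, e.g.:
curl http://10.123.2.74/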
PS: If you need to remotely manage a Docker instance running on Windows through Docker Toolbox, things get a little more complicated: you need to open network access to the required ports of the Docker Linux VM (SSH, the Docker Engine, and your container ports), either through VirtualBox's bridged network adapter or through port forwarding, and you also need to deal with Windows firewall rules.