I want to install Python, pip, and virtualenv in an Ubuntu-based Docker container. I create the image through this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y
RUN apt-get install python3 -y
RUN apt-get install python3-pip -y
RUN pip install virtualenv
...
When the build reached this point, it failed with "/bin/sh: 1: pip: not found", but no errors were shown during the installation steps. Does this mean I didn't install pip correctly? Or do I need to do something else before I can use the pip command?
Then I changed the Dockerfile like this:
...
RUN apt-get install python3 -y
RUN apt-get install python3-pip -y
RUN apt-get install python3-virtualenv -y
RUN virtualenv --no-site-packages -p /path/python3 py3env
...
but it still fails with /bin/sh: 1: virtualenv: not found.
I also installed git, and the git clone command ran correctly. Where did I go wrong, and what should I do?
On Debian-based platforms, including Ubuntu, the command installed by python3-pip is called pip3, so that it can peacefully coexist with any system-installed Python 2 and its pip.
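For example, the failing step in the first Dockerfile would then be written as:
RUN pip3 install virtualenv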
Somewhat similarly, the virtualenv command is not installed by the package python3-virtualenv; to get that, you need apt-get install -y virtualenv.
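So the second attempt could be fixed along these lines (a sketch; /usr/bin/python3 is an assumed interpreter path, and --no-site-packages only exists in older virtualenv releases such as the one Ubuntu 16.04 ships; in newer releases it is the default and the flag is gone):
RUN apt-get install -y virtualenv
# adjust the interpreter path to wherever python3 lives in your image
RUN virtualenv --no-site-packages -p /usr/bin/python3 py3env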
Note that venv is included in the Python 3 standard library, so you don't really need to install anything at all:
python3 -m venv newenv
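Each RUN in a Dockerfile starts a fresh shell, so rather than activating the environment you can simply call its executables by their path; a minimal sketch (requests is only an example package):
# on Debian/Ubuntu this needs the python3-venv package installed first
RUN python3 -m venv /newenv
RUN /newenv/bin/pip install requests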
Why would you want a virtualenv inside Docker anyway, though? (There are situations where it makes sense, but in the vast majority of cases you want the Docker container to be as simple as possible, which means installing everything as root and rebuilding the whole container if something needs to be updated.)
As an aside, you generally want to minimize the number of RUN statements. Making many layers while debugging is perhaps defensible, but layers which do nothing are just wasteful. Note also that apt-get can install more than one package at a time:
RUN apt-get update -y && \
    apt-get install -y python3 python3-pip && \
    ...
The && causes the entire RUN sequence to fail as soon as one of the commands fails.
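Putting those pieces together, a minimal version of the whole Dockerfile might look like this (a sketch that skips the virtualenv entirely, as suggested above; requirements.txt is an assumed file listing your packages):
FROM ubuntu:16.04
# one layer: refresh the index and install both packages in a single RUN
RUN apt-get update -y && \
    apt-get install -y python3 python3-pip
# assumed file with the project's dependencies
COPY requirements.txt .
RUN pip3 install -r requirements.txt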
What is the point of using a virtual environment inside a container? Virtualenv is very helpful for local development: it allows you to use different versions of the Python interpreter and packages on a single machine. But your Docker container should run only one process (in fact, a container essentially is a process), so you can install all your requirements globally.
But if you really have strong reasons for it, you should probably use the "python3 way" to create the virtual environment.
So your Dockerfile should look something like this:
FROM ubuntu:16.04
RUN apt-get update -y \
    && apt-get install -y python3 python3-pip python3-venv \
    && python3 -m venv venv
ENTRYPOINT ["/bin/bash"]
You can build it with the command
docker build -t ubuntu-python .
and run it with
docker run --rm -it ubuntu-python
In the container shell, you can activate the venv with
source venv/bin/activate
then run the Python interpreter and check that it is the one from the venv:
>>> import sys
>>> sys.executable
It should print /venv/bin/python.
I don't know how to run a container with a pre-activated (if that is even a word) virtual environment, and I still think you don't actually need a virtual environment in your container.
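That said, a trick which comes close to a pre-activated environment is to prepend the venv's bin directory to PATH, since that is most of what activation does anyway; a minimal sketch, assuming the venv was created at /venv as above:
ENV PATH="/venv/bin:$PATH"
After this line, python and pip in later RUN steps and in the running container resolve to /venv/bin/python and /venv/bin/pip.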
Also, you'd better try the ready-made Python images for Docker, e.g. the light Alpine-based ones, instead of extending the basic Ubuntu image.
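For example (python:3.9-alpine is just one of the available tags, and requests is only an example package):
FROM python:3.9-alpine
# python and pip are preinstalled and on PATH in the official images
RUN pip install requests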
Excuse my terrible Russian-English; I hope you'll understand my answer :)