I have a script which uses the shebang `#!/usr/bin/env python`. It works great on machines where Python 3 is the only version available, but on machines which have both Python 2 and Python 3, it runs the script with Python 2.
If I modify the shebang to be `#!/usr/bin/env python3`, it works on machines with both Python 2 and Python 3, but on machines which have only Python 3 it fails with a “No such file or directory” error.
One solution is to create an alias: `alias python=python3`. Are there other solutions to make the same shebang work uniformly on every machine?
Unfortunately, there is no universal way of doing this that would work across any and all unknown-in-advance Linux hosts; you are largely left at the mercy of distro maintainers and local host configuration.
An `alias` won't help, because the interpreter specified after `#!` is handled by the kernel, and the `/usr/bin/env` it will exec in this case knows nothing about your shell's aliases.
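To illustrate the point (assuming a hypothetical `myscript.py` with the `#!/usr/bin/env python` shebang), the alias only affects commands typed into that interactive shell:

```sh
alias python=python3     # only visible to this interactive shell
python --version         # the alias applies here and reports Python 3.x
./myscript.py            # the kernel runs /usr/bin/env python, which searches
                         # $PATH and still picks up the Python 2 binary
```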
When using `env`, you could make sure that the name following `env` is found first and means what you want it to mean, for example by doing:
```sh
mkdir /tmp/bin
ln -s /usr/bin/python /tmp/bin/python3     # expose the interpreter under the expected name
PATH="/tmp/bin:${PATH}" ./myscript.py      # env now finds /tmp/bin/python3 first
```
But none of this is really great, and ultimately not what you've asked for.
Your interpreter could also be a simple shell script that you ship with your Python code and that tries to figure out which interpreter to use (this is harder than it may sound, though: the kernel's interpreter-resolution code is quite simple, so you have to decide where to put such a wrapper and how to reference it so that the kernel finds and uses it). But whichever option you look at, none is really great, I am afraid.
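Purely as an illustration, a minimal sketch of such a wrapper (the file names and the candidate interpreter names are assumptions, not something prescribed above):

```sh
#!/bin/sh
# run-myscript: shipped next to myscript.py; picks the first interpreter
# name on this host that is actually Python 3, then hands control to it.
for candidate in python3 python; do
    if command -v "$candidate" >/dev/null 2>&1 &&
       "$candidate" -c 'import sys; sys.exit(0 if sys.version_info[0] == 3 else 1)'; then
        exec "$candidate" "$(dirname "$0")/myscript.py" "$@"
    fi
done
echo "error: no Python 3 interpreter found" >&2
exit 1
```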
There is PEP 394 for that, which suggested / expected on a U*X-like system:

- `python` for Python 2
- `python3` for Python 3

But it recognizes this has never been applied entirely consistently... and it is also not as useful in 2020:
> However, these recommendations implicitly assumed that Python 2 would always be available. As Python 2 is nearing its end of life in 2020 (PEP 373, PEP 404), distributions are making Python 2 optional or removing it entirely. This means either removing the python command or switching it to invoke Python 3. Some distributors also decided that their users were better served by ignoring the PEP's original recommendations, and provided system administrators with the freedom to configure their systems based on the needs of their particular environment.
TL;DR: unfortunately, there is no way that works universally and compensates for the decisions of various distro and even individual host maintainers. :(
I would most likely opt to stick with `#!/usr/bin/env python3` (which has so far been the recommended naming) and add a `README` that explains the prerequisites and how to set up the host, just to be sure.
For the sake of completeness I should add that the PEP does make a recommendation in this regard: set up and use a virtual environment, or use a (third-party) environment manager. However, the way I read the question ("a portable interpreter specification that makes no assumptions about, and poses no additional requirements on, the target host configuration"), this would not fit the bill and would not be a substantial improvement over saying: make sure you have a `python3` executable in the search path, and create a symlink if not.
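If you do put that in a `README`, the check you ask users to perform could look something like this (the `~/.local/bin` location is only an assumption; any directory early in `PATH` works):

```sh
# only needed on hosts where Python 3 exists but is not reachable as "python3"
if ! command -v python3 >/dev/null 2>&1; then
    mkdir -p ~/.local/bin
    ln -s "$(command -v python)" ~/.local/bin/python3
    export PATH="$HOME/.local/bin:$PATH"
fi
```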
`#!/usr/bin/env python3` is the most correct general solution. On systems where Python 3 is installed, this should work regardless of whether Python 2 is installed or not.
That said, there’s still no guarantee that the right version of Python 3 is installed, and hard-coding, say, `python3.7` isn’t viable since most users won’t have several versions installed.
Luckily there’s a better solution: if you distribute your application as a Python package with Setuptools and specify `entry_points` for your binary/binaries, the installation process will pick the correct interpreter and hard-code its path into your application scripts. This will work regardless of environment, as long as the installer (e.g. `pip`) can find Python at all.
As a very simple example, say your project example
has the following folder structure:
example
├── example
│ ╰── __init__.py
├── pyproject.toml
╰── setup.cfg
And let’s assume your `__init__.py` has a function `main` that we want to use as the entry point for an executable script.
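For concreteness, a minimal sketch of what that file could contain (the greeting is a pure placeholder; the file is written here with a shell heredoc to stay in step with the other command examples):

```sh
cat > example/__init__.py <<'EOF'
def main():
    print("Hello from example!")
EOF
```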
Then `pyproject.toml` would look like this:¹

```toml
[build-system]
requires = ["setuptools", "wheel"]
```
And `setup.cfg` looks as follows:¹ ²

```ini
[metadata]
name = example
version = 0.0.1

[options]
packages = find:

[options.entry_points]
console_scripts =
    example-bin = example:main
```
Now you can build your project using e.g. `pip3 wheel path-to-project`, which will generate the Wheel installation bundle. When you install that (either after uploading it to PyPI, or locally via `pip3 install wheel-filename.whl`), pip3 will install the binary `example-bin`.
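Put together, the build-and-install steps might look like this (the exact wheel filename is an assumption; it depends on the name and version declared in `setup.cfg`):

```sh
# run from the directory containing pyproject.toml and setup.cfg
pip3 wheel . --wheel-dir dist/                      # builds the wheel bundle
pip3 install dist/example-0.0.1-py3-none-any.whl    # installs example-bin onto PATH
example-bin                                         # runs example:main
```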
`example-bin` will launch your package’s entry point function, and will work regardless of what the Python 3 binary is called on your system, because `pip3 install …` creates this file with an absolute path in the shebang line. For example, on my system the first line of that file is

```
#!/usr/local/opt/python/bin/python3.7
```
¹ Rather than using `pyproject.toml` and `setup.cfg`, the same works with the legacy `setup.py` file, but using the above is simpler.
² Note that, at the time of writing, there’s a typo in the Setuptools quickstart: instead of `[entry_point]`, it should read `[entry_points]`.