I am experimenting with putting my Python code into the standard directory structure used for deployment with setup.py and maybe PyPI. For a Python library called mylib, it would look something like this:
mylibsrc/
README.rst
setup.py
bin/
some_script.py
mylib/
__init__.py
foo.py
There's often also a test/ subdirectory, but I haven't tried writing unit tests yet. The recommendation to have scripts in a bin/ subdirectory can be found in the official Python packaging documentation.
Of course, the scripts start with code that looks like this:
#!/usr/bin/env python
from mylib.foo import something
something("bar")
This works well when it eventually comes to deploying the script (e.g. to devpi) and then installing it with pip. But if I run the script directly from the source directory, as I would while developing new changes to the library/script, I get this error:
ImportError: No module named 'mylib'
This is true even if the current working directory is the root mylibsrc/ and I ran the script by typing ./bin/some_script.py. This is because Python starts searching for packages in the directory of the script being run (i.e. in bin/), not the current working directory.
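To see what's going on, a small throwaway script (hypothetical, not part of the package) can print the first search-path entry next to the working directory:

```python
#!/usr/bin/env python
"""Show where Python will look for imports when a script is run."""
import os
import sys

# sys.path[0] is set from the location of the script being run, not from
# the current working directory -- which is why `import mylib` fails when
# this file lives in bin/ while mylib/ sits in the project root.
print("sys.path[0]:", sys.path[0])
print("cwd:        ", os.getcwd())
```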
What is a good, permanent way to make it easy to run scripts while developing packages?
Here is a relevant other question (especially comments to the first answer).
The solutions for this that I've found so far fall into a few categories, but none of them are ideal:

- Add mylibsrc to my PYTHONPATH environment variable. This seems to be the most official (Pythonic?) solution, but means that every time I check out a project I have to remember to manually change my environment before I can run any code in it.
- Add . to the start of my PYTHONPATH environment variable. As I understand it this could have some security problems. This would actually be my favoured trick if I was the only person to use my code, but I'm not, and I don't want to ask others to do this.
- For files in the test/ directory, I've seen recommendations that they all (indirectly) include a line of code sys.path.insert(0, os.path.abspath('..')) (e.g. in structuring your project). Yuck! This seems like a bearable hack for files that are only for testing, but not those that will be installed with the package.
- Run scripts with Python's -m switch. When a module is run with -m, the search path starts in the working directory instead of the bin/ directory. See my answer below for more details.
- Install the package while it's still being developed, with the setup.py develop command in logc's answer or pip install -e in mine. They avoid having to re-"install" for every little edit, but you still need to create a setup.py for packages you never intend to fully install, and it doesn't work very well with PyCharm (which has a menu entry to run the develop command but no easy way to run the scripts that it copies to the virtual environment).
- Move the scripts to the project's root directory (i.e. into mylibsrc/ instead of mylibsrc/bin/).
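The sys.path.insert hack mentioned above looks roughly like this in practice (a sketch only; the test filename is hypothetical, and it assumes the project root is the parent of the directory containing the file):

```python
# test/test_foo.py -- hypothetical test file using the path-insertion hack.
import os
import sys

# Prepend the project root (the parent of this test/ directory) to the
# module search path so imports resolve from the source tree.
project_root = os.path.abspath(
    os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir)
)
sys.path.insert(0, project_root)

# From here on, `from mylib.foo import something` would work without
# installing the package.
```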
Since I posted this question, I've learnt that you can run a module as if it were a script using Python's -m
command-line switch (which I had thought only applied to packages).
So I think the best solution is this:

- Instead of writing scripts under bin/, write runnable modules inside the package. Give each one a main() function and the usual if __name__ == "__main__": main() guard, as you would in a script.
- Run them with python -m pkg_name.module_name.

PyCharm doesn't support running modules in this way (see this request). However, you can just run modules (and also scripts in bin) like normal because PyCharm automatically adds the project root to the PYTHONPATH, so import statements resolve without any further effort. There are a few gotchas for this though:

- In the module you run, you can use from pkg_name.other_module import fn but not from .other_module import fn. Relative imports are usually poor style anyway, but they're useful for unit tests.
- If the module you run is also imported elsewhere, it ends up being imported twice (once as pkg_name.module_name and once as __main__). But you shouldn't have circular dependencies anyway.

Bonus command line fun:

- If you still want to put your scripts in bin/, you can call them with python -m bin.scriptname (but in Python 2 you'll need to put an __init__.py in the bin directory).
- You can even run the whole package, if it has a __main__.py, like this: python -m pkg_name
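As a minimal sketch of this pattern (file and function names are illustrative, not from the original project), a runnable module inside the package might look like:

```python
# mylib/runner.py -- a hypothetical runnable module.
# Invoke it from the project root with:  python -m mylib.runner

def main():
    # A real module would use an absolute import here, e.g.
    # from mylib.foo import something
    message = "running mylib.runner"
    print(message)
    return message

if __name__ == "__main__":
    main()
```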
There is an alternative for the command line, which is not as simple, but still worth knowing about: pip's editable-install mode.

- Install your package in development mode with pip install -e . (run from the project root, where setup.py lives).
- Uninstall it with pip uninstall pkg_name.

This is similar to setup.py's develop command, but uninstallation seems to work better.

The simplest way is to use setuptools in your setup.py script, and use the entry_points keyword; see the documentation of Automatic Script Creation.
In more detail: you create a setup.py that looks like this
from setuptools import setup

setup(
    # other arguments here...
    entry_points={
        'console_scripts': [
            'foo = my_package.some_module:main_func',
            'bar = other_module:some_func',
        ],
        'gui_scripts': [
            'baz = my_package_gui:start_func',
        ]
    }
)
then create other Python packages and modules underneath the directory where this setup.py exists, e.g. following the above example:
.
├── my_package
│ ├── __init__.py
│ └── some_module.py
├── my_package_gui
│ └── __init__.py
├── other_module.py
└── setup.py
and then run
$ python setup.py install
or
$ python setup.py develop
Either way, new Python scripts (executable scripts without the .py suffix) are created for you that point to the entry points you have described in setup.py. Usually, they are at the Python interpreter's notion of "directory where executable binaries should be", which is usually on your PATH already. If you are using a virtual env, then virtualenv tricks the Python interpreter into thinking this directory is bin/ under wherever you have defined that the virtualenv should be. Following the example above, in a virtualenv, running the previous commands should result in:
bin
├── bar
├── baz
└── foo
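For completeness, the main_func that the foo entry point refers to is just an ordinary function taking no arguments; a minimal sketch of my_package/some_module.py (the body is illustrative) could be:

```python
# my_package/some_module.py -- sketch of the target of the 'foo'
# console_scripts entry point declared in setup.py.

def main_func():
    # The generated 'foo' wrapper script imports this module and calls
    # main_func(); its return value is passed to sys.exit() as the
    # process exit status.
    print("foo console script invoked")
    return 0

if __name__ == "__main__":
    main_func()
```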