Example directory structure of current project:
myproject/
|
+-- mylibrary/
| |
| +-- __init__.py
| |
| +-- (code files)
|
+-- webapi/
| |
| +-- __init__.py <-- contains Flask API code and blueprints, using the mylibrary code
| |
| +-- object/
| |
| +-- __init__.py <-- Flask API code for "object"s, imported via blueprints
|
+-- cli/
| |
| +-- __init__.py <-- argparse, code to use the mylibrary code from the CLI, etc.
|
+-- gui_app/
|
+-- __init__.py <-- the start of a GUI application, using the mylibrary code
I am the sole developer, and am developing all of the above pieces of the application in parallel.
My question: how should I write the import statements in the three applications (webapi, cli and gui_app) to import code from the mylibrary module? I see four options:

1. Modify sys.path to add .. to the path of each app using the library. This works for the __init__.py files, but I'm not sure how to do it from, say, webapi/object/__init__.py without basically adding ../.. to sys.path. Additionally, this makes the application much harder to later distribute.

2. Use relative imports: from .. import mylibrary. This is a bad idea because it basically makes the entire mylibrary module "monolithic".

3. Package mylibrary and install it into my site-packages, then develop the other three applications using import mylibrary. This is quite a pain, however, since I'm developing all four pieces of the application in parallel.

4. Package mylibrary on its own, so it can be installed by e.g. pip. Then, separately, one can install webapi or gui_app. (Perhaps cli could be merged into mylibrary so that running it as a module presents the CLI, but that still doesn't solve the problem for the other two apps.)

This is my first experience writing a code library at the same time as more than one application that will make use of that library. What is the most "Pythonic", or more accurately, safest and least error-prone way to implement this?
I think you’re overcomplicating things.
Your end goal is that the library and application are both distributed separately, to be installed by pip. To make that work, they just need to be separate packages that each have their own setup.py, and the application has the library in its requirements.txt.
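A minimal setup.py for the library could look like the following (a sketch only; the package name comes from the question, but the version, the use of find_packages, and all other metadata are placeholders):

```python
# mylibrary's setup.py -- minimal packaging config (sketch; the
# version and any metadata here are placeholders, not from the
# original project).
from setuptools import setup, find_packages

setup(
    name="mylibrary",
    version="0.1.0",
    packages=find_packages(),
)
```

Each application then gets its own setup.py of the same shape, plus a requirements.txt that simply lists mylibrary among its dependencies.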
And, once you do that, it solves your development issue as well. Create a virtual environment, and just pip install the library into that env while working on the app.
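Concretely, the dev loop could look like this (all paths here are hypothetical; the -e flag makes the install editable, so edits to the library source are picked up without reinstalling):

```shell
# Create and activate a virtual environment for development.
python3 -m venv ~/venvs/myproject
. ~/venvs/myproject/bin/activate

# Install the library from its working tree in editable mode,
# then install the app's other dependencies as usual.
pip install -e ~/code/mylibrary
pip install -r ~/code/webapi/requirements.txt
```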
There’s no need to manually symlink or copy anything anywhere. Or to write your tests in such a way that they can work with both an installed library and a relative-pathed library. Or anything else. And no messing with sys.path; the library is in the venv’s site-packages, which is already in sys.path. Making all of this just work is the whole point of virtual environments.
You don’t even need anything complicated in the readme. People just installing the app with pip (whether systemwide or in a venv) will get the library automatically. People working on the app can treat the library the same way they treat any other dependency. (They should already know how to create a venv and install requirements.txt into it.) The only real issue would be people working on the library who aren’t interested in the app but are using it as test code for the library—but you can just make that unnecessary by including sufficient test code with the library itself (which you probably want to do anyway).
If there’s a reason this won’t work for you, you can look at building a versioned subpackage out of the independent library package, the way requests does with urllib3 or bs4 does with UnicodeDammit. But from your description, I don’t see any indication that you need that.
I think your problem may be that you think pip can only install distributed packages off PyPI or another pip repo. In fact, it's a lot more flexible than that. As the User Guide says:

    pip supports installing from PyPI, version control, local projects, and directly from distribution files.
If you look at the reference for pip install, it can take any of these forms:

    pip install [options] <requirement specifier> [package-index-options] ...
    pip install [options] -r <requirements file> [package-index-options] ...
    pip install [options] [-e] <vcs project url> ...
    pip install [options] [-e] <local project path> ...
    pip install [options] <archive url/path> ...
If you're wondering how this works: oversimplifying a bit, all of the variations (unless you're dealing with prebuilt wheels, which aren't relevant here) come down to getting the source downloaded/checked out/unpacked somewhere, cd'ing into that somewhere, and doing a pip install . there.
So, if you want to install your library's current working tree, you can just do this:
pip install /path/to/lib
Or, more often, you're going to already be in /path/to/lib, so:
pip install .
You may want to throw in the -e flag for dev-mode installs, or override the version checking (or just --force-reinstall), or specify a fake (monotonically increasing) version number with #egg, or ignore the requirements or constraints, or whatever, but all of those options work just as well for installing from a local path (or git repo or branch or changelist) as they do for installing from PyPI.
Even for really complicated stuff, pip can handle it fine. Want to switch back to what's on master to compare how your current changes affect things? pip install git+file:/path/to/lib@master. Want to give a custom branch to some guy so he can test against it? Push the branch and he can do pip install git+https://github.com/fdmillion/liblibrary@someguytest. You'll probably never need any of this stuff, but almost anything you can imagine that you might need, pip already has it.