I am trying to understand the best practices with regard to Python's (v2.7) import mechanics. I have a project that has started to grow a bit, and let's say my code is organised as follows:
foo/
    __init__.py
    Foo.py
    module1.py
    module2.py
    module3.py
The package name is foo, and underneath it I have the module Foo.py, which contains the code for the class Foo. Hence I am using the same name for the package, module and class, which might not be very clever to start with.
__init__.py is empty, and class Foo needs to import module1, module2 and module3, hence part of my Foo.py file looks like:
# foo/Foo.py
import module1
import module2
import module3

class Foo(object):
    def __init__(self):
        ....
    ....

if __name__ == '__main__':
    foo_obj = Foo()
However, I later revisited this and thought it would be better to have all imports in the __init__.py file. Hence my __init__.py now looks like:
# foo/__init__.py
import Foo
import module1
import module2
import module3
....
....
and my Foo.py only needs to import foo:
# foo/Foo.py
import foo
While this looks convenient since it is a one-liner, I am a bit worried that it might be creating circular imports. What I mean is: when the script Foo.py is run, it will import everything it can, and then __init__.py will be called, which will import Foo.py again (is that correct?). Additionally, using the same name for the package, module and class makes things more confusing.
Does it make sense the way I have done it? Or am I asking for trouble?
The __init__.py file makes Python treat the directory containing it as a package. Furthermore, it is the first file loaded when the package is imported, so you can use it to execute code that should run when the package is loaded, or to specify the submodules to be exported.
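As a minimal sketch (the print statement and the exported names are purely illustrative, not from the original post), such an __init__.py might look like:

# foo/__init__.py
print('initialising package foo')    # runs the first time the package is imported

__all__ = ['module1', 'module2']      # submodules pulled in by "from foo import *"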
The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later on the module search path.
__import__() parameters:
- name: the name of the module you want to import.
- globals and locals: determine how to interpret name.
- fromlist: objects or submodules that should be imported by name.
- level: specifies whether to use absolute or relative imports.
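For example, a rough equivalent of "from foo import module1" written with __import__() (purely illustrative) would be:

# Roughly equivalent to: from foo import module1
# A non-empty fromlist tells __import__ to import the listed submodules as well;
# level=0 forces an absolute import.
pkg = __import__('foo', globals(), locals(), ['module1'], 0)
module1 = pkg.module1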
A couple of things you could do to improve your organization, if only to adhere to some popular Python conventions and standards.

If you search this topic, you will inevitably run across people recommending the PEP 8 guidelines. These are the de facto canonical standards for organizing Python code.
Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.
Based on these guidelines, your project modules should be named like this:
foo/
    __init__.py
    foo.py
    module1.py
    module2.py
    module3.py
I find it's generally best to avoid importing modules unnecessarily in __init__.py unless you're doing it for namespace reasons. For example, if you want the namespace for your package to look like this:

from foo import Foo

instead of

from foo.foo import Foo

then it makes sense to put

from .foo import Foo

in your __init__.py. As your package gets larger, some users may not want to use all of the sub-packages and modules, so it doesn't make sense to force the user to wait for all those modules to load by implicitly importing them in your __init__.py.
Also, you have to consider whether you even want module1, module2, and module3 as part of your external API. Are they only used by Foo and not intended for end users? If they're only used internally, then don't include them in the __init__.py.
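In that case, a minimal sketch of the __init__.py might be just the following (the __all__ line is an optional assumption on my part):

# foo/__init__.py
from .foo import Foo    # expose only the public class at package level

__all__ = ['Foo']        # module1/module2/module3 stay internal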
I'd also recommend using absolute or explicit relative imports for importing sub-modules. For example, in foo.py, use absolute imports:

from foo import module1
from foo import module2
from foo import module3

or explicit relative imports:

from . import module1
from . import module2
from . import module3
This will prevent any possible naming issues with other packages and modules. It will also make it easier if you decide to support Python 3, since the implicit relative import syntax you're currently using is not supported in Python 3.
Also, files inside your package generally shouldn't contain an if __name__ == '__main__' block. This is because running a file as a script means it won't be considered part of the package that it belongs to, so it won't be able to make relative imports.
The best way to provide executable scripts to users is by using the scripts or console_scripts feature of setuptools. The way you organize your scripts can be different depending on which method you use, but I generally organize mine like this:
foo/
    __init__.py
    foo.py
    ...
scripts/
    foo_script.py
setup.py
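A minimal sketch of how the setup.py might wire that up (the package metadata and the entry-point names are my assumptions, not from the original post):

# setup.py
from setuptools import setup, find_packages

setup(
    name='foo',
    version='0.1',
    packages=find_packages(),
    # Option 1: ship scripts/foo_script.py as an executable script.
    scripts=['scripts/foo_script.py'],
    # Option 2 (alternative): generate a console script that calls a function
    # inside the package, e.g. a hypothetical main() in foo/cli.py:
    # entry_points={'console_scripts': ['foo-script = foo.cli:main']},
)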
According to PEP 8, "Public and internal interfaces":

Imported names should always be considered an implementation detail. Other modules must not rely on indirect access to such imported names unless they are an explicitly documented part of the containing module's API, such as os.path or a package's __init__ module that exposes functionality from submodules.
So this would suggest that it is OK to put imports in the __init__ module if __init__ is being used to expose functionality from submodules. Here is a short blog post I found with a couple of examples of Pythonic uses of __init__, using imports to make subpackages available at package level.
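For illustration, such a package-level __init__.py might contain nothing more than a couple of explicit relative imports (the package and subpackage names here are made up):

# sound/__init__.py  (hypothetical package, illustrative names only)
from . import effects   # makes sound.effects available as soon as sound is imported
from . import formats   # likewise for sound.formats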
Your example of moving the import statements to __init__ in order to have only one import in Foo does not seem to follow this rule. My interpretation is that the imports in your __init__ should be used for external interfaces; otherwise, just put your import statements in the file that needs them. This saves you trouble when submodule names change, and keeps you from unnecessary or difficult-to-find imports when you add more files that use a different subset of submodules.
As for circular references, they are definitely possible in Python (for example). I wrote about that before I actually tried your toy example, but to make the example work I had to move Foo.py up a level, like so:

Foo.py
foo/
    __init__.py
    module1.py
    module2.py
    module3.py
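Concretely, the instrumented toy files might look something like this (a sketch only; the print statements are assumptions chosen to match the output below, and foo/__init__.py keeps the import Foo line from the question):

# Foo.py  (now one level above the foo/ package)
import foo

class Foo(object):
    def __init__(self):
        print('Foo constructor')

if __name__ == '__main__':
    foo_obj = Foo()

# foo/__init__.py
import Foo       # loads Foo.py a second time, under the module name "Foo"
import module1   # each moduleN.py just does: print('module N')
import module2
import module3
print('hello')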
With that setup and some print statements, running python Foo.py gives the output:

module 1
module 2
module 3
hello
Foo constructor
and exits normally. Note that this is thanks to the if __name__ == '__main__' guard - if you add a print statement outside of that guard, you can see that Python still loads the module twice. A better solution would be to remove the import from your __init__.py. As I said earlier, that may or may not make sense, depending on what those submodules are.