I've discovered a new pattern. Is this pattern well known, and what's the general opinion about it?
Basically, I have a hard time scrolling up and down source files to figure out which module imports are available and so forth, so now, instead of

    import foo
    from bar.baz import quux

    def myFunction():
        foo.this.that(quux)

I move all my imports into the function where they're actually used, like this:

    def myFunction():
        import foo
        from bar.baz import quux
        foo.this.that(quux)
This does a few things. First, I rarely accidentally pollute my modules with the contents of other modules. I could set the __all__ variable for the module (see the sketch below), but then I'd have to update it as the module evolves, and it doesn't help with namespace pollution for code that actually lives in the module.
Second, I rarely end up with a litany of imports at the top of my modules, half or more of which I no longer need because I've refactored the code. Finally, I find this pattern MUCH easier to read, since every referenced name is right there in the function body.
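For reference, __all__ only controls what from module import * exposes; here's a minimal sketch of the trade-off described above (mymodule and the names in it are hypothetical):

    # mymodule.py
    import os                        # becomes an attribute of mymodule

    __all__ = ["useful_function"]    # the only name exported by import *

    def useful_function():
        return os.getcwd()

    def _internal_helper():          # leading underscore: private by convention
        pass

With this in place, from mymodule import * brings in useful_function but not os. Still, mymodule.os remains reachable, and every function inside mymodule sees os whether it uses it or not, which is the pollution being described.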
The (previously) top-voted answer to this question is nicely formatted but absolutely wrong about performance. Let me demonstrate:

    # import.py (Python 2, hence xrange)
    import random

    def f():
        L = []
        for i in xrange(1000):
            L.append(random.random())

    for i in xrange(1000):
        f()

    $ time python import.py
    real    0m0.721s
    user    0m0.412s
    sys     0m0.020s

    # import2.py
    def f():
        import random
        L = []
        for i in xrange(1000):
            L.append(random.random())

    for i in xrange(1000):
        f()

    $ time python import2.py
    real    0m0.661s
    user    0m0.404s
    sys     0m0.008s
As you can see, it can be more efficient to import the module in the function. The reason for this is simple: it moves the reference from a global reference to a local reference. This means that, for CPython at least, the compiler will emit LOAD_FAST instructions instead of LOAD_GLOBAL instructions, and these are, as the name implies, faster. The other answerer artificially inflated the performance hit of looking in sys.modules by importing on every single iteration of the loop.
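You can verify the bytecode claim yourself with the dis module; a minimal sketch (the function names are mine, and the exact opcode listing varies across CPython versions):

    import dis
    import random

    def global_ref():
        return random.random()    # 'random' resolved via LOAD_GLOBAL

    def local_ref():
        import random             # binds the module to a local variable
        return random.random()    # 'random' resolved via LOAD_FAST

    dis.dis(global_ref)   # listing includes LOAD_GLOBAL (random)
    dis.dis(local_ref)    # listing includes IMPORT_NAME, then LOAD_FAST (random)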
As a rule, it's best to import at the top, but performance is not the reason if you are accessing the module a lot of times. The real reasons are that it's easier to keep track of what a module depends on, and that doing so is consistent with most of the rest of the Python universe.
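To put rough numbers on the sys.modules lookup mentioned above, here's a minimal timeit sketch (my harness, not the answerer's; the absolute numbers are machine-dependent):

    import timeit

    # Re-executing the import on every pass: each pass does a sys.modules
    # lookup plus a local rebind -- cheap, but not free.
    repeated = timeit.timeit("import random; random.random()", number=100000)

    # Import once in setup, then call through the existing binding.
    once = timeit.timeit("random.random()", setup="import random", number=100000)

    print(repeated, once)   # expect 'repeated' to be modestly slower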