I'm running a website using Django, and I import ipdb at the beginning of almost all of my scripts to make debugging easier. However, most of the time I never use the functions from the module (only when I'm debugging).
Just wondering, will this decrease my performance? It's just that when I want to create a breakpoint I prefer to write:
ipdb.set_trace()
as opposed to:
import ipdb; ipdb.set_trace()
But I've seen the second example done in several places, which makes me wonder if it's more efficient...
I just don't know how importing Python modules relates to efficiency (assuming you're not using the module's functions within your script).
Startup and module-import overhead: starting a Python interpreter and importing Python modules is relatively slow if you care about milliseconds. If you need to start hundreds or thousands of Python processes as part of a workload, that overhead adds up to several seconds.
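If you want to put a rough number on that, here is a minimal sketch that times the first import of a module (json is just a stand-in for whichever module you care about):
import importlib
import time

def timed_import(name):
    # Import a module by name and report how long it took.
    start = time.perf_counter()
    module = importlib.import_module(name)
    elapsed = time.perf_counter() - start
    print(f"import {name}: {elapsed * 1000:.2f} ms")
    return module

# The first call pays the full cost; the second is nearly free because
# the module is already cached in sys.modules.
timed_import("json")
timed_import("json")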
When a module is first imported, Python searches for it and, if found, creates a module object and initializes it. If the named module cannot be found, a ModuleNotFoundError is raised. Python implements various strategies to search for the named module when the import machinery is invoked.
When you import a module in Python, all the code in it is run, and all the names defined in it are bound to that module object. This normally isn't a problem unless the module has side effects at import time.
In fact, Python only loads the module when it is first imported; all subsequent imports simply bind the name to the already loaded module. While it is possible to override this behavior so that each file gets its own copy of the module, it is generally not recommended.
“Lazy” means that the import of a module (execution of the module body and addition of the module object to sys.modules) should not occur until the module (or a name imported from it) is actually referenced during execution.
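One standard-library way to get that effect is the importlib.util.LazyLoader recipe; a minimal sketch (the lazy_import helper name is just illustrative):
import importlib.util
import sys

def lazy_import(name):
    # Return a module object whose body only runs on first attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

ipdb = lazy_import("ipdb")   # cheap: nothing has actually executed yet
# ipdb.set_trace()           # the real import happens here, on first use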
As @wRAR mentioned, loading a module may imply executing any amount of code, which can take any amount of time. On the other hand, the module will only be loaded once: any subsequent attempt to import it will find the module already present in sys.modules and simply reference that.
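A quick sketch of that caching behaviour (again using json as an arbitrary example module):
import sys

import json            # first import: the module body runs and the result is cached
import json as json2   # cache hit: nothing is re-executed

print("json" in sys.modules)        # True
print(json is json2)                # True: both names refer to the same module object
print(json is sys.modules["json"])  # True
# Forcing re-execution would require importlib.reload(json) or deleting the
# entry from sys.modules first, which is rarely a good idea.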
In a Django environment in debugging mode, modules are removed from Django's AppCache and actually re-imported only when they are changed, which you will probably not do with ipdb, so in your case it should not be an issue.
However, in cases where it would be an issue, there are some ways around it. Suppose you have a custom module that you load anyway; you can add a function to it that imports ipdb only when you require it:
# much-used module: mymodule
def set_trace():
    # ipdb is imported only on the first call; later calls hit the sys.modules cache
    import ipdb
    ipdb.set_trace()
Then, in the module where you want to use ipdb.set_trace:
import mymodule
mymodule.set_trace()
Or, at the top of your module, use the built-in __debug__ constant:
if __debug__:
    from ipdb import set_trace
else:
    def set_trace(): return
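Note that __debug__ is True by default and only becomes False when the interpreter is started with the -O option, so a process launched with python -O automatically gets the no-op set_trace branch.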
Short answer: Not usually
Long answer:
Importing the module takes time. This may be noticeable if you are loading Python off a network drive or another slow source, but if you are running directly off a local hard drive you will never notice it.
As @wRAR points out, importing a module can execute any amount of code, and you can have whatever code you want executed at module startup. However, most modules avoid doing unreasonable amounts of work during import, so that by itself probably isn't a huge cost.
However, importing very large modules, especially those that in turn pull in a large number of C extension modules, will take time.
So importing will take time, but only once per module imported. And if you import modules at the top of your files (as opposed to inside functions), it only adds to startup time anyway. Basically, you aren't going to get much optimisation mileage out of avoiding imports.
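If you do want to see where import time actually goes, Python 3.7+ can print a per-module import time breakdown to stderr, for example:
python -X importtime -c "import ipdb"
That output shows the cumulative and self time of every module pulled in, which makes it easy to spot the imports that actually dominate startup.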