In Python, if I have a child function within a parent function, is the child function "initialised" (created) every time the parent function is called? Is there any performance overhead associated with nesting a function within another?
Python has a relatively high function call overhead; it is the cost we pay for some of Python's most useful functionality.
Inner functions, also known as nested functions, are functions that you define inside other functions. In Python, a nested function has direct access to the variables and names defined in the enclosing function, which provides a mechanism for encapsulating functions, creating helper solutions, and implementing closures and decorators.
When coding, we want to hide immaterial detail. For example, we mark functions as private wherever we can (in Python there is no true private, so we use the leading-underscore convention, _like_this). That is a good reason to use nested functions: they help the reader understand that the logic of bar will not be used anywhere else.
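For instance, a minimal sketch (process and clean are hypothetical names used for illustration):

def process(records):
    # `clean` is an implementation detail of `process` alone; nesting it
    # signals that no other code uses it, without needing a module-level
    # _clean that the reader would have to check for other callers.
    def clean(record):
        return record.strip().lower()
    return [clean(r) for r in records]

print(process(["  Alice ", "BOB"]))  # ['alice', 'bob']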
In Python, it is possible to nest functions like this:

def do_something():
    def helper():
        ...
    ...

Unless Python handles this case more smartly, helper has to be created anew every time do_something is used.
These are the conditions you need to create a closure in Python:

1. There must be a nested function.
2. The inner function has to refer to a value defined in the enclosing scope.
3. The enclosing function has to return the nested function.
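Here is a minimal example that satisfies all three conditions (make_multiplier is a hypothetical name used for illustration):

def make_multiplier(factor):
    # condition 1: a nested function
    def multiply(x):
        # condition 2: refers to `factor` from the enclosing scope
        return x * factor
    # condition 3: the enclosing function returns the nested function
    return multiply

double = make_multiplier(2)
print(double(5))  # prints 10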
A nested function is simply a function within another function, and is sometimes called an "inner function". There are many reasons why you might want to use nested functions; we'll go over the most common below.
The variables accessed by a nested function exist until the nested function object is destroyed, and they differ between instances of the nested function, since each instance sees the variables from a different instance of the enclosing scope.
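A small sketch of that behaviour (make_counter is a hypothetical name): each call to the enclosing function creates a fresh scope, so each returned closure keeps its own state:

def make_counter():
    count = 0
    def increment():
        nonlocal count  # rebind the variable in the enclosing scope
        count += 1
        return count
    return increment

a = make_counter()
b = make_counter()
print(a(), a())  # 1 2
print(b())       # 1 -- b has its own `count`, independent of a's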
The code object is pre-compiled, so that part has no overhead. The function object gets built on every invocation: it binds the function name to the code object, records default argument values, and so on.
Executive summary: It's not free.
>>> from dis import dis
>>> def foo():
...     def bar():
...         pass
...     return bar
...
>>> dis(foo)
  2           0 LOAD_CONST               1 (<code object bar at 0x1017e2b30, file "<pyshell#5>", line 2>)
              3 MAKE_FUNCTION            0
              6 STORE_FAST               0 (bar)

  4           9 LOAD_FAST                0 (bar)
             12 RETURN_VALUE
Yes, a new object would be created each time. It's likely not an issue unless you have it in a tight loop. Profiling will tell you if it's a problem.
In [80]: def foo():
   ....:     def bar():
   ....:         pass
   ....:     return bar
   ....:
In [81]: id(foo())
Out[81]: 29654024
In [82]: id(foo())
Out[82]: 29651384
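Note that only the function object is rebuilt each time; in CPython, the pre-compiled code object is shared between those instances, which you can check like this:

>>> foo() is foo()                      # two distinct function objects
False
>>> foo().__code__ is foo().__code__    # sharing one pre-compiled code object
True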
There is an impact, but in most situations it is so small that you shouldn't worry about it - most non-trivial applications probably already have performance bottlenecks whose impacts are several orders of magnitude larger than this one. Worry instead about the readability and reusability of the code.
Here is some code that compares the performance of redefining a function on each pass through a loop with reusing a predefined function instead.
import gc
from datetime import datetime

class StopWatch:
    """Context manager that reports how long the enclosed block took."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        gc.collect()  # collect garbage up front so it doesn't skew the timing
        self.start = datetime.now()
        return self
    def __exit__(self, type, value, traceback):
        elapsed = datetime.now() - self.start
        print '** Test "%s" took %s **' % (self.name, elapsed)

def foo():
    def bar():      # redefined on every call to foo
        pass
    return bar

def bar2():
    pass

def foo2():
    return bar2     # reuses the module-level function

num_iterations = 1000000

with StopWatch('FunctionDefinedEachTime') as sw:
    result_foo = [foo() for i in range(num_iterations)]

with StopWatch('FunctionDefinedOnce') as sw:
    result_foo2 = [foo2() for i in range(num_iterations)]
When I run this in Python 2.7 on my MacBook Air running OS X Lion, I get:
** Test "FunctionDefinedEachTime" took 0:00:01.138531 **
** Test "FunctionDefinedOnce" took 0:00:00.270347 **
I was curious about this too, so I decided to figure out how much overhead it incurs. TL;DR: the answer is not much.
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from timeit import timeit
>>> def subfunc():
...     pass
...
>>> def no_inner():
...     return subfunc()
...
>>> def with_inner():
...     def s():
...         pass
...     return s()
...
>>> timeit('[no_inner() for _ in range(1000000)]', setup='from __main__ import no_inner', number=1)
0.22971350199986773
>>> timeit('[with_inner() for _ in range(1000000)]', setup='from __main__ import with_inner', number=1)
0.2847519510000893
My instinct was to look at percentages (with_inner is 24% slower), but that number is misleading in this case: a real outer function never consists solely of returning the value of an inner function, especially when both functions have empty bodies, so the definition overhead dominates the measurement.
After making that mistake, I decided to compare it to other common things, to see when this does and does not matter:
>>> def no_inner():
...     a = {}
...     return subfunc()
...
>>> timeit('[no_inner() for _ in range(1000000)]', setup='from __main__ import no_inner', number=1)
0.3099582109998664
Looking at this, we can see that defining the inner function costs less than creating an empty dict (even via the fast {} literal), so if your functions do anything non-trivial, this probably does not matter at all.