I had an idea to transform functions that are tagged with a decorator similar to the one below:
@transform_ast
def foo(x):
    return x
In transform_ast, I get the source, parse the AST, transform it, and then create a code object and function from it again. It looks something like this:
import ast
import inspect
import types

class Rewrite(ast.NodeTransformer):
    pass

def transform_ast(f):
    source = inspect.getsource(f)
    source = '\n'.join(source.splitlines()[1:])  # remove the decorator (first line)
    print(source)
    old_code_obj = f.__code__
    old_ast = ast.parse(source)
    new_ast = Rewrite().visit(old_ast)
    new_code_obj = compile(new_ast, old_code_obj.co_filename, 'exec')
    new_f = types.FunctionType(new_code_obj, {})
    return new_f

@transform_ast
def foo(x):
    return x
However, it doesn't seem to work properly when I subsequently call foo(x). For all practical purposes, we can assume my transform just rewrites return x to return x + 1. Ideally I would like everything to work as normal, including being able to step into the function with a debugger...
Calling foo(10) gives the following error:
TypeError: module() takes no arguments (1 given)
Is there anything I'm doing wrong?
The problem is in this line:
new_code_obj = compile(new_ast, old_code_obj.co_filename, 'exec')
Code compiled in 'exec' mode is always treated as module-level code (though, of course, it can contain function or class definitions, or any other valid Python).
To verify this, you can access the co_name attribute of the code object to get the name under which this code object was defined.
>>> new_code_obj.co_name
'<module>'
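You can reproduce this with a throwaway compile() call, independent of the decorator (an illustrative snippet; bar is just a made-up name):
>>> mod_code = compile("def bar(y):\n    return y\n", "<string>", "exec")
>>> mod_code.co_name
'<module>'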
That is, new_code_obj is a code object corresponding to a module. But where is the code object corresponding to the function foo, and how can we access it?
It can be accessed from the co_consts attribute of the code object, which is a tuple of the constants used in the bytecode:
>>> new_code_obj.co_consts
(<code object foo at 0x031C3DE0, file "c:/Users/test.py", line 1>, 'foo', None)
>>> new_code_obj.co_consts[0]
<code object foo at 0x031C3DE0, file "c:/Users/test.py", line 1>
To verify that this code object belongs to the function foo, you can check the co_name attribute again:
>>> new_code_obj.co_consts[0].co_name
'foo'
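As a side note, rather than hard-coding index 0, a slightly more defensive lookup (a sketch, not part of the original answer) can scan co_consts for the code object whose co_name matches the wrapped function's name:

import types

def find_function_code(module_code, name):
    # Pick the nested code object compiled for `name` out of the
    # module-level code object's constants.
    return next(
        const for const in module_code.co_consts
        if isinstance(const, types.CodeType) and const.co_name == name
    )

Inside transform_ast this would be called as find_function_code(new_code_obj, f.__name__).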
So when creating the new FunctionType, you should use the code object corresponding to the function foo instead of the module-level code object.
So changing
new_f = types.FunctionType(new_code_obj, {})
to
new_f = types.FunctionType(new_code_obj.co_consts[0], f.__globals__)
# Here `f` is the function object passed to `transform_ast`
will solve the problem. Passing f.__globals__ instead of {} also ensures that any global names used inside the transformed function still resolve against the original module's namespace.
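Putting it all together, here is a minimal end-to-end sketch of the corrected decorator (assuming Python 3.8+ for ast.Constant), with an illustrative visit_Return that performs the hypothetical return x → return x + 1 rewrite from the question:

import ast
import inspect
import types

class Rewrite(ast.NodeTransformer):
    def visit_Return(self, node):
        # Illustrative transform: turn `return <expr>` into `return <expr> + 1`
        # (assumes the return statement has a value).
        new_value = ast.BinOp(left=node.value, op=ast.Add(), right=ast.Constant(value=1))
        return ast.copy_location(ast.Return(value=new_value), node)

def transform_ast(f):
    source = inspect.getsource(f)
    source = '\n'.join(source.splitlines()[1:])  # drop the decorator line
    old_code_obj = f.__code__
    new_ast = ast.fix_missing_locations(Rewrite().visit(ast.parse(source)))
    # Reuse the original filename so tracebacks and debuggers find the right file.
    new_code_obj = compile(new_ast, old_code_obj.co_filename, 'exec')
    # Use the function's own code object, not the module-level one.
    return types.FunctionType(new_code_obj.co_consts[0], f.__globals__)

@transform_ast
def foo(x):
    return x

print(foo(10))  # prints 11

Note that because the stripped source is re-parsed starting at line 1, the recompiled line numbers will not match the original file; if exact debugger stepping matters, ast.increment_lineno can be used to shift them back to the function's original position.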
Additional Refs: Exploring Python Code Objects