Permanently caching results of Python class generation

I am doing dynamic class generation that could be statically determined at "compile" time. The simple case that I have right now looks more or less like this:

class Base(object):
    def __init__(self, *args, **kwargs):
        self.do_something()  # placeholder for the real base initialisation

def ClassFactory(*args):
    some_pre_processing()  # placeholder, runs once per generated class
    class GenericChild(Base):
        def __init__(self, **kwargs):
            self.some_processing()  # placeholder per-instance work
            super(GenericChild, self).__init__(*args, **kwargs)
    return GenericChild

Child1 = ClassFactory(1, 'Child_setting_value1')
Child2 = ClassFactory(2, 'Child_setting_value2')
Child3 = ClassFactory(3, 'Child_setting_value3')

On import, the Python interpreter compiles the file to bytecode and then executes it (thus generating Child1, Child2, and Child3) once per interpreter process.

Is there a way to tell Python to compile the file, execute it once to unpack the Child classes, and then store that result in the .pyc file, so that the unpacking happens only once, even across successive executions of the script?
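For example, trimming the module down and adding a print to the factory makes the behaviour easy to observe (the file name children.py is just for illustration): the message is printed on every fresh interpreter, even once a cached .pyc exists, because the .pyc caches only the bytecode, not the objects the bytecode creates.

# children.py -- a trimmed-down version of the module above
class Base(object):
    pass

def ClassFactory(*args):
    # Runs every time the module body executes, i.e. once per interpreter
    # process, even when a cached .pyc for this module already exists.
    print('generating class for %r' % (args,))
    class GenericChild(Base):
        pass
    return GenericChild

Child1 = ClassFactory(1, 'Child_setting_value1')

# $ python -c "import children"   # prints, and the .pyc cache is written
# $ python -c "import children"   # prints again; only recompilation is skipped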

I have other use cases that are more complicated and expansive, so simply getting rid of the factory by hand-writing the Child classes is not really an option. I would also like to avoid an extra preprocessing step if possible (like using C-style macros with the C preprocessor).

asked Nov 13 '22 by CSTobey


1 Answer

No; you'd have to generate Python source code in which those classes are 'baked' out explicitly.

Use some form of string templating to generate the Python source code, save it to .py files, then byte-compile those.
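A rough sketch of that approach, assuming the factory arguments are known ahead of time (the names CHILD_TEMPLATE, bake_children, base_module, and generated_children.py are all illustrative, not from the original code):

import py_compile

CHILD_TEMPLATE = '''\
class Child{num}(Base):
    def __init__(self, **kwargs):
        self.some_processing()
        super(Child{num}, self).__init__({num}, {setting!r}, **kwargs)
'''

def bake_children(settings, path='generated_children.py'):
    # Render one concrete class per (number, setting) pair; whatever
    # some_pre_processing() did in the factory would run here, at bake time.
    source = 'from base_module import Base\n\n'
    for num, setting in settings:
        source += CHILD_TEMPLATE.format(num=num, setting=setting) + '\n'
    with open(path, 'w') as f:
        f.write(source)
    # Byte-compile the generated module up front; importing it later just
    # defines plain classes, with no factory involved.
    py_compile.compile(path)

bake_children([(1, 'Child_setting_value1'),
               (2, 'Child_setting_value2'),
               (3, 'Child_setting_value3')])

The generated module then replaces the Child1 = ClassFactory(...) lines with ordinary, statically defined classes.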

However, the class generation happens only once, at startup. Is it really that great a cost to generate these classes?
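If in doubt, measuring is cheap; a quick, hypothetical check with timeit (my_module here stands in for wherever ClassFactory is actually defined) could look like:

import timeit

# Measure the cost of generating one child class via the factory.
cost = timeit.timeit(
    "ClassFactory(1, 'Child_setting_value1')",
    setup='from my_module import ClassFactory',
    number=10000,
)
print('average per generated class: %.6f seconds' % (cost / 10000))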

answered Nov 15 '22 by Martijn Pieters