I just noticed that you can do this in Python:
def f(self):
    print(self.name)

class A:
    z = f
    name = "A"

class B:
    z = f
    name = "B"

…
a.z()    # prints: A
In other words, f() behaves like a method that is not defined on any class but can be attached to one. And of course it will raise a runtime error at call time if it expects methods or fields that don't exist on the object it is attached to.
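To make that concrete, here is a minimal sketch (the class and attribute names are invented for illustration): a plain function stored on a class, even after the class is defined, is bound to whatever instance it is looked up on, just like a def written in the class body.

def f(self):
    print(self.name)

class C:
    name = "C"

C.z = f          # attach the plain function after the class is defined
C().z()          # prints: C -- f receives the instance as self
print(C.z is f)  # True: the class simply stores the ordinary function object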
My question: is this useful? Does it serve a purpose? Are there situations where it solves a problem? Maybe it's a way of defining interfaces?
Classes are great if you need to keep state, because they containerize data (variables) and behavior (methods) that act on that data and should logically be grouped together. This leads to code that is better organized (cleaner) and easier to reuse.
Class methods are typically useful when we need access to the class itself, for example when we want to create a factory method, that is, a method that creates instances of the class. In other words, classmethods can serve as alternative constructors.
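For example, here is a hedged sketch of a classmethod acting as an alternative constructor (the Point class and the from_tuple name are invented for illustration):

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def from_tuple(cls, pair):
        # cls is the class itself, so subclasses get instances of the right type
        return cls(pair[0], pair[1])

p = Point.from_tuple((3, 4))   # alternative constructor
print(p.x, p.y)                # 3 4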
Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name.
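A small sketch of those features, with invented names: multiple base classes, overriding a base-class method, and calling back into the base class by the same name:

class Loggable:
    def describe(self):
        return "loggable"

class Named:
    def __init__(self, name):
        self.name = name

class Widget(Named, Loggable):      # multiple base classes
    def describe(self):             # overrides Loggable.describe
        # call the base-class method of the same name
        return "widget %s (%s)" % (self.name, super().describe())

print(Widget("spam").describe())    # widget spam (loggable)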
As a rule of thumb, when you have a set of data with a specific structure and you want to perform specific operations on it, use a class. That is only valid, however, if you use multiple such data structures in your code; if your whole program will never deal with more than one structure, a class may be more machinery than you need.
Yes, it is useful and it does serve a purpose, but it is also quite rare to have to do this. If you think you need to patch classes after they've been defined you should always stop and consider whether it really is the best way.
One situation is monkey-patching. I've done this in a large Plone system where some methods needed minor tweaks but there just wasn't any easy way to override the behaviour normally. In that situation where you have a complex library it provides an easy way to inject new or modified behaviour without having to change the original library.
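As a hedged illustration of that kind of monkey-patching (the Exporter class here is made up, not the actual Plone code), the tweak wraps the original method and reinstalls it on the class:

class Exporter:                      # pretend this lives in a library we can't edit
    def header(self):
        return "id,name"

_original_header = Exporter.header   # keep a reference to the original function

def patched_header(self):
    # minor tweak layered on top of the original behaviour
    return _original_header(self) + ",created_at"

Exporter.header = patched_header     # inject the modified behaviour

print(Exporter().header())           # id,name,created_at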
The other situation that springs to mind is when you want a lot of methods that can be generated automatically, e.g. data-driven tests.
def createTest(testcase, somedata, index):
    def test(self):
        "Do something with somedata and assert a result"
    test_name = "test_%d" % index
    setattr(testcase, test_name, test)

for index, somedata in enumerate(somebigtable):
    createTest(MyTestCase, somedata, index)
When MyTestCase is a unittest.TestCase, you could instead have one test that loops through all of the data, but it would stop at the first failure and you would then have to figure out which line of data failed. By dynamically creating the methods, all the tests run separately and the test name tells you which one failed (the original of the code above actually built a more meaningful name involving some of the data as well as the index).
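To make the pattern concrete, here is a runnable sketch under assumed data; the table and the addition assertion are invented purely for illustration:

import unittest

somebigtable = [(1, 2, 3), (2, 2, 4), (10, 5, 15)]   # (a, b, expected) rows

class MyTestCase(unittest.TestCase):
    pass

def createTest(testcase, somedata, index):
    a, b, expected = somedata
    def test(self):
        self.assertEqual(a + b, expected)
    # one name per row, so a failure points straight at the offending data
    setattr(testcase, "test_%d" % index, test)

for index, somedata in enumerate(somebigtable):
    createTest(MyTestCase, somedata, index)

if __name__ == "__main__":
    unittest.main()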
You can't do that inside the body of the class because there's no way to reference either the class itself or its dictionary before the definition is complete. You can, however, do something similar with a metaclass, as that lets you modify the class dict before the class itself is created, and sometimes that is a cleaner way of doing the same sort of thing.
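A minimal sketch of the metaclass variant, again with invented names, where the class namespace is filled in before the class object is created:

somebigtable = [(1, 2, 3), (2, 2, 4)]            # illustrative data again

def _make_test(a, b, expected):
    def test(self):
        assert a + b == expected
    return test

class DataDrivenMeta(type):
    def __new__(mcls, name, bases, namespace):
        # add one method per data row before the class is created
        for index, row in enumerate(somebigtable):
            namespace["test_%d" % index] = _make_test(*row)
        return super().__new__(mcls, name, bases, namespace)

class MyTests(metaclass=DataDrivenMeta):
    pass

print(sorted(n for n in vars(MyTests) if n.startswith("test_")))
# ['test_0', 'test_1']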
The other thing to note is that there are situations where this won't work. Some __xxx__ special methods cannot be overridden after the class has been created: the original definition is saved internally somewhere other than the class's __dict__, so any changes you make later may be ignored. Also, when working with metaclasses, functions added afterwards sometimes won't get whatever treatment the metaclass gives to attributes as part of the class definition.