I created a Stack class as a Python exercise, implemented entirely with list operations. For example, Stack.push() is just list.append(), Stack.pop() is list.pop(), and Stack.isEmpty() is just list == [].
I used my Stack class to implement a decimal-to-binary converter, and I noticed that even though the two functions are completely equivalent apart from the Stack wrapper around push(), pop() and isEmpty(), the implementation using the Stack class is twice as slow as the implementation using Python's list directly.
Is that because there's always an inherent overhead to using classes in Python? And if so, where does the overhead come from technically speaking ("under the hood")? Finally, if the overhead is so significant, isn't it better not to use classes unless you absolutely have to?
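For reference, the question doesn't show the Stack class itself, so here is a minimal sketch of a wrapper matching the description above (an assumed implementation, not the asker's actual code):

```python
class Stack:
    """Thin wrapper around a Python list -- assumed implementation."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)   # push() is just list.append()

    def pop(self):
        return self._items.pop()   # pop() is just list.pop()

    def isEmpty(self):
        return self._items == []   # isEmpty() is just a comparison with []
```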
def dectobin1(num):
    s = Stack()
    while num > 0:
        s.push(num % 2)
        num = num // 2
    binnum = ''
    while not s.isEmpty():
        binnum = binnum + str(s.pop())
    return binnum
def dectobin2(num):
    l = []
    while num > 0:
        l.append(num % 2)
        num = num // 2
    binnum = ''
    while not l == []:
        binnum = binnum + str(l.pop())
    return binnum
from timeit import Timer

t1 = Timer('dectobin1(255)', 'from __main__ import dectobin1')
print(t1.timeit(number=1000))
# 0.0211110115051
t2 = Timer('dectobin2(255)', 'from __main__ import dectobin2')
print(t2.timeit(number=1000))
# 0.0094211101532
No. In general, you will not notice any difference in performance based on whether or not you use classes. The different code structures they imply may mean that one is faster than the other, but it's impossible to say in advance which. Always write code to be read; then, if and only if it's not fast enough, make it faster.
First off, a warning: function calls are rarely what limits your speed. This is often an unnecessary micro-optimisation. Only do it if it is what actually limits your performance. Do some careful profiling first and check whether there might be a better way to optimise.
Make sure you don't sacrifice legibility for this tiny performance tweak!
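As an illustrative sketch of such profiling with the standard library's cProfile and pstats modules (the dectobin function here is just a stand-in for whatever you want to measure):

```python
import cProfile
import pstats


def dectobin(num):
    # Stand-in for the function under test.
    digits = []
    while num > 0:
        digits.append(str(num % 2))
        num //= 2
    return ''.join(reversed(digits))


profiler = cProfile.Profile()
profiler.enable()
for _ in range(1000):
    dectobin(255)
profiler.disable()

# Show the five most expensive calls by cumulative time.
stats = pstats.Stats(profiler)
stats.sort_stats('cumulative').print_stats(5)
```

If most of the time is spent somewhere other than attribute lookups, optimising them away won't help.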
Classes in Python are a little bit of a hack.
The way it works is that each object has a __dict__ field (a dict) which contains all the attributes the object holds. Each object also has a __class__ reference, which in turn has its own __dict__ field (again a dict) containing all the class attributes.
So for example have a look at this:
>>> class X(): # I know this is an old-style class declaration, but it causes far less clutter for this demonstration
...     def y(self):
...         pass
...
>>> x = X()
>>> x.__class__.__dict__
{'y': <function y at 0x6ffffe29938>, '__module__': '__main__', '__doc__': None}
If you define a function dynamically (i.e. not in the class declaration but after the object has been created), the function does not go into x.__class__.__dict__ but into x.__dict__.
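A quick sketch of that difference, continuing the X example (new-style class here, but the mechanism is the same):

```python
class X:
    def y(self):
        pass

x = X()

# Define a function dynamically and attach it to the instance.
def z(self):
    pass

x.z = z

print('y' in x.__class__.__dict__)  # True:  y was declared in the class body
print('y' in x.__dict__)            # False: y is not on the instance
print('z' in x.__dict__)            # True:  z was assigned on the instance
print('z' in x.__class__.__dict__)  # False: the class never saw z
```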
There are also two dicts that hold all variables accessible from the current function: globals() and locals(), which contain the global and local variables respectively.
So now let's say you have an object x of class X, with functions y and z that were declared in the class declaration, and a second function z, which was defined dynamically. Let's say the object x is defined in global space.
Also, for comparison, there are two functions: flocal(), which was defined in local space, and fglobal(), which was defined in global space.
Now I will show what happens if you call each of these functions:
flocal():
    locals()["flocal"]()
fglobal():
    locals()["fglobal"] -> not found
    globals()["fglobal"]()
x.y():
    locals()["x"] -> not found
    globals()["x"].__dict__["y"] -> not found, because y is in class space
              .__class__.__dict__["y"]()
x.z():
    locals()["x"] -> not found
    globals()["x"].__dict__["z"]() -> found in object dict, ignoring z() in class space
So as you can see, class-space methods take considerably longer to look up, and object-space methods are slow as well. The fastest option is a local function.
But you can get around that without sacrificing classes. Let's say x.y() is called quite a lot and needs to be optimised:
class X():
    def y(self):
        pass

x = X()

for i in range(100000):
    x.y()  # slow

y = x.y  # move the attribute lookup outside the loop
for i in range(100000):
    y()  # faster
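To actually see the difference, you can time the two variants with timeit (an illustrative sketch; the absolute numbers depend on your machine and Python version):

```python
from timeit import timeit


class X:
    def y(self):
        pass


x = X()
y = x.y  # bound method cached under a plain name

# One attribute lookup per call vs. a pre-bound name.
t_method = timeit('x.y()', globals={'x': x}, number=1_000_000)
t_local = timeit('y()', globals={'y': y}, number=1_000_000)

print(f'attribute lookup each call: {t_method:.3f}s')
print(f'pre-bound name:             {t_local:.3f}s')
```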
Similar things happen with member variables of objects: they are also slower to access than local variables. The effect adds up if you call a function or use a member variable that lives in an object which is itself a member variable of another object. For example,
a.b.c.d.e.f()
is a fair bit slower, as each dot needs another dictionary lookup.
The official Python performance guide recommends avoiding dots in performance-critical parts of the code: https://wiki.python.org/moin/PythonSpeed/PerformanceTips
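The same caching trick applies to dotted chains: resolve the chain once, outside the hot loop. A small sketch with hypothetical nested objects standing in for a.b.c.d.e.f():

```python
class Leaf:
    def f(self):
        return 1


class Node:
    def __init__(self, child):
        self.child = child


a = Node(Node(Node(Leaf())))

f = a.child.child.child.f  # resolve the dotted chain once

total = 0
for _ in range(100000):
    total += f()  # no per-iteration attribute lookups
```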