Why is an in-place integer operation like a *= b slower than a = a * b?

Tags: python

I know integers are immutable, so the computed values do not modify the original integer objects. An in-place operation should therefore do the same work as the plain form: (1) compute the value and (2) rebind the name to the result. So why are the in-place operations slower than the plain ones?

import timeit
print("a = a + 1: ", end="")
print(timeit.timeit("for i in range(100): a = a + 1", setup="a = 0"))
print("a += 1: ", end="")
print(timeit.timeit("for i in range(100): a += 1", setup="a = 0"))

print("a = a - 1: ", end="")
print(timeit.timeit("for i in range(100): a = a - 1", setup="a = 0"))
print("a -= 1: ", end="")
print(timeit.timeit("for i in range(100): a -= 1", setup="a = 0"))

print("a = a * 1: ", end="")
print(timeit.timeit("for i in range(100): a = a * 1", setup="a = 1"))
print("a *= 1: ", end="")
print(timeit.timeit("for i in range(100): a *= 1", setup="a = 1"))

print("a = a // 1: ", end="")
print(timeit.timeit("for i in range(100): a = a // 1", setup="a = 1"))
print("a //= 1: ", end="")
print(timeit.timeit("for i in range(100): a //= 1", setup="a = 1"))

Output:

a = a + 1: 2.922127154
a += 1: 2.9701245480000003
a = a - 1: 2.9568866799999993
a -= 1: 3.1065419050000003
a = a * 1: 2.2483990140000003
a *= 1: 2.703524648
a = a // 1: 2.534561783000001
a //= 1: 2.6582312889999997

All the in-place operations are slower than the simple ones. Addition has the smallest difference while multiplication has the greatest.
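
One way to see what each statement actually does is to inspect the bytecode: both forms load the operand, perform the operation, and store the result back into the name, differing only in the opcode chosen for the operation (exact opcode names vary by CPython version). A minimal check:

import dis

# On CPython 3.10 and earlier these disassemble to BINARY_ADD vs. INPLACE_ADD;
# on 3.11+ both compile to BINARY_OP (with different arguments).
dis.dis("a = a + 1")
dis.dis("a += 1")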

asked May 21 '21 by adamkwm


1 Answer

There is a problem with that experiment: a single for-loop of 100 iterations containing nothing but an assignment such as a = a + 1 or a += 1 should not take anywhere near that long to run (more than a second).

Compare those results using timeit to the following direct execution of the same for-loop:

def not_in_place(n, a=0):
    for i in range(n): a = a + 1
    return a

def in_place(n, a=0):
    for i in range(n): a += 1
    return a

As expected, it takes on the order of 100 million iterations to reach comparable times (a few seconds):

not_in_place(100_000_000)
in_place(100_000_000)
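
For reference, a minimal way to time those calls directly (absolute numbers will vary by machine; time.perf_counter is used here purely for illustration):

import time

for fn in (not_in_place, in_place):
    start = time.perf_counter()
    fn(100_000_000)
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f} s")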

(Edit: as pointed out in the comments, timeit wraps the timed statement, which itself contains 100 iterations, in a loop of 1,000,000 repetitions by default, so the figures above cover 100 million operations each.)
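
To make that explicit, the number of repetitions can be passed to timeit directly (1,000,000 is the default), and dividing by the total operation count gives a rough per-operation cost:

import timeit

# 1_000_000 repetitions of a 100-iteration loop = 100 million additions in total
total = timeit.timeit("for i in range(100): a += 1", setup="a = 0", number=1_000_000)
print(total)                # total seconds, comparable to the figures in the question
print(total / 100_000_000)  # rough cost per addition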

We still need to establish which is faster: in-place (a+=1) or not in-place (a=a+1).

To do so, we need to observe how both cases behave given an increasing number of iterations:

import perfplot
perfplot.bench(
    n_range=[2**k for k in range(26)],
    setup=lambda n: n,
    kernels=[not_in_place, in_place],
    labels=["a = a + 1", "a += 1"],
).show()

The difference observed with timeit at 100 iterations could not be reproduced in large-scale runs that use the same for-loop code inside a kernel function for each case:

[Plot: perfplot runtime vs. number of iterations for a = a + 1 and a += 1.]

Most importantly, the time difference between the two cases (in-place and not in-place) appears to be invariant to the number of iterations.

The same goes for the other operators (-, *, etc.):

[Plots: the same perfplot comparison for the other operators.]
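
For completeness, a sketch of how the same perfplot benchmark could be extended to another operator; the kernel functions below are illustrative and mirror the multiplication case from the question:

import perfplot  # already imported above; repeated so the snippet stands alone

def not_in_place_mul(n, a=1):
    for i in range(n): a = a * 1
    return a

def in_place_mul(n, a=1):
    for i in range(n): a *= 1
    return a

perfplot.bench(
    n_range=[2**k for k in range(26)],  # the upper end takes a while in pure Python
    setup=lambda n: n,
    kernels=[not_in_place_mul, in_place_mul],
    labels=["a = a * 1", "a *= 1"],
).show()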

answered Oct 07 '22 by Marco Oliveira