
What happened to PyPy3's run time at the 10**10 loop?

Consider this simple while loop, timed with a count_run_time decorator:

from typing import Tuple

@count_run_time
def test_while(l: int = 0) -> Tuple[int, int]:
    # Sums 0 .. l-1 with an explicit loop; returns (iteration count, sum).
    y = 0
    x = 0
    while x < l:
        y += x
        x += 1
    return x, y
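
(count_run_time is not defined in the question; a minimal sketch of such a timing decorator, with the name and the "[func: ...] cost @...s" output format taken from the logs below and everything else assumed, might look like this:)

import functools
import time

def count_run_time(func):
    # Print wall-clock run time in the "[func: name] cost @<seconds>s" format.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"[func: {func.__name__}] cost @{elapsed:.6f}s")
        return result
    return wrapper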

When I run it under CPython (Python 3.6.8 (v3.6.8:3c6b436a57, Dec 24 2018, 02:04:31)):

test_while(10**5)
[func: test_while] cost @0.008665s
(100000, 4999950000)

test_while(10**6)
[func: test_while] cost @0.080222s
(1000000, 499999500000)

test_while(10**7)
[func: test_while] cost @0.814199s
(10000000, 49999995000000)

test_while(10**8)
[func: test_while] cost @7.944017s
(100000000, 4999999950000000)

test_while(10**9)
[func: test_while] cost @80.063558s
(1000000000, 499999999500000000)

test_while(10**10)
[func: test_while] cost @851.572578s
(10000000000, 49999999995000000000)

As the results show, the time consumed grows linearly with the number of loop iterations.

Next, I ran the same loop under PyPy3 (Python 3.6.1 (784b254d6699, Apr 14 2019, 10:22:55), [PyPy 7.1.1-beta0 with GCC 4.2.1 Compatible Apple LLVM 10.0.0 (clang-1000.11.45.5)]), and something strange happened:

test_while(10**5)
[func: test_while] cost @0.000117s
(100000, 4999950000)

test_while(10**6)
[func: test_while] cost @0.001446s
(1000000, 499999500000)

test_while(10**7)
[func: test_while] cost @0.010868s
(10000000, 49999995000000)

test_while(10**8)
[func: test_while] cost @0.105472s
(100000000, 4999999950000000)

test_while(10**9)
[func: test_while] cost @1.055387s
(1000000000, 499999999500000000)

test_while(10**10)
[func: test_while] cost @99.784553s
(10000000000, 49999999995000000000)

From 10**5 through 10**9, each tenfold increase in the loop count costs roughly ten times as much time. But at 10**10, the time jumps by a factor of about 100 instead of 10.

What happened to PyPy3 at 10**10?

asked by Jacky Xu

1 Answer

PyPy achieves its significant speedup through well-known JIT compilation and optimization techniques. One of them is to use native 64-bit integer arithmetic wherever possible and to fall back to a big-integer implementation only when the regular computation would overflow the optimized path. Your last test case is the first one whose result exceeds the 64-bit range, and I strongly suspect that this fallback to the slower representation causes the extra tenfold slowdown on top of the expected one. Note also that big-integer addition is not constant-time: it is linear in the number of digits, so I would expect superlinear timings with even bigger inputs under both interpreters (although not a sudden tenfold jump from that alone).
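
To see where the fast path would end, you can compute the first iteration at which the accumulator y = x*(x-1)//2 exceeds 2**63 - 1. A small sketch (assuming the fast path covers exactly signed 64-bit integers, which is an assumption about PyPy internals, not something verified against its source):

# Find the smallest x where y = x*(x-1)//2 leaves the signed 64-bit range.
# Assumption: PyPy's fast integer path tops out at 2**63 - 1.
INT64_MAX = 2**63 - 1

def first_overflow_iteration(limit: int = 10**10) -> int:
    lo, hi = 1, limit
    while lo < hi:  # binary search over a monotonically increasing predicate
        mid = (lo + hi) // 2
        if mid * (mid - 1) // 2 > INT64_MAX:
            hi = mid
        else:
            lo = mid + 1
    return lo

x = first_overflow_iteration()
print(x)                 # 4294967297, i.e. 2**32 + 1
print(x * (x - 1) // 2)  # 9223372039002259456, just above 2**63 - 1

If that assumption holds, the 10**9 run (final sum around 5*10**17) stays entirely on the fast path, while in the 10**10 run roughly 57% of the iterations happen after the overflow point and would take the slower big-integer path, which fits the observed extra slowdown.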

answered by Tamas Hegedus