
Why is pandas.Series.map so shockingly slow?

Tags: python, pandas

Some days I just hate using middleware. Take this for example: I'd like a lookup table that maps each value from a set of input (domain) values to an output (range) value. The mapping is unique. A Python dict can do this, but since the map is quite big I figured, why not use a pd.Series and its index, which has the added benefit that I can:

  • pass in multiple values to be mapped as a Series (hopefully faster than a dictionary lookup)
  • keep the original Series' index in the result

like so:

domain2range = pd.Series(allrangevals, index=alldomainvals)  # lookup table: range values indexed by domain values
# Apply the map
query_vals = pd.Series(domainvals, index=someindex)
result = query_vals.map(domain2range)
assert result.index.equals(query_vals.index)  # Nice: the query's index is preserved
assert result.isin(allrangevals).all()        # Nice: every mapped value is in the range

It works as expected. But not quite: the time cost of that .map call grows with len(domain2range), not (as one would hope) with len(query_vals), as can be shown:

import timeit

import numpy as np
import pandas as pd

numiter = 100
for n in [10, 1000, 1000000, 10000000]:
    domain = np.arange(0, n)
    rangevals = domain + 10  # renamed so the built-in range isn't shadowed
    maptable = pd.Series(rangevals, index=domain).sort_index()

    query_vals = pd.Series([1, 2, 3])

    def f():
        query_vals.map(maptable)
    print(n, timeit.timeit(stmt=f, number=numiter) / numiter)


10 0.000630810260773
1000 0.000978469848633
1000000 0.00130645036697
10000000 0.0162791204453

Facepalm. At n=10000000 it's taking roughly 0.016 / 3 ≈ 0.005 seconds per mapped value.

So, questions:

  • Is Series.map expected to behave like this? Why is it so utterly, ridiculously slow? I think I'm using it as shown in the docs.
  • Is there a fast way to use pandas to do a table lookup? It seems the above is not it.
asked Oct 16 '22 by user48956

1 Answer

https://github.com/pandas-dev/pandas/issues/21278

Warm-up was the issue (double facepalm). Pandas silently builds and caches a hash table for the index the first time it is used, at O(len(maptable)) cost. Calling each tested function once beforehand, so the index is already built, gives much better performance.
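
As a quick illustration of that caching (a minimal sketch with illustrative names; exact timings depend on machine and pandas version), the first .map call against a freshly built table pays the hash-building cost, while later calls do not:

import timeit

import numpy as np
import pandas as pd

# Sketch of the warm-up effect: the first lookup against a fresh index builds
# (and caches) its hash table at O(len(maptable)) cost; later lookups only pay
# for the queried values.
n = 10000000
maptable = pd.Series(np.arange(n) + 10, index=np.arange(n))
query_vals = pd.Series([1, 2, 3])

print("first call :", timeit.timeit(lambda: query_vals.map(maptable), number=1))
print("later calls:", timeit.timeit(lambda: query_vals.map(maptable), number=100) / 100)

With each tested function called once before timing, the benchmark looks like this: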

import timeit

import numpy as np
import pandas as pd

numiter = 100
for n in [10, 100000, 1000000, 10000000]:
    domain = np.arange(0, n)
    rangevals = domain + 10  # renamed so the built-in range isn't shadowed
    maptable = pd.Series(rangevals, index=domain)  # .sort_index()

    query_vals = pd.Series([1, 2, 3])

    # Series.map with a Series argument
    def f1():
        query_vals.map(maptable)
    f1()  # warm up: build the index's hash table before timing
    print("Pandas1 ", n, timeit.timeit(stmt=f1, number=numiter) / numiter)

    # Series.map with the table's .get method
    def f2():
        query_vals.map(maptable.get)
    f2()  # warm up
    print("Pandas2 ", n, timeit.timeit(stmt=f2, number=numiter) / numiter)

    # plain Python dict lookup for comparison
    maptabledict = maptable.to_dict()
    query_vals_list = pd.Series([1, 2, 3]).tolist()

    def f3():
        {k: maptabledict[k] for k in query_vals_list}
    f3()
    print("Py dict ", n, timeit.timeit(stmt=f3, number=numiter) / numiter)
    print()

pd.show_versions()
Pandas1  10 0.000621199607849
Pandas2  10 0.000686831474304
Py dict  10 2.0170211792e-05

Pandas1  100000 0.00149286031723
Pandas2  100000 0.00118808984756
Py dict  100000 8.47816467285e-06

Pandas1  1000000 0.000708899497986
Pandas2  1000000 0.000479419231415
Py dict  1000000 1.64794921875e-05

Pandas1  10000000 0.000798969268799
Pandas2  10000000 0.000410139560699
Py dict  10000000 1.47914886475e-05

... although it is a little depressing that plain Python dictionaries are still an order of magnitude or more faster.
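
If the index preservation the question asked for is still wanted, a hedged sketch (not from the original answer) is to do the lookups through the plain dict and wrap the result back into a Series carrying the query's index:

import numpy as np
import pandas as pd

# Hedged sketch: dict-speed lookups that still return a Series carrying the
# query's original index, as the question wanted.
maptable = pd.Series(np.arange(1000000) + 10, index=np.arange(1000000))
query_vals = pd.Series([1, 2, 3], index=["a", "b", "c"])

maptabledict = maptable.to_dict()
result = pd.Series([maptabledict[v] for v in query_vals], index=query_vals.index)

assert result.index.equals(query_vals.index)  # original index preserved
assert result.tolist() == [11, 12, 13]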

answered Oct 21 '22 by user48956