Looking for the fastest way to slice a row in a huge pandas DataFrame

Tags: python, pandas

My program needs to fetch a row from a huge pandas DataFrame based on the value in a column, and response time is critical. I use the most common way to do it, for example:

df.loc[df['id'] == 500000, :]

Per timeit on my Mac, it takes 4 ms to complete the above operation on a DataFrame with 1 million rows, but my goal is to reduce that to something like 0.4 ms. I once considered converting the DataFrame to a set, but a set is unordered and does not natively support indexing or slicing. Any suggestions?
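
For reference, here is a minimal, self-contained way to reproduce that kind of measurement. The 1-million-row DataFrame below is a made-up stand-in, not the asker's actual data:

import timeit

import numpy as np
import pandas as pd

# Assumed stand-in for the asker's data: 1M rows with an integer id column.
df = pd.DataFrame({"id": np.arange(1_000_000)})

# Total seconds for 1000 runs; dividing by the run count and converting
# to milliseconds gives the per-lookup time.
total = timeit.timeit("df.loc[df['id'] == 500000, :]", globals=globals(), number=1000)
print(f"{total / 1000 * 1000:.3f} ms per lookup")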

Asked Nov 04 '25 by Z.Wei


1 Answer

Let's set this up:

import timeit

import numpy as np
import pandas as pd

# 1M rows; ids are drawn from 0-99, so each id value repeats ~10,000 times
df = pd.DataFrame({"id": np.random.randint(100, size=(1000000,))})

Then let's benchmark some options. Your current approach, a boolean mask with .loc:

>>> timeit.timeit("df.loc[df['id'] == 50, :]", setup = "from __main__ import df", number=1000)
2.566220869999597

The query engine:

>>> timeit.timeit("df.query('id == 50')", setup = "from __main__ import df", number=1000)
14.591400260000228

Using the index as a separate lookup:

>>> idx = pd.Index(df['id'])
>>> timeit.timeit("df.loc[idx == 50, :]", setup = "from __main__ import df, idx", number=1000)
2.2155187300013495

Using the dataframe index for lookup:

>>> df.index = df["id"]
>>> timeit.timeit("df.loc[50, :]", setup = "from __main__ import df", number=1000)
2.625610274999417
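
One related variant, offered as my own aside rather than part of the original benchmarks: with id copied into the index like this, the index is not unique, so df.loc[50, :] returns every row whose id is 50. Sorting the index makes it monotonic, which lets pandas locate labels with a binary search instead of scanning:

>>> df_sorted = df.sort_index()  # monotonic index enables binary-search lookups
>>> df_sorted.loc[50]            # all rows with id == 50, without a full scan

Whether this wins in practice depends on your data, so it is worth timing against the options above.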

And that .isin() idea that someone in the comments had:

>>> timeit.timeit("df.loc[df['id'].isin([50]), :]", setup = "from __main__ import df", number=1000)
9.542700138999862

Looks like, with the exception of the query engine being slow for a simple equality (as expected), you're not going to get much better than the lookup time you've already got.

If your id values are unique, though, there is more room to work with. Let's see how a unique id might be helpful:

df_unique = pd.DataFrame({'id': range(1000000)})

>>> timeit.timeit("df_unique.loc[df_unique['id'] == 50, :]", setup = "from __main__ import df_unique", number=1000)
1.9672015519990964

Then convert it to a dict:

>>> df_unique.index = df_unique['id']
>>> df_dict = df_unique.to_dict(orient='index')
>>> timeit.timeit("df_dict[50]", setup = "from __main__ import df_dict", number=1000)
6.247700002859347e-05

Well, that looks like a clear winner: a plain dict lookup is a single hash-table hit, with none of pandas' indexing overhead.

>>> timeit.timeit("pd.Series(df_dict[50])", setup = "from __main__ import df_dict, pd", number=1000)
0.2747819870000967

Even if you have to cast the result back to a Series for something, this is an order of magnitude faster than before. (You could also map the rows to Series inside the dict ahead of time if needed, and keep the speed of a dict lookup with no per-lookup overhead.)
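
As a sketch of that parenthetical idea (the series_dict name and fetch_row helper are mine, not from the answer above): precompute one Series per row, so the conversion cost is paid once and each lookup stays a pure hash hit while still returning a pandas object.

# Assumption: ids are unique, as in df_unique above.
# Pay the dict -> Series conversion cost once, up front.
series_dict = {i: pd.Series(row) for i, row in df_dict.items()}

def fetch_row(row_id):
    # Hypothetical helper: O(1) dict lookup returning a ready-made Series.
    return series_dict[row_id]

print(fetch_row(50))

The upfront conversion is not free for a million rows, so this only pays off when lookups dominate the workload.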

Answered Nov 06 '25 by CJR


