Vectorizing a Pandas apply function for tz_convert

I have a dataframe where the hour column contains datetime data in UTC. I have a time_zone column with time zones for each observation, and I'm using it to convert hour to the local time and save it in a new column named local_hour. To do this, I'm using the following code:

import pandas as pd

# Sample dataframe
df = pd.DataFrame({
    'hour': ['2019-01-01 05:00:00', '2019-01-01 07:00:00', '2019-01-01 08:00:00'],
    'time_zone': ['US/Eastern', 'US/Central', 'US/Mountain']
})

# Ensure hour is in datetime format and localized to UTC
df['hour'] = pd.to_datetime(df['hour']).dt.tz_localize('UTC')

# Add local_hour column with hour in local time 
df['local_hour'] = df.apply(lambda row: row['hour'].tz_convert(row['time_zone']), axis=1)

df
    hour                        time_zone   local_hour
0   2019-01-01 05:00:00+00:00   US/Eastern  2019-01-01 00:00:00-05:00
1   2019-01-01 07:00:00+00:00   US/Central  2019-01-01 01:00:00-06:00
2   2019-01-01 08:00:00+00:00   US/Mountain 2019-01-01 01:00:00-07:00

The code works. However, using apply runs quite slowly, since in reality I have a large dataframe. Is there a way to vectorize this or otherwise speed it up?

Note: I have tried using the swifter package, but in my case it doesn't speed things up.

asked Oct 28 '25 by Gaurav Bansal

1 Answer

Assuming there isn't an unbounded number of distinct values in time_zone, you could perform a tz_convert per group, like:

df['local_hour'] = df.groupby('time_zone')['hour'].apply(lambda x: x.dt.tz_convert(x.name))
print(df)

                       hour    time_zone                 local_hour
0 2019-01-01 05:00:00+00:00   US/Eastern  2019-01-01 00:00:00-05:00
1 2019-01-01 07:00:00+00:00   US/Central  2019-01-01 01:00:00-06:00
2 2019-01-01 08:00:00+00:00  US/Mountain  2019-01-01 01:00:00-07:00

On this small sample it will probably be slower than the apply version, but on bigger data with relatively few groups it should be faster.

For speed comparison, with the df of 3 rows you provided, it gives:

%timeit df.apply(lambda row: row['hour'].tz_convert(row['time_zone']), axis=1)
# 1.6 ms ± 102 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit df.groupby('time_zone')['hour'].apply(lambda x: x.dt.tz_convert(x.name))
# 2.58 ms ± 126 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

so apply is faster here, but if you create a dataframe 1000 times bigger while keeping only 3 time zones, groupby becomes about 20 times faster:

df = pd.concat([df]*1000, ignore_index=True)

%timeit df.apply(lambda row: row['hour'].tz_convert(row['time_zone']), axis=1)
# 585 ms ± 42.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit df.groupby('time_zone')['hour'].apply(lambda x: x.dt.tz_convert(x.name))
# 27.5 ms ± 2.15 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
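The same group-wise idea can also be written as an explicit loop over the distinct zones, converting each slice in one vectorized call instead of calling tz_convert once per row. A minimal sketch (my own variant, not the answer's code; note the result column is object dtype because each group carries a different time zone):

```python
import pandas as pd

df = pd.DataFrame({
    'hour': pd.to_datetime(['2019-01-01 05:00:00',
                            '2019-01-01 07:00:00',
                            '2019-01-01 08:00:00']).tz_localize('UTC'),
    'time_zone': ['US/Eastern', 'US/Central', 'US/Mountain']
})

# One tz_convert call per distinct zone, applied to all rows of that zone at once
local = pd.Series(index=df.index, dtype='object')
for tz, idx in df.groupby('time_zone').groups.items():
    local.loc[idx] = df.loc[idx, 'hour'].dt.tz_convert(tz)
df['local_hour'] = local
```

Like the groupby.apply version, the cost scales with the number of distinct zones rather than the number of rows, so it stays fast when a large dataframe has only a handful of time zones.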
answered Oct 29 '25 by Ben.T

