I have a df like below
col1, mydate
1, 25-DEC-2016 09:15:00
2, 25-DEC-2016 10:14:00
3, 25-DEC-2016 10:16:00
4, 25-DEC-2016 10:18:56
2, 25-DEC-2016 11:14:00
2, 25-DEC-2016 10:16:00
df.info(): mydate 323809 non-null object
I need to slice this dataframe according to time, e.g. one df with times less than 10:15:00, one with times less than 11:15:00, and so on.
So I created my slice boundaries using
times = [pd.to_datetime(i) for i in ['10:15:00','11:15:00','12:15:00','13:15:00','14:15:00','15:15:00','15:30:00']]
Then I convert the mydate column to time, which takes a lot of time:
df['mydate'] = df.mydate.apply(lambda x: pd.to_datetime(x, infer_datetime_format=True).time())
I think the above command can be optimised, or there should be a better/faster way.
Then I simply do
for time in times:
    slice = df[df.mydate < time.time()]
My intent is only to compare the time part of df.mydate with ['10:15:00','11:15:00','12:15:00','13:15:00','14:15:00','15:15:00','15:30:00'] (not the dates) and simply subset df.
The above way works fine for me, but I am looking for a better way.
Additional: interestingly, sorting mydate was very fast (even though I did not convert the mydate col to datetime) using
df.sort_values(by='mydate')
which makes me think that my way of subsetting should be faster.
The mydate col will always be in the 25-DEC-2016 09:15:00 format (note DEC, not Dec). Can I use format='%d-%b-%Y %H:%M:%S'?
First of all, I suggest using pd.to_datetime on the whole array/Series, so it would be:
pd.to_datetime(['10:15:00','11:15:00','12:15:00','13:15:00']).time
Rather than
[pd.to_datetime(i).time() for i in ['10:15:00','11:15:00','12:15:00','13:15:00']]
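For reference, the vectorised call returns a plain NumPy array of datetime.time objects (a minimal check; the exact repr may vary with your pandas version):
pd.to_datetime(['10:15:00', '11:15:00']).time
>>>
array([datetime.time(10, 15), datetime.time(11, 15)], dtype=object)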
Secondly, you are right about the format. As stated in the documentation of pd.to_datetime, it is much faster (by 5-10x) to use
pd.to_datetime(['25-DEC-2016 09:15:00', '26-DEC-2016 09:15:00'],
               format='%d-%b-%Y %H:%M:%S')
Rather than
pd.to_datetime(['25-DEC-2016 09:15:00', '26-DEC-2016 09:15:00'],
               infer_datetime_format=True)
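If you want to verify the speedup on your own data, here is a rough benchmark sketch with the standard timeit module (the sample size and repeat count are arbitrary, and the exact ratio will depend on your pandas version):
import timeit
import pandas as pd

sample = ['25-DEC-2016 09:15:00'] * 10000

# parse the same strings with an explicit format vs. format inference
t_format = timeit.timeit(
    lambda: pd.to_datetime(sample, format='%d-%b-%Y %H:%M:%S'), number=10)
t_infer = timeit.timeit(
    lambda: pd.to_datetime(sample, infer_datetime_format=True), number=10)

print(t_format, t_infer)  # the explicit format is typically several times faster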
Considering now your dataframe:
df = pd.DataFrame({'col1': [1, 2, 3, 2],
                   'mydate': ['25-DEC-2016 09:15:00',
                              '25-DEC-2016 11:15:00',
                              '26-DEC-2016 11:15:00',
                              '26-DEC-2016 12:15:00']})
>>>
col1 mydate
0 1 25-DEC-2016 09:15:00
1 2 25-DEC-2016 11:15:00
2 3 26-DEC-2016 11:15:00
3 2 26-DEC-2016 12:15:00
You can first transform the mydate column into an actual datetime Series:
df['mydate'] = pd.to_datetime(df.mydate, format='%d-%b-%Y %H:%M:%S')
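A quick way to confirm the conversion worked is to check the dtypes (just a sanity check, not strictly required):
df.dtypes
>>>
col1               int64
mydate    datetime64[ns]
dtype: object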
Then you'll be able to access the date and time fields (and a lot more) through the dt accessor:
df.mydate.dt.date
>>>
0 2016-12-25
1 2016-12-25
2 2016-12-26
3 2016-12-26
df.mydate.dt.time
>>>
0 09:15:00
1 11:15:00
2 11:15:00
3 12:15:00
So when computing the slices you can use:
for time in times:
    slice = df[df.mydate.dt.time < time]
    print(time, slice, sep='\n')
>>>
10:15:00
col1 mydate
0 1 2016-12-25 09:15:00
11:15:00
col1 mydate
0 1 2016-12-25 09:15:00
12:15:00
col1 mydate
0 1 2016-12-25 09:15:00
1 2 2016-12-25 11:15:00
2 3 2016-12-26 11:15:00
13:15:00
col1 mydate
0 1 2016-12-25 09:15:00
1 2 2016-12-25 11:15:00
2 3 2016-12-26 11:15:00
3 2 2016-12-26 12:15:00
Note that what you get are not actually slices, because they have overlapping records, so you might want to use something similar to:
for start, end in zip(times, times[1:]):
    slice = df[(start <= df.mydate.dt.time) & (df.mydate.dt.time < end)]
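If you prefer to avoid the loop entirely, one alternative (a sketch, assuming mydate has already been converted to datetime64 as above; the tod_bin column name is just illustrative) is to label every row with its time bin in a single pass, by turning the time of day into seconds since midnight and cutting on your boundaries:
# seconds since midnight, as floats
seconds = (df['mydate'] - df['mydate'].dt.normalize()).dt.total_seconds()

# bin edges in seconds; the leading 0 catches rows before the first boundary
edges = [0] + list(pd.to_timedelta(
    ['10:15:00', '11:15:00', '12:15:00', '13:15:00',
     '14:15:00', '15:15:00', '15:30:00']).total_seconds())

# pd.cut is right-closed by default, i.e. bins are (start, end];
# pass right=False if a boundary value should fall in the next bin instead
df['tod_bin'] = pd.cut(seconds, edges)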
As a final note, what you are trying to accomplish with the for loop can be obtained using the group by operations from pandas. You just need to prepare a mytime column with the times only:
df['mytime'] = df.mydate.dt.time
groups = df.groupby('mytime')
for group_key, group_df in groups:
    print(group_key, group_df, sep='\n')
>>>
09:15:00
col1 mydate mytime
0 1 2016-12-25 09:15:00 09:15:00
11:15:00
col1 mydate mytime
1 2 2016-12-25 11:15:00 11:15:00
2 3 2016-12-26 11:15:00 11:15:00
12:15:00
col1 mydate mytime
3 2 2016-12-26 12:15:00 12:15:00
The nice thing is that you don't need to operate on the single dataframes, but you can apply the same operations and aggregations on every group at the same time:
groups.size()
>>>
mytime
09:15:00 1
11:15:00 2
12:15:00 1
groups.sum()
>>>
col1
mytime
09:15:00 1
11:15:00 5
12:15:00 2
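If you need several statistics per group at once, the same groups object also accepts a list of aggregations; a small sketch (the particular aggregations are only an example):
groups['col1'].agg(['size', 'sum', 'mean'])
>>>
          size  sum  mean
mytime
09:15:00     1    1   1.0
11:15:00     2    5   2.5
12:15:00     1    2   2.0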
I believe timedelta is better for working in pandas, so first split the string column and select the times for converting:
df['mydate'] = pd.to_timedelta(df['mydate'].str.split().str[1])
print (df)
col1 mydate
0 1 09:15:00
1 2 10:14:00
2 3 10:16:00
3 4 10:18:56
4 2 11:14:00
5 2 10:16:00
Convert the list too:
times = pd.to_timedelta(['10:15:00','11:15:00','12:15:00',
                         '13:15:00','14:15:00','15:15:00','15:30:00'])
print (times)
TimedeltaIndex(['10:15:00', '11:15:00', '12:15:00', '13:15:00', '14:15:00',
'15:15:00', '15:30:00'],
dtype='timedelta64[ns]', freq=None)
Last, create the slices:
for time in times:
    sl = df[df.mydate < time]
    print(sl)
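Since times is already sorted, another option (a sketch, assuming the same timedelta conversion as above) is to sort df by mydate once and let NumPy's searchsorted find all the cut positions in one call, so every slice is just a prefix of the sorted frame:
import numpy as np

df_sorted = df.sort_values('mydate').reset_index(drop=True)

# one insertion position per threshold; side='left' matches df.mydate < time
positions = np.searchsorted(df_sorted['mydate'].values, times.values, side='left')

slices = [df_sorted.iloc[:pos] for pos in positions]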