I am using pandas to structure and process data.
I have a DataFrame with dates as the index, an Id and a bitrate. I want to group my data by Id and, at the same time, resample the timestamps that belong to each Id, while keeping the bitrate value.
For example, given:
df = pd.DataFrame(
{'Id' : ['CODI126640013.ts', 'CODI126622312.ts'],
'beginning_time':['2016-07-08 02:17:42', '2016-07-08 02:05:35'],
'end_time' :['2016-07-08 02:17:55', '2016-07-08 02:26:11'],
'bitrate': ['3750000', '3750000'],
'type' : ['vod', 'catchup'],
'unique_id' : ['f2514f6b-ce7e-4e1a-8f6a-3ac5d524be30', 'f2514f6b-ce7e-4e1a-8f6a-3ac5d524bb22']})
which gives:
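Printed, the frame looks roughly like this (column order and spacing may differ slightly depending on the pandas version):
print (df)
                 Id       beginning_time  bitrate             end_time  \
0  CODI126640013.ts  2016-07-08 02:17:42  3750000  2016-07-08 02:17:55
1  CODI126622312.ts  2016-07-08 02:05:35  3750000  2016-07-08 02:26:11

      type                             unique_id
0      vod  f2514f6b-ce7e-4e1a-8f6a-3ac5d524be30
1  catchup  f2514f6b-ce7e-4e1a-8f6a-3ac5d524bb22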
This is my code to get a single column of dates, keeping the Id and the bitrate on every row:
df = df.drop(['type', 'unique_id'], axis=1)
df.beginning_time = pd.to_datetime(df.beginning_time)
df.end_time = pd.to_datetime(df.end_time)
df = pd.melt(df, id_vars=['Id','bitrate'], value_name='dates').drop('variable', axis=1)
df.set_index('dates', inplace=True)
which gives:
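The melted frame should print roughly like this:
print (df)
                                   Id  bitrate
dates
2016-07-08 02:17:42  CODI126640013.ts  3750000
2016-07-08 02:05:35  CODI126622312.ts  3750000
2016-07-08 02:17:55  CODI126640013.ts  3750000
2016-07-08 02:26:11  CODI126622312.ts  3750000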
And now, time to resample! This is my code:
print (df.groupby('Id').resample('1S').ffill())
And this is the result: one row per second for each Id, between its beginning_time and end_time, with the bitrate forward-filled.
This is exactly what I want to do! But when I run the same thing on my real data (38279 logs with the same columns), I get an error. The first part, up to the melt, still works perfectly.
The resampling part, df.groupby('Id').resample('1S').ffill(), gives this error message:
ValueError: cannot reindex a non-unique index with a method or limit
Any ideas? Thanks!
It seems there is a problem with duplicates in the columns beginning_time and end_time. I tried to simulate it:
df = pd.DataFrame(
{'Id' : ['CODI126640013.ts', 'CODI126622312.ts', 'a'],
'beginning_time':['2016-07-08 02:17:42', '2016-07-08 02:17:42', '2016-07-08 02:17:45'],
'end_time' :['2016-07-08 02:17:42', '2016-07-08 02:17:42', '2016-07-08 02:17:42'],
'bitrate': ['3750000', '3750000', '444'],
'type' : ['vod', 'catchup', 's'],
'unique_id':['f2514f6b-ce7e-4e1a-8f6a-3ac5d524be30', 'f2514f6b-ce7e-4e1a-8f6a-3ac5d524bb22','w']})
print (df)
Id beginning_time bitrate end_time \
0 CODI126640013.ts 2016-07-08 02:17:42 3750000 2016-07-08 02:17:42
1 CODI126622312.ts 2016-07-08 02:17:42 3750000 2016-07-08 02:17:42
2 a 2016-07-08 02:17:45 444 2016-07-08 02:17:42
type unique_id
0 vod f2514f6b-ce7e-4e1a-8f6a-3ac5d524be30
1 catchup f2514f6b-ce7e-4e1a-8f6a-3ac5d524bb22
2 s w
df = df.drop(['type', 'unique_id'], axis=1)
df.beginning_time = pd.to_datetime(df.beginning_time)
df.end_time = pd.to_datetime(df.end_time)
df = pd.melt(df, id_vars=['Id','bitrate'], value_name='dates').drop('variable', axis=1)
df.set_index('dates', inplace=True)
print (df)
Id bitrate
dates
2016-07-08 02:17:42 CODI126640013.ts 3750000
2016-07-08 02:17:42 CODI126622312.ts 3750000
2016-07-08 02:17:45 a 444
2016-07-08 02:17:42 CODI126640013.ts 3750000
2016-07-08 02:17:42 CODI126622312.ts 3750000
2016-07-08 02:17:42 a 444
print (df.groupby('Id').resample('1S').ffill())
ValueError: cannot reindex a non-unique index with a method or limit
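The root cause is that the same timestamp occurs more than once within a single Id, so the index of that group is not unique and ffill cannot reindex it. A quick check (just a sketch, applied to the melted frame above):
print (df.groupby('Id').apply(lambda x: x.index.is_unique))
This should return False for the two CODI Ids and True for 'a'.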
One possible solution is to add drop_duplicates and use the older way of resampling, with groupby and apply:
df = df.drop(['type', 'unique_id'], axis=1)
df.beginning_time = pd.to_datetime(df.beginning_time)
df.end_time = pd.to_datetime(df.end_time)
df = pd.melt(df, id_vars=['Id','bitrate'], value_name='dates').drop('variable', axis=1)
print (df.groupby('Id').apply(lambda x : x.drop_duplicates('dates')
                                          .set_index('dates')
                                          .resample('1S')
                                          .ffill()))
Id bitrate
Id dates
CODI126622312.ts 2016-07-08 02:17:42 CODI126622312.ts 3750000
CODI126640013.ts 2016-07-08 02:17:42 CODI126640013.ts 3750000
a 2016-07-08 02:17:41 a 444
2016-07-08 02:17:42 a 444
2016-07-08 02:17:43 a 444
2016-07-08 02:17:44 a 444
2016-07-08 02:17:45 a 444
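If dropping rows is not what you want, a variant (only a sketch, not from the original answer) is to collapse the duplicated timestamps per Id with an aggregation before resampling; last is used here, but first or max would behave the same way on this data:
# collapse duplicate timestamps per Id, then resample to 1-second steps
print (df.groupby('Id').apply(lambda x : x.groupby('dates').last()
                                          .resample('1S')
                                          .ffill()))
The output has the same shape as above; the two approaches only differ in which of the duplicated rows wins.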
You can also check for the duplicates by boolean indexing:
print (df[df.beginning_time == df.end_time])
Id beginning_time bitrate end_time \
0 CODI126640013.ts 2016-07-08 02:17:42 3750000 2016-07-08 02:17:42
1 CODI126622312.ts 2016-07-08 02:17:42 3750000 2016-07-08 02:17:42
type unique_id
0 vod f2514f6b-ce7e-4e1a-8f6a-3ac5d524be30
1 catchup f2514f6b-ce7e-4e1a-8f6a-3ac5d524bb22
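You can also locate the offending rows directly on the melted frame (a sketch, assuming dates is still an ordinary column, i.e. before any set_index):
# every (Id, dates) pair occurring more than once will break resample + ffill
print (df[df.duplicated(subset=['Id', 'dates'], keep=False)])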