
How to find discontinuities in a datetime index? How to take the average over each continuous segment?

I have time series data, but it has a discontinuity (2005-03-02 02:08:00 is missing).

I need a new column C such that C(i) = A(i) + B(i) + average, where the average is the mean of B over the continuous segment containing row i (up to the discontinuity at 02:08:00).

average = Data.loc['2005-03-02 02:05:30':'2005-03-02 02:07:30', ['B']].mean(axis=0)

After the discontinuity, the average has to be recalculated up to the next discontinuity:

average = Data.loc['2005-03-02 02:08:30':'2005-03-02 02:11:00', ['B']].mean(axis=0)
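As a quick sanity check, the two segment averages can be computed in isolation (a minimal sketch with the B values copied from the input data below):

```python
import pandas as pd

# Values of B in the two continuous segments, copied from the input data
b_first = pd.Series([3, 4, 5, 6, 7])         # 02:05:30 .. 02:07:30
b_second = pd.Series([9, 9, 9, 12, 13, 14])  # 02:08:30 .. 02:11:00

avg_first = b_first.mean()    # 5.0
avg_second = b_second.mean()  # 11.0
```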

Input

Date,A,B  
2005-03-02 02:05:30,1,3   
2005-03-02 02:06:00,2,4   
2005-03-02 02:06:30,3,5  
2005-03-02 02:07:00,4,6  
2005-03-02 02:07:30,5,7  
2005-03-02 02:08:30,7,9  
2005-03-02 02:09:00,7,9  
2005-03-02 02:09:30,7,9  
2005-03-02 02:10:00,8,12  
2005-03-02 02:10:30,9,13  
2005-03-02 02:11:00,10,14

Output

Date,A,B,C  
2005-03-02 02:05:30,1,3,9  
2005-03-02 02:06:00,2,4,11  
2005-03-02 02:06:30,3,5,13  
2005-03-02 02:07:00,4,6,15  
2005-03-02 02:07:30,5,7,17  
2005-03-02 02:08:30,7,9,27  
2005-03-02 02:09:00,7,9,27  
2005-03-02 02:09:30,7,9,27  
2005-03-02 02:10:00,8,12,31  
2005-03-02 02:10:30,9,13,33  
2005-03-02 02:11:00,10,14,35  

How can I find discontinuities in my index?

How can I do all of this using pandas?

asked Dec 17 '22 by dbose

2 Answers

Step 1: Read in the dataframe

import pandas as pd
from io import StringIO

y = '''Date,A,B
2005-03-02 02:05:30,1,3   
2005-03-02 02:06:00,2,4   
2005-03-02 02:06:30,3,5  
2005-03-02 02:07:00,4,6  
2005-03-02 02:07:30,5,7  
2005-03-02 02:08:30,7,9  
2005-03-02 02:09:00,7,9  
2005-03-02 02:09:30,7,9  
2005-03-02 02:10:00,8,12  
2005-03-02 02:10:30,9,13  
2005-03-02 02:11:00,10,14'''

df = pd.read_csv(StringIO(y), index_col='Date')

Step 2: Convert to a datetime index and resample at a 30-second frequency

df.index = pd.to_datetime(df.index)
new = df.resample('30s').mean()

Output:

                        A   B  
Date                           
2005-03-02 02:05:30   1.0   3.0
2005-03-02 02:06:00   2.0   4.0
2005-03-02 02:06:30   3.0   5.0
2005-03-02 02:07:00   4.0   6.0
2005-03-02 02:07:30   5.0   7.0
2005-03-02 02:08:00   NaN   NaN
2005-03-02 02:08:30   7.0   9.0
2005-03-02 02:09:00   7.0   9.0
2005-03-02 02:09:30   7.0   9.0
2005-03-02 02:10:00   8.0  12.0
2005-03-02 02:10:30   9.0  13.0
2005-03-02 02:11:00  10.0  14.0
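
As an aside, the gap timestamps themselves can be listed by comparing a full 30-second range against the original index. A minimal sketch on a shortened version of the question's index:

```python
import pandas as pd

# Shortened version of the question's index, with 02:08:00 missing
idx = pd.to_datetime([
    '2005-03-02 02:07:00', '2005-03-02 02:07:30',
    '2005-03-02 02:08:30', '2005-03-02 02:09:00',
])
full = pd.date_range(idx.min(), idx.max(), freq='30s')
missing = full.difference(idx)  # the timestamps absent from idx
```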

Step 3: Split the DataFrame at NaN rows and assign group IDs

new["group_no"] = new.isnull().all(axis=1).cumsum()

Output:

                        A   B    group_no
Date                                     
2005-03-02 02:05:30   1.0   3.0         0
2005-03-02 02:06:00   2.0   4.0         0
2005-03-02 02:06:30   3.0   5.0         0
2005-03-02 02:07:00   4.0   6.0         0
2005-03-02 02:07:30   5.0   7.0         0
2005-03-02 02:08:00   NaN   NaN         1
2005-03-02 02:08:30   7.0   9.0         1
2005-03-02 02:09:00   7.0   9.0         1
2005-03-02 02:09:30   7.0   9.0         1
2005-03-02 02:10:00   8.0  12.0         1
2005-03-02 02:10:30   9.0  13.0         1
2005-03-02 02:11:00  10.0  14.0         1
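
The `cumsum` trick generalizes: each all-NaN row increments the counter, so every run of consecutive non-NaN rows shares one group number. A standalone sketch on a plain Series:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 3.0, 4.0, np.nan, 5.0])
group_no = s.isnull().cumsum()
# each NaN starts a new group: 0 0 1 1 1 2 2
```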

Step 4: Get mean of B for each group_no

new['Bmean'] = new.groupby('group_no')['B'].transform('mean')

Output:

                        A     B  group_no  Bmean
Date                                            
2005-03-02 02:05:30   1.0   3.0         0    5.0
2005-03-02 02:06:00   2.0   4.0         0    5.0
2005-03-02 02:06:30   3.0   5.0         0    5.0
2005-03-02 02:07:00   4.0   6.0         0    5.0
2005-03-02 02:07:30   5.0   7.0         0    5.0
2005-03-02 02:08:00   NaN   NaN         1   11.0
2005-03-02 02:08:30   7.0   9.0         1   11.0
2005-03-02 02:09:00   7.0   9.0         1   11.0
2005-03-02 02:09:30   7.0   9.0         1   11.0
2005-03-02 02:10:00   8.0  12.0         1   11.0
2005-03-02 02:10:30   9.0  13.0         1   11.0
2005-03-02 02:11:00  10.0  14.0         1   11.0

Step 5: Apply the transformation and drop the helper columns

new['C'] = new['A'] + new['B'] + new['Bmean']
new.drop(['group_no', 'Bmean'], axis=1, inplace=True)

Output:

                        A     B     C
Date                                 
2005-03-02 02:05:30   1.0   3.0   9.0
2005-03-02 02:06:00   2.0   4.0  11.0
2005-03-02 02:06:30   3.0   5.0  13.0
2005-03-02 02:07:00   4.0   6.0  15.0
2005-03-02 02:07:30   5.0   7.0  17.0
2005-03-02 02:08:00   NaN   NaN   NaN
2005-03-02 02:08:30   7.0   9.0  27.0
2005-03-02 02:09:00   7.0   9.0  27.0
2005-03-02 02:09:30   7.0   9.0  27.0
2005-03-02 02:10:00   8.0  12.0  31.0
2005-03-02 02:10:30   9.0  13.0  33.0
2005-03-02 02:11:00  10.0  14.0  35.0
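
If the inserted gap row should not appear in the final result (as in the question's expected output), it can be dropped afterwards. A sketch on a cut-down version of the frame:

```python
import numpy as np
import pandas as pd

# Cut-down version of the final frame, including the all-NaN gap row
new = pd.DataFrame({'A': [5.0, np.nan, 7.0],
                    'B': [7.0, np.nan, 9.0],
                    'C': [17.0, np.nan, 27.0]})
result = new.dropna()  # removes the all-NaN gap row
```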
answered Mar 08 '23 by Ashish Acharya

I suggest using:

#if the index values are unique, use reindex
df = Data.reindex(pd.date_range(Data.index.min(), Data.index.max(), freq='30s'))
#if the index has duplicate values, resample instead
#df = df.resample('30s').mean()

#mask of all-NaN rows
mask = df.isnull().all(axis=1)
#sum of all columns
s1 = df.sum(axis=1)
#if only columns A and B should be summed
#s1 = df[['A', 'B']].sum(axis=1)
#create a column for grouping
df['C'] = mask.cumsum()
#filter out the NaN rows
df = df[~mask]
#transform the mean and add the sum
df['C'] = df.groupby('C')['B'].transform('mean') + s1
print (df)
                        A     B     C
2005-03-02 02:05:30   1.0   3.0   9.0
2005-03-02 02:06:00   2.0   4.0  11.0
2005-03-02 02:06:30   3.0   5.0  13.0
2005-03-02 02:07:00   4.0   6.0  15.0
2005-03-02 02:07:30   5.0   7.0  17.0
2005-03-02 02:08:30   7.0   9.0  27.0
2005-03-02 02:09:00   7.0   9.0  27.0
2005-03-02 02:09:30   7.0   9.0  27.0
2005-03-02 02:10:00   8.0  12.0  31.0
2005-03-02 02:10:30   9.0  13.0  33.0
2005-03-02 02:11:00  10.0  14.0  35.0
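
The difference between the two variants: reindex requires a unique index and inserts NaN rows at the missing timestamps, which is exactly what the grouping step relies on. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2]},
                  index=pd.to_datetime(['2005-03-02 02:05:30',
                                        '2005-03-02 02:06:30']))
full = pd.date_range(df.index.min(), df.index.max(), freq='30s')
df2 = df.reindex(full)  # the missing 02:06:00 row appears as NaN
```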

Another solution, with thanks to @iDrwish for the suggestion:

First take the difference of the index with diff (not implemented directly on a DatetimeIndex in older pandas versions, so convert the index to a Series first with to_series), compare it with a 30 s Timedelta, and create the groups with cumsum.

g = Data.index.to_series().diff().gt(pd.Timedelta(30, unit='s')).cumsum()
Data['C'] = Data.groupby(g)['B'].transform('mean') + Data.sum(axis=1)
#if need specify columns
#Data['C'] = Data.groupby(g)['B'].transform('mean') + Data['A'] + Data['B']
print (Data)
                      A   B   C
Date                           
2005-03-02 02:05:30   1   3   9
2005-03-02 02:06:00   2   4  11
2005-03-02 02:06:30   3   5  13
2005-03-02 02:07:00   4   6  15
2005-03-02 02:07:30   5   7  17
2005-03-02 02:08:30   7   9  27
2005-03-02 02:09:00   7   9  27
2005-03-02 02:09:30   7   9  27
2005-03-02 02:10:00   8  12  31
2005-03-02 02:10:30   9  13  33
2005-03-02 02:11:00  10  14  35
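
The grouping expression can be checked in isolation: any step larger than 30 s starts a new group. A sketch with a shortened version of the index:

```python
import pandas as pd

idx = pd.to_datetime(['2005-03-02 02:07:00', '2005-03-02 02:07:30',
                      '2005-03-02 02:08:30', '2005-03-02 02:09:00'])
g = idx.to_series().diff().gt(pd.Timedelta(30, unit='s')).cumsum()
# the 60 s jump between 02:07:30 and 02:08:30 starts group 1
```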
answered Mar 09 '23 by jezrael