I have a DataFrame generated from a call to pandas.io.json.json_normalize(). Here is an example:
dfIn = pd.DataFrame({'seed': [1324367672, 1324367672, 1324367673, 1324367673,
                              1324367674, 1324367674],
                     'lanePolicy': [True, False, True, False, True, False],
                     'stepsPerTrip': [40, 37, 93, 72, 23, 70],
                     'density': [51, 51, 208, 208, 149, 149]})
seed lanePolicy stepsPerTrip density
0 1324367672 True 40 51
1 1324367672 False 37 51
2 1324367673 True 93 208
3 1324367673 False 72 208
4 1324367674 True 23 149
5 1324367674 False 70 149
Note that the values in dfIn['seed'] come in matching pairs, one row with lanePolicy True and one with False. It is also the case that if dfIn['seed'] matches for two given rows, dfIn['density'] will match as well. I would like to compute a table similar to the following:
dfDesired = pd.DataFrame({'seed': [1324367672, 1324367673, 1324367674],
                          'stepsTrue': [40, 93, 23],
                          'stepsFalse': [37, 72, 70],
                          'stepsDiff': [3, 21, -47],
                          'density': [51, 208, 149]})
seed stepsTrue stepsFalse stepsDiff density
0 1324367672 40 37 3 51
1 1324367673 93 72 21 208
2 1324367674 23 70 -47 149
In particular, I'm looking for the values in dfDesired['stepsDiff'], which are the differences between dfIn['stepsPerTrip'] where dfIn['lanePolicy'] is True and where it is False, for each pair of matching dfIn['seed']. Note also that dfDesired should have half the number of rows as dfIn.
I am able to calculate the values of that single column with:
dfDiff = (dfIn.loc[dfIn['lanePolicy']]['stepsPerTrip'].reset_index(drop=True)
          - dfIn.loc[~dfIn['lanePolicy']]['stepsPerTrip'].reset_index(drop=True))
0 3
1 21
2 -47
Name: stepsPerTrip, dtype: int64
However, I would like to make a new DataFrame that preserves the other columns. I have also tried the following, but get an incorrect result:
dfDesired = dfIn.groupby('seed').apply(
    lambda x: x.loc[x['lanePolicy']]['stepsPerTrip']
              - x.loc[~x['lanePolicy']]['stepsPerTrip'])
seed
1324367672 0 NaN
1 NaN
1324367673 2 NaN
3 NaN
1324367674 4 NaN
5 NaN
Name: stepsPerTrip, dtype: float64
Thank you in advance.
Use DataFrame.pivot, subtract the resulting columns with Series.sub, and add the density column from a Series indexed by seed, built after removing the duplicated seed values with DataFrame.drop_duplicates:
df = dfIn.pivot(index='seed', columns='lanePolicy', values='stepsPerTrip').add_prefix('steps')
df['stepsDiff'] = df['stepsTrue'].sub(df['stepsFalse'])
df['density'] = dfIn.drop_duplicates('seed').set_index('seed')['density']
df = df.reset_index().rename_axis(None, axis=1)
print(df)
seed stepsFalse stepsTrue stepsDiff density
0 1324367672 37 40 3 51
1 1324367673 72 93 21 208
2 1324367674 70 23 -47 149
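If your pandas version's pivot signature differs, the same reshape can also be written with set_index/unstack, which avoids the pivot call entirely. A minimal sketch using the question's sample data; keeping density in the index means no separate drop_duplicates step is needed:

```python
import pandas as pd

dfIn = pd.DataFrame({'seed': [1324367672, 1324367672, 1324367673, 1324367673,
                              1324367674, 1324367674],
                     'lanePolicy': [True, False, True, False, True, False],
                     'stepsPerTrip': [40, 37, 93, 72, 23, 70],
                     'density': [51, 51, 208, 208, 149, 149]})

# Move lanePolicy into the index, then unstack it into columns.
# Because density is part of the index, it is carried along for free.
df = (dfIn.set_index(['seed', 'density', 'lanePolicy'])['stepsPerTrip']
          .unstack()
          .add_prefix('steps'))
df['stepsDiff'] = df['stepsTrue'] - df['stepsFalse']
df = df.reset_index().rename_axis(None, axis=1)
print(df)
```

The output matches dfDesired up to column order: seed and density come first because they were index levels.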
Another solution uses DataFrame.pivot_table with both seed and density in the index; its default aggregate function mean handles any duplicates in the seed, density and lanePolicy combinations:
df = (dfIn.pivot_table(index=['seed', 'density'], columns='lanePolicy',
                       values='stepsPerTrip')
          .add_prefix('steps'))
df['stepsDiff'] = df['stepsTrue'].sub(df['stepsFalse'])
df = df.reset_index().rename_axis(None, axis=1)
print(df)
seed density stepsFalse stepsTrue stepsDiff
0 1324367672 51 37 40 3
1 1324367673 208 72 93 21
2 1324367674 149 70 23 -47
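For completeness, the groupby approach from the question can be made to work as well. The original attempt produced NaN because the two single-row Series in each group keep different row indices, so the subtraction cannot align them. Reducing each group to a plain Series with labelled entries sidesteps that. A sketch, where summarize is a helper name of my choosing:

```python
import pandas as pd

dfIn = pd.DataFrame({'seed': [1324367672, 1324367672, 1324367673, 1324367673,
                              1324367674, 1324367674],
                     'lanePolicy': [True, False, True, False, True, False],
                     'stepsPerTrip': [40, 37, 93, 72, 23, 70],
                     'density': [51, 51, 208, 208, 149, 149]})

def summarize(g):
    # Index the group's stepsPerTrip values by lanePolicy so we can
    # look up the True and False rows by label instead of position.
    steps = g.set_index('lanePolicy')['stepsPerTrip']
    return pd.Series({'stepsTrue': steps.loc[True],
                      'stepsFalse': steps.loc[False],
                      'stepsDiff': steps.loc[True] - steps.loc[False],
                      'density': g['density'].iloc[0]})

# Select the non-grouping columns explicitly so the grouping key is
# not passed into summarize (avoids a deprecation warning in pandas 2.2+).
df = (dfIn.groupby('seed')[['lanePolicy', 'stepsPerTrip', 'density']]
          .apply(summarize)
          .reset_index())
print(df)
```

This is more verbose than the pivot solutions, but it generalizes easily if you later want other per-pair statistics in the same pass.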