If I have a pandas DataFrame in Python such as follows:
import numpy as np
import pandas as pd
a = np.random.uniform(0,10,20)
b = np.random.uniform(0,1,20)
data = np.vstack([a,b]).T
df = pd.DataFrame(data)
df.columns = ['A','B']
df.sort_values(by=['A'])
A B
5 0.057519 0.465408
14 1.610972 0.398077
3 1.725556 0.397708
17 1.734124 0.600723
11 1.944105 0.694152
19 3.265799 0.878538
13 3.352460 0.770505
10 3.865299 0.064723
16 4.137863 0.659662
12 5.597172 0.122269
7 5.990105 0.667533
6 6.410582 0.193027
9 6.881429 0.041691
15 7.522877 0.268144
1 8.093155 0.130559
0 8.699004 0.996624
8 8.755095 0.495984
4 9.135271 0.792966
18 9.440045 0.477514
2 9.654226 0.509812
Is it possible to efficiently calculate the mean of the column B values in intervals of column A? For example, one might want to calculate the mean of the values in column B which fall into the bin ranges [0,1,2,3,4,5,6,7,8,9,10] of column A. So for the bin range A = {0-1}, the mean of the B values falling within this bin would be 0.465408; for the bin range A = {1-2}, the mean would be 0.522665; and so on.
I've found pandas.core.window.Rolling.mean (see https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.window.Rolling.mean.html), but it appears to calculate the mean over a window of a specified number of rows rather than over bins defined by the values of another column.
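For reference, a rolling mean slides a fixed-size window over consecutive rows, so the grouping depends on row position rather than on the value ranges of A (the window length of 5 below is an arbitrary choice):

# Mean of B over a sliding window of 5 consecutive rows -- window membership
# is determined by row position, not by which interval of A a row falls in.
df['B'].rolling(window=5).mean()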
Using pd.cut to segment the A column into bins, then applying groupby on these segments and calculating the mean of B:
df.groupby(pd.cut(df['A'], bins=np.arange(11)))['B'].mean()
Output:
A
(0, 1] 0.465408
(1, 2] 0.522665
(2, 3] NaN
(3, 4] 0.571255
(4, 5] 0.659662
(5, 6] 0.394901
(6, 7] 0.117359
(7, 8] 0.268144
(8, 9] 0.541056
(9, 10] 0.593431
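Note that with the default right=True, pd.cut builds half-open intervals such as (0, 1], so a row with A exactly equal to 0 would fall outside every bin and be dropped from the grouping. If that matters, include_lowest=True closes the first interval on the left. A minimal sketch (df2 here is a toy frame of my own, not the question's data):

import numpy as np
import pandas as pd

df2 = pd.DataFrame({'A': [0.0, 0.5, 1.5], 'B': [0.2, 0.4, 0.6]})

# Default right=True: the first bin is (0, 1], so the row with A == 0 gets a
# NaN bin label and is excluded from the groupby result.
df2.groupby(pd.cut(df2['A'], bins=np.arange(11)))['B'].mean()

# include_lowest=True also includes the left edge of the first bin,
# so the row with A == 0 is counted in the first interval.
df2.groupby(pd.cut(df2['A'], bins=np.arange(11), include_lowest=True))['B'].mean()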
Update: you can use agg to apply a set of different aggregation functions at once, for example mean, std and size:
df.groupby(pd.cut(df['A'], bins=np.arange(11)))['B'].agg(['mean', 'std', 'size'])
Output:
mean std size
A
(0, 1] 0.465408 NaN 1
(1, 2] 0.522665 0.149038 4
(2, 3] NaN NaN 0
(3, 4] 0.571255 0.441983 3
(4, 5] 0.659662 NaN 1
(5, 6] 0.394901 0.385560 2
(6, 7] 0.117359 0.107011 2
(7, 8] 0.268144 NaN 1
(8, 9] 0.541056 0.434788 3
(9, 10] 0.593431 0.173556 3
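If you want custom names for the result columns, agg also accepts named aggregation (pandas >= 0.25), where each keyword becomes an output column name; the names below are just my own choices:

df.groupby(pd.cut(df['A'], bins=np.arange(11)))['B'].agg(
    b_mean='mean',   # mean of B within the bin
    b_std='std',     # sample standard deviation of B within the bin
    n='size',        # number of rows in the bin
)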
You could do something like this:
import numpy as np
import pandas as pd
a = np.random.uniform(0,10,20)
b = np.random.uniform(0,1,20)
data = np.vstack([a,b]).T
df = pd.DataFrame(data=data, columns=['A', 'B'])
bins = pd.cut(df['A'], bins=10)  # bins=10 splits the observed range of A into 10 equal-width bins
# Passing a dict to SeriesGroupBy.agg for renaming (e.g. .agg({'B': 'mean'}))
# was deprecated and then removed in pandas 1.0, so call .mean() directly.
df.groupby(bins)['B'].mean().reset_index()
You can also provide a list of bin edges to pd.cut, e.g. bins=[0,1,2,3,4,5,6,7,8,9,10]. Unlike bins=10, whose edges are derived from the minimum and maximum of the data, explicit edges are fixed and reproducible.
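Explicit edges also combine nicely with the labels argument of pd.cut, which names the bins in the grouped output; the label format below is an arbitrary choice:

edges = np.arange(11)  # bin edges 0, 1, ..., 10
labels = [f'{lo}-{hi}' for lo, hi in zip(edges[:-1], edges[1:])]  # '0-1', '1-2', ...

bins = pd.cut(df['A'], bins=edges, labels=labels)
df.groupby(bins)['B'].mean()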