I have some data that I need to group into bins. Instead of representing the bins as 0,1,2,3...etc. I would like it to output the mean or median of each bin. Is there a way to do this?
You could speed up shx2's code by computing the statistics for each bin_idx only once.
import numpy as np
x = np.tile(np.array([0.2, 9., 6.4, 3.0, 1.6]), 100000)
bins = np.array([0.0, 1.0, 2.5, 10.0])
def binstats(x, bins):
    inds = np.digitize(x, bins)
    statistics = []
    binnumber = []
    seen = set()
    for bin_idx in inds:
        if bin_idx not in seen:
            bin_arr = x[inds == bin_idx]
            statistics.append([np.mean(bin_arr), np.median(bin_arr)])
            binnumber.append(bin_idx)
            seen.add(bin_idx)
    return statistics, binnumber
statistics, binnumber = binstats(x, bins)
for (mean, median), bin_idx in zip(statistics, binnumber):
    print('{b}: {mean:.2f} {median:.2f}'.format(b=bin_idx, mean=mean, median=median))
yields
1: 0.20 0.20
3: 6.13 6.40
2: 1.60 1.60
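For clarity, here is a minimal sketch of what np.digitize returns for this data. The indices are 1-based relative to bins (with the default right=False, value v lands in bin i when bins[i-1] <= v < bins[i]), which is why the printed bin numbers start at 1:

```python
import numpy as np

x = np.array([0.2, 9.0, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 10.0])

# For each value, digitize returns the index of the bin it falls into:
# 0.2 -> [0, 1), 9.0/6.4/3.0 -> [2.5, 10), 1.6 -> [1, 2.5)
inds = np.digitize(x, bins)
print(inds)  # [1 3 3 3 2]
```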
By the way, if you have scipy, you could also use scipy.stats.binned_statistic, but the performance is not any better:
import scipy.stats as stats
# This is a hack to return two statistics with one call to binned_statistic. It reduces the precision of the statistics to `float32`.
def onecall():
    statistics, bin_edges, binnumber = stats.binned_statistic(
        x, values=x, bins=bins,
        statistic=lambda grp: (np.array([grp.mean(), np.median(grp)])
                               .astype('float32').view('float64')))
    return statistics.view('float32').reshape(-1, 2)
def twocalls():
    means, bin_edges, binnumber = stats.binned_statistic(
        x, values=x, statistic='mean', bins=bins)
    medians, bin_edges, binnumber = stats.binned_statistic(
        x, values=x, statistic='median', bins=bins)
    return means, medians
In [284]: %timeit binstats(x, bins)
10 loops, best of 3: 85.6 ms per loop
In [285]: %timeit onecall()
10 loops, best of 3: 86.6 ms per loop
In [286]: %timeit twocalls()
10 loops, best of 3: 150 ms per loop
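To make the hack in onecall clearer: binned_statistic expects the statistic callable to return a single scalar per bin, so the lambda packs two float32 values into the 8 bytes of one float64, which the caller then reinterprets back. A minimal sketch of that round trip:

```python
import numpy as np

# Two float32 values occupy the same 8 bytes as one float64, so a pair of
# statistics can be smuggled through an API that only accepts one scalar.
pair = np.array([1.5, 2.5], dtype='float32')
packed = pair.view('float64')      # shape (1,): one float64 carrying 8 bytes
unpacked = packed.view('float32')  # back to the original two float32 values
print(unpacked)  # [1.5 2.5]
```

This is why the precision drops to float32: both statistics must fit in the bytes of a single float64.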
For comparison purposes, here's how you'd write this sort of thing in pandas, using groupby and pd.cut (similar to np.digitize):
>>> import numpy as np
>>> import pandas as pd
>>> x = np.random.uniform(0, 10, 5*10**5)
>>> bins = np.array([0, 1, 2.5, 10])
>>> s = pd.Series(x)
>>> s.groupby(pd.cut(s, bins)).agg(["median", "mean"])
median mean
(0, 1] 0.500684 0.500641
(1, 2.5] 1.751121 1.751630
(2.5, 10] 6.243822 6.248801
[3 rows x 2 columns]
Performance seems comparable to that of unutbu's numpy solutions (after tweaking them a bit to accept args):
>>> %timeit binstats(x, bins)
10 loops, best of 3: 126 ms per loop
>>> %timeit onecall(x, bins)
10 loops, best of 3: 74.8 ms per loop
>>> %timeit twocalls(x, bins)
10 loops, best of 3: 109 ms per loop
>>> %timeit s.groupby(pd.cut(s, bins)).agg(["median", "mean"])
10 loops, best of 3: 72.5 ms per loop
and if you're willing to sacrifice a bit of elegance you can shave some more time:
>>> %timeit s.groupby(np.digitize(x, bins)).agg(["median", "mean"])
10 loops, best of 3: 65.2 ms per loop
but I don't use pandas for the performance, I use it because it makes many common data operations much more convenient.
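For reference, the two groupers above produce the same statistics and differ mainly in their labels: pd.cut tags each group with an Interval, while np.digitize yields plain integer bin indices. A small sketch on the toy data from earlier (variable names here are illustrative):

```python
import numpy as np
import pandas as pd

x = np.array([0.2, 9.0, 6.4, 3.0, 1.6])
bins = np.array([0, 1, 2.5, 10])
s = pd.Series(x)

# pd.cut labels groups with Interval objects like (0.0, 1.0];
# np.digitize labels them with the integers 1, 2, 3.
by_cut = s.groupby(pd.cut(s, bins)).agg(["median", "mean"])
by_dig = s.groupby(np.digitize(x, bins)).agg(["median", "mean"])
print(by_dig)
```

The computed medians and means match; only the index differs. Note the edge semantics are not identical (pd.cut is right-inclusive by default, np.digitize is left-inclusive), which matters only for values falling exactly on a bin edge.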