Appending columns during groupby-apply operations

Tags: python, pandas

Context

I have several groups of data (defined by 3 columns within the dataframe) and would like to perform a linear fit on each group and then append the estimated values (along with the lower and upper bounds of the fit) as new columns.
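
For reference, the real model is along the lines of the sketch below. This is illustrative only: it assumes statsmodels is available and that fit/ci_lower/ci_upper come from an OLS prediction's summary frame; the actual model is not what matters here.

import statsmodels.formula.api as smf

def real_model(group, formula):
    # hypothetical version of the model function: ordinary least squares per
    # group, with the predicted mean and its confidence interval appended
    result = smf.ols(formula, data=group).fit()
    pred = result.get_prediction(group).summary_frame(alpha=0.05)
    return group.assign(
        fit=pred['mean'].to_numpy(),
        ci_lower=pred['mean_ci_lower'].to_numpy(),
        ci_upper=pred['mean_ci_upper'].to_numpy(),
    )

The example below swaps this out for a fake_model that just appends random numbers, since the error does not depend on the model itself.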

Problem

After performing the operation, I get an error related to the shapes of the final vs. original dataframes.

Example that demonstrates the problem:

from io import StringIO       # modern python
#from StringIO import StringIO # old python
import numpy
import pandas

def fake_model(group, formula):
    # stand-in for the real model: the formula argument is accepted but unused,
    # and random numbers are appended in place of actual fit results
    modeled = group.assign(
        fit=numpy.random.normal(size=group.shape[0]),
        ci_lower=numpy.random.normal(size=group.shape[0]),
        ci_upper=numpy.random.normal(size=group.shape[0])
    )

    return modeled

raw_csv = StringIO("""\
location,days,era,chemical,conc
MW-A,2415,modern,"Chem1",5.4
MW-A,7536,modern,"Chem1",0.21
MW-A,7741,modern,"Chem1",0.15
MW-A,2415,modern,"Chem2",33.0
MW-A,2446,modern,"Chem2",0.26
MW-A,3402,modern,"Chem2",0.18
MW-A,3626,modern,"Chem2",0.26
MW-A,7536,modern,"Chem2",0.32
MW-A,7741,modern,"Chem2",0.24
""")

data = pandas.read_csv(raw_csv)

modeled = (
    data.groupby(by=['location', 'era', 'chemical'])
        .apply(fake_model, formula='conc ~ days')
        .reset_index(drop=True)
)

That raises a very long traceback, the crux of which is:

[snip]   
C:\Miniconda3\envs\puente\lib\site-packages\pandas\core\internals.py in construction_error(tot_items, block_shape, axes, e)
   3880         raise e
   3881     raise ValueError("Shape of passed values is {0}, indices imply {1}".format(
-> 3882         passed,implied))
   3883 
   3884 

ValueError: Shape of passed values is (8, 9), indices imply (8, 6)

I understand that I added three columns, hence a shape of (8, 9) vs (8, 6).

What I don't understand is that if I inspect the dataframe subgroup in the slightest way, the above error is not raised:

def fake_model2(group, formula):
    # merely inspecting the group (here, touching its .name attribute) before
    # returning is enough to avoid the error
    _ = group.name
    return fake_model(group, formula)

modeled = (
    data.groupby(by=['location', 'era', 'chemical'])
        .apply(fake_model2, formula='conc ~ days')
        .reset_index(drop=True)
)

print(modeled)

Which produces:

  location  days     era chemical   conc  ci_lower  ci_upper       fit
0     MW-A  2415  modern    Chem1   5.40 -0.466833 -0.599039 -1.143867
1     MW-A  7536  modern    Chem1   0.21 -1.790619 -0.532233 -1.356336
2     MW-A  7741  modern    Chem1   0.15  1.892256 -0.405768 -0.718673
3     MW-A  2415  modern    Chem2  33.00  0.428811  0.259244 -1.259238
4     MW-A  2446  modern    Chem2   0.26 -1.616517 -0.955750 -0.727216
5     MW-A  3402  modern    Chem2   0.18 -0.300749  0.341106  0.602332
6     MW-A  3626  modern    Chem2   0.26 -0.232240  1.845240  1.340124
7     MW-A  7536  modern    Chem2   0.32 -0.416087 -0.521973 -1.477748
8     MW-A  7741  modern    Chem2   0.24  0.958202  0.634742  0.542667

Question

My work-around feels far too hacky to use in any real-world application. Is there a better way to apply my model and append the best-fit estimates to each group within the larger dataframe?

asked Sep 24 '22 by Paul H


1 Answer

Yay, a non-hacky workaround exists

In [18]: gr = data.groupby(['location', 'era', 'chemical'], group_keys=False)

In [19]: gr.apply(fake_model, formula='')
Out[19]:
  location  days     era chemical   conc  ci_lower  ci_upper       fit
0     MW-A  2415  modern    Chem1   5.40 -0.105610 -0.056310  1.344210
1     MW-A  7536  modern    Chem1   0.21  0.574092  1.305544  0.411960
2     MW-A  7741  modern    Chem1   0.15 -0.073439  0.140920 -0.679837
3     MW-A  2415  modern    Chem2  33.00  1.959547  0.382794  0.544158
4     MW-A  2446  modern    Chem2   0.26  0.484376  0.400111 -0.450741
5     MW-A  3402  modern    Chem2   0.18 -0.422490  0.323525  0.520716
6     MW-A  3626  modern    Chem2   0.26 -0.093855 -1.487398  0.222687
7     MW-A  7536  modern    Chem2   0.32  0.124983 -0.484532 -1.162127
8     MW-A  7741  modern    Chem2   0.24 -1.622693  0.949825 -1.049279

That actually saves you a .reset_index too :)

group_keys was the culprit behind the error. The (maybe) bug in pandas comes from the regular concat of each group's result. With group_keys=True, the keys are

[('MW-A', 'modern', 'Chem1'), ('MW-A', 'modern', 'Chem2')]

which pandas wasn't expecting. This smells like a bug in pandas, but I haven't dug in more to confirm.
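
To see roughly what that concat looks like, you can do it by hand. This is only an illustration of the key-prefixed result (reusing data and fake_model from the question), not pandas' exact internal code path:

pieces = {
    key: fake_model(grp, 'conc ~ days')
    for key, grp in data.groupby(['location', 'era', 'chemical'])
}

# with the group-key tuples, each piece gets extra index levels prepended
print(pandas.concat(pieces))

# a plain concat of the pieces -- effectively what group_keys=False gives you
print(pandas.concat(list(pieces.values())))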

answered Sep 28 '22 by TomAugspurger