 

pandas GroupBy columns with NaN (missing) values



pandas >= 1.1

From pandas 1.1 you have better control over this behavior: NA values are now allowed in the grouper using dropna=False:

pd.__version__
# '1.1.0.dev0+2004.g8d10bfb6f'

# Example from the docs
df

   a    b  c
0  1  2.0  3
1  1  NaN  4
2  2  1.0  3
3  1  2.0  2

# without NA (the default)
df.groupby('b').sum()

     a  c
b        
1.0  2  3
2.0  2  5
# with NA
df.groupby('b', dropna=False).sum()

     a  c
b        
1.0  2  3
2.0  2  5
NaN  1  4
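
For completeness, a minimal self-contained sketch reconstructing the frame used above (imports are assumed throughout this page):

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 1, 2, 1],
                   'b': [2.0, np.nan, 1.0, 2.0],
                   'c': [3, 4, 3, 2]})

# keep NaN as its own group
df.groupby('b', dropna=False).sum()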

This is mentioned in the Missing Data section of the docs:

NA groups in GroupBy are automatically excluded. This behavior is consistent with R

One workaround is to use a placeholder before doing the groupby (e.g. -1):

In [11]: df.fillna(-1)
Out[11]: 
   a   b
0  1   4
1  2  -1
2  3   6

In [12]: df.fillna(-1).groupby('b').sum()
Out[12]: 
    a
b    
-1  2
4   1
6   3
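
If you need NaN back in the result, a small follow-up sketch (assuming -1 never occurs as a real value in b) is to rename the placeholder in the group index afterwards:

result = df.fillna(-1).groupby('b').sum()
# map the placeholder back to NaN in the index
result = result.rename(index={-1: np.nan})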

That said, this feels like a pretty awful hack... perhaps there should be an option to include NaN in groupby (see this GitHub issue, which uses the same placeholder hack).

However, as described in another answer, "from pandas 1.1 you have better control over this behavior, NA values are now allowed in the grouper using dropna=False"


Ancient topic, but if someone still stumbles over this: another workaround is to convert to string via .astype(str) before grouping. That will preserve the NaNs.

df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.nan, '6']})
df['b'] = df['b'].astype(str)
df.groupby(['b']).sum()
    a
b   
4   1
6   3
nan 2

I am not able to add a comment to M. Kiewisch's answer since I do not have enough reputation points (I only have 41, but more than 50 are needed to comment).

Anyway, just want to point out that M. Kiewisch's solution does not work as-is and may need more tweaking. Consider for example:

>>> df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.nan, 6, 4]})
>>> df
   a    b
0  1  4.0
1  2  NaN
2  3  6.0
3  5  4.0
>>> df.groupby(['b']).sum()
     a
b
4.0  6
6.0  3
>>> df.astype(str).groupby(['b']).sum()
      a
b
4.0  15
6.0   3
nan   2

which shows that for group b=4.0, the corresponding value is 15 instead of 6. Here it is concatenating 1 and 5 as strings instead of adding them as numbers.
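
If you do want the string trick, a variant that avoids this pitfall is to stringify only the grouping key so the value columns stay numeric; a sketch, which should give the correct sums:

>>> df.groupby(df['b'].astype(str))['a'].sum()
b
4.0    6
6.0    3
nan    2
Name: a, dtype: int64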


All answers provided thus far result in potentially dangerous behavior, as it is quite possible you select a dummy value that is actually part of the dataset. This is increasingly likely as you create groups with many attributes. Simply put, the approach doesn't always generalize well.

A less hacky solution is to use DataFrame.drop_duplicates() to create a unique index of value combinations, each with its own ID, and then group on that ID. It is more verbose but does get the job done:

import numpy as np
import pandas as pd


def safe_groupby(df, group_cols, agg_dict):
    # set name of group col to unique value
    group_id = 'group_id'
    while group_id in df.columns:
        group_id += 'x'
    # get final order of columns
    agg_col_order = (group_cols + list(agg_dict.keys()))
    # create unique index of grouped values
    group_idx = df[group_cols].drop_duplicates()
    group_idx[group_id] = np.arange(group_idx.shape[0])
    # merge unique index on dataframe
    df = df.merge(group_idx, on=group_cols)
    # group dataframe on group id and aggregate values
    df_agg = df.groupby(group_id, as_index=True)\
               .agg(agg_dict)
    # merge grouped value index to results of aggregation
    df_agg = group_idx.set_index(group_id).join(df_agg)
    # rename index
    df_agg.index.name = None
    # return reordered columns
    return df_agg[agg_col_order]

Note that you can now simply do the following:

from collections import OrderedDict

data_block = [np.tile([None, 'A'], 3),
              np.repeat(['B', 'C'], 3),
              [1] * (2 * 3)]

col_names = ['col_a', 'col_b', 'value']

test_df = pd.DataFrame(data_block, index=col_names).T

grouped_df = safe_groupby(test_df, ['col_a', 'col_b'],
                          OrderedDict([('value', 'sum')]))

This will return the successful result without having to worry about overwriting real data that is mistaken as a dummy value.
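
For reference, on pandas >= 1.1 the helper above is no longer strictly necessary: the built-in dropna=False shown at the top of this page handles NA group keys directly. A sketch on the same test_df (the value column arrives as object dtype after the transpose, so it is cast first):

test_df['value'] = test_df['value'].astype(int)
test_df.groupby(['col_a', 'col_b'], dropna=False, as_index=False)['value'].sum()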


One small point on Andy Hayden's solution – it doesn't work (anymore?) because np.nan == np.nan yields False, so the replace function doesn't actually do anything.

What worked for me was this:

df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)

(At least that's the behavior for Pandas 0.19.2. Sorry to add it as a different answer, I do not have enough reputation to comment.)
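
A note on that lambda: np.isnan raises a TypeError on object columns (e.g. ones containing strings or None), while pd.isna handles those cases too. A sketch of the same idea:

df['b'] = df['b'].apply(lambda x: x if not pd.isna(x) else -1)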