
Enumerate columns with same prefix

Assume we have the following simplified data:

import pandas as pd

df = pd.DataFrame({'A':list('abcd'),
                   'B':list('efgh'),
                   'Data_mean':[1,2,3,4],
                   'Data_std':[5,6,7,8],
                   'Data_corr':[9,10,11,12],
                   'Text_one':['foo', 'bar', 'foobar', 'barfoo'],
                   'Text_two':['bar', 'foo', 'barfoo', 'foobar'],
                   'Text_three':['bar', 'bar', 'barbar', 'foofoo']})

   A  B  Data_mean  Data_std  Data_corr Text_one Text_two Text_three
0  a  e          1         5          9      foo      bar        bar
1  b  f          2         6         10      bar      foo        bar
2  c  g          3         7         11   foobar   barfoo     barbar
3  d  h          4         8         12   barfoo   foobar     foofoo

I want to enumerate the columns with the same prefix. In this case the prefixes are Data and Text, so the expected output would be:

   A  B  Data_mean1  Data_std2  Data_corr3 Text_one1 Text_two2 Text_three3
0  a  e           1          5           9       foo       bar         bar
1  b  f           2          6          10       bar       foo         bar
2  c  g           3          7          11    foobar    barfoo      barbar
3  d  h           4          8          12    barfoo    foobar      foofoo

Note the enumerated columns.


Attempted solution #1:

def enumerate_cols(dataframe, prefix):
    cols = []
    num = 1
    for col in dataframe.columns:
        if col.startswith(prefix):
            cols.append(col + str(num))
            num += 1
        else:
            cols.append(col)

    return cols
enumerate_cols(df, 'Data')

['A',
 'B',
 'Data_mean1',
 'Data_std2',
 'Data_corr3',
 'Text_one',
 'Text_two',
 'Text_three']
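
To cover both prefixes, this would presumably have to be called once per prefix, reassigning the columns each time; a rough sketch, not part of the original attempt:

# run the helper once per prefix, writing the new names back each time
df.columns = enumerate_cols(df, 'Data')
df.columns = enumerate_cols(df, 'Text')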

Attempted solution #2:

[c+str(x+1) for x, c in enumerate([col for col in df.columns if col.startswith('Data')])]
['Data_mean1', 'Data_std2', 'Data_corr3']

Question: Is there an easier way to do this? I also looked at df.filter(like='Data') and similar, but that also seemed rather far-fetched.
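
(For context, df.filter(like='Data') only returns the matching sub-frame, without A, B or the Text columns, so any renamed result would still need to be written back to the original dataframe:)

df.filter(like='Data')
#    Data_mean  Data_std  Data_corr
# 0          1         5          9
# 1          2         6         10
# 2          3         7         11
# 3          4         8         12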


XY problem
Just to be sure I didn't fall into the XY problem: I want to use pd.wide_to_long, but the stubname columns need a numeric suffix before the dataframe can be melted.

As quoted from the docs:

With stubnames [‘A’, ‘B’], this function expects to find one or more group of columns with format A-suffix1, A-suffix2,…, B-suffix1, B-suffix2,

pd.wide_to_long(df, stubnames=['Data', 'Text'], i=['A', 'B'], j='grp', sep='_')

This returns an empty dataframe.
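
Side note on why it comes back empty: wide_to_long matches column suffixes against a regex that defaults to r'\d+', so none of the original column names are picked up at all. A broader pattern does match them, but the Data and Text suffix sets ('mean'/'std'/'corr' vs 'one'/'two'/'three') still don't line up as the same group, which is exactly why the enumeration asked for above is needed. A rough sketch:

# the default suffix regex is r'\d+'; with r'\w+' the stub columns are matched,
# but grp becomes 'mean'/'std'/'corr' for Data and 'one'/'two'/'three' for Text,
# so the two stubs still won't pair up without renaming them first
pd.wide_to_long(df, stubnames=['Data', 'Text'], i=['A', 'B'], j='grp',
                sep='_', suffix=r'\w+')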

asked Jul 01 '19 by Erfan


4 Answers

The idea is to group columns with the same prefix and take a cumulative count within each group.

Since columns without a prefix need to be handled separately, this is done in two steps, using GroupBy.cumcount and np.where:

import numpy as np

cols = df.columns.str.split('_').str[0].to_series()

df.columns = np.where(
    cols.groupby(level=0).transform('count') > 1,                           # prefix shared by more than one column?
    cols.groupby(level=0).cumcount().add(1).astype(str).radd(df.columns),   # e.g. Data_mean -> Data_mean1
    cols                                                                    # lone columns (A, B) keep their name
)

df
   A  B  Data_mean1  Data_std2  Data_corr3 Text_one1 Text_two2 Text_three3
0  a  e           1          5           9       foo       bar         bar
1  b  f           2          6          10       bar       foo         bar
2  c  g           3          7          11    foobar    barfoo      barbar
3  d  h           4          8          12    barfoo    foobar      foofoo

A simpler solution is to set the columns you don't want to suffix as the index. Then you can simply do:

df.set_index(['A', 'B'], inplace=True)
df.columns = (
    df.columns.str.split('_')
      .str[0]
      .to_series()
      .groupby(level=0)
      .cumcount()
      .add(1)
      .astype(str)
      .radd(df.columns))

df
     Data_mean1  Data_std2  Data_corr3 Text_one1 Text_two2 Text_three3
A B                                                                   
a e           1          5           9       foo       bar         bar
b f           2          6          10       bar       foo         bar
c g           3          7          11    foobar    barfoo      barbar
d h           4          8          12    barfoo    foobar      foofoo
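
If A and B are needed back as ordinary columns afterwards, a plain reset_index restores them:

df = df.reset_index()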
answered by cs95


You could also use a defaultdict to create a counter for each prefix.

from collections import defaultdict

prefix_starting_location = 2   # the first two columns (A, B) are left untouched
columns = df.columns[prefix_starting_location:]
prefixes = set(col.split('_')[0] for col in columns)

new_cols = []
dd = defaultdict(int)
for col in columns:
    prefix = col.split('_')[0]
    dd[prefix] += 1
    new_cols.append(col + str(dd[prefix]))
df.columns = df.columns[:prefix_starting_location].tolist() + new_cols
>>> df
   A  B  Data_mean1  Data_std2  Data_corr3 Text_one1 Text_two2 Text_three3
0  a  e           1          5           9       foo       bar         bar
1  b  f           2          6          10       bar       foo         bar
2  c  g           3          7          11    foobar    barfoo      barbar
3  d  h           4          8          12    barfoo    foobar      foofoo

If the prefixes are known:

prefixes = ['Data', 'Text']
new_cols = []
dd = defaultdict(int)
for col in df.columns:
    prefix = col.split('_')[0]
    if prefix in prefixes:
        dd[prefix] += 1
        new_cols.append(col + str(dd[prefix]))
    else:
        new_cols.append(col)

df.columns = new_cols

If the split character _ appears only in the columns you want to enumerate (so not in columns like A or B):

new_cols = []
dd = defaultdict(int)
for col in df.columns:
    if '_' in col:
        prefix = col.split('_')[0]
        dd[prefix] += 1
        new_cols.append(col + str(dd[prefix]))
    else:
        new_cols.append(col)

df.columns = new_cols
answered by Alexander


You can also use rename, such as:

l_word = ['Data','Text']
df = df.rename(columns={ col:col+str(i+1) 
                         for word in l_word 
                         for i, col in enumerate(df.filter(like=word))})
answered by Ben.T


Per our conversation, a melt-based method:

s = df.melt(['A', 'B']).assign(
    x=lambda d: d.groupby(d.variable.str.split('_').str[0]).cumcount(),
    y=lambda d: d.variable.str.split('_').str[0])

# after this, the problem becomes a pivot problem
pd.crosstab([s.A, s.B, s.x], columns=s.y, values=s.value, aggfunc='sum')
y      Data    Text
A B x              
a e 0     1     foo
    4     5     bar
    8     9     bar
b f 1     2     bar
    5     6     foo
    9    10     bar
c g 2     3  foobar
    6     7  barfoo
    10   11  barbar
d h 3     4  barfoo
    7     8  foobar
    11   12  foofoo
answered by BENY