I need to get the frequency of each element in a list when the lists are stored in a pandas DataFrame column.
In data:
din = pd.DataFrame({'x': [['a', 'b', 'c'], ['a', 'e', 'd', 'c']]})
x
0 [a, b, c]
1 [a, e, d, c]
Desired Output:
f x
0 2 a
1 1 b
2 2 c
3 1 d
4 1 e
I can expand the lists into rows and then perform a group-by, but this data could be large (a million-plus records), so I was wondering if there is a more efficient/direct way.
Thanks
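For reference, a minimal sketch of the explode-then-group-by baseline mentioned above (DataFrame.explode requires pandas 0.25+; the variable names just mirror the example data):

```python
import pandas as pd

din = pd.DataFrame({'x': [['a', 'b', 'c'], ['a', 'e', 'd', 'c']]})

# expand each list into one row per element, then count occurrences
out = (din.explode('x')
          .groupby('x')
          .size()
          .reset_index(name='f'))
print(out)
```

This produces one row per distinct element with its count, e.g. a appears twice.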
First flatten the values of the lists into a Series, and then count with value_counts, size, or Counter:
import numpy as np
import pandas as pd

a = pd.Series([item for sublist in din.x for item in sublist])
Or:
a = pd.Series(np.concatenate(din.x))
df = a.value_counts().sort_index().rename_axis('x').reset_index(name='f')
Or:
df = a.groupby(a).size().rename_axis('x').reset_index(name='f')
from collections import Counter
from itertools import chain
df = pd.Series(Counter(chain(*din.x))).sort_index().rename_axis('x').reset_index(name='f')
print(df)
x f
0 a 2
1 b 1
2 c 2
3 d 1
4 e 1
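Note that the desired output in the question lists the f column before x; a small sketch of reordering the result of the Counter approach above to match (column names as used throughout):

```python
import pandas as pd
from collections import Counter
from itertools import chain

din = pd.DataFrame({'x': [['a', 'b', 'c'], ['a', 'e', 'd', 'c']]})

df = (pd.Series(Counter(chain(*din.x)))
        .sort_index()
        .rename_axis('x')
        .reset_index(name='f'))

# reorder columns to match the f, x layout asked for in the question
df = df[['f', 'x']]
print(df)
```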
You can also use a one-liner like this:
# note: sum() on lists is quadratic in the total length, so this is slower on large data
df = pd.Series(sum(din.x, [])).value_counts()