
Vectorized Lookups of Pandas Series to a Dictionary

Problem Statement:

A new pandas dataframe column, same_group, needs to be created as a boolean flag based on the values of two existing columns, row and col. A row should be True when the values looked up for row and col in a dictionary memberships share at least one group (their membership lists intersect), and False otherwise (no intersecting values). How do I do this in a vectorized way (not using apply)?

Setup:

import pandas as pd
import numpy as np 
n = np.nan
memberships = {
    'a':['vowel'],
    'b':['consonant'],
    'c':['consonant'],
    'd':['consonant'],
    'e':['vowel'],
    'y':['consonant', 'vowel']
}

congruent = pd.DataFrame.from_dict(  
         {'row': ['a','b','c','d','e','y'],
            'a': [ n, -.8,-.6,-.3, .8, .01],
            'b': [-.8,  n, .5, .7,-.9, .01],
            'c': [-.6, .5,  n, .3, .1, .01],
            'd': [-.3, .7, .3,  n, .2, .01],
            'e': [ .8,-.9, .1, .2,  n, .01],
            'y': [ .01, .01, .01, .01,  .01, n],
       }).set_index('row')
congruent.columns.names = ['col']

[image: snippet of the congruent dataframe]

cs = congruent.stack().to_frame()
cs.columns = ['score']
cs.reset_index(inplace=True)
cs.head(6)

  row col  score
0   a   b  -0.80
1   a   c  -0.60
2   a   d  -0.30
3   a   e   0.80
4   a   y   0.01
5   b   a  -0.80

The Desired Goal:

[image: the desired output, cs with a new same_group column added]

How do I create this new column from a lookup into the dictionary?

Note that I'm trying to find intersection, not equivalence. For example, row 4 should have a same_group of 1, since a and y are both vowels (even though y is "sometimes a vowel" and thus belongs to both the consonant and vowel groups).
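
For illustration (not from the original question), the test for a single pair reduces to a set intersection on the memberships entries:

set(memberships['a']) & set(memberships['y'])   # {'vowel'}: non-empty, so same_group = 1
set(memberships['a']) & set(memberships['b'])   # set(): empty, so same_group = 0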

asked Jan 12 '17 by Thomas Matthew

2 Answers

# create a series to make it convenient to map
# make each member a set so I can intersect later
lkp = pd.Series(memberships).apply(set)

# get number of rows and columns
# map the sets to column and row indices
n, m = congruent.shape
c = congruent.columns.to_series().map(lkp).values
r = congruent.index.to_series().map(lkp).values

print(c)
[{'vowel'} {'consonant'} {'consonant'} {'consonant'} {'vowel'}
 {'consonant', 'vowel'}]

print(r)
[{'vowel'} {'consonant'} {'consonant'} {'consonant'} {'vowel'}
 {'consonant', 'vowel'}]

# use np.repeat, np.tile, zip to create cartesian product
# this should match index after stacking
# apply set intersection for each pair
# empty sets are False, otherwise True
same = [
    bool(set.intersection(*tup))
    for tup in zip(np.repeat(r, m), np.tile(c, n))
]
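
# (illustrative check, not part of the original answer) the repeat/tile
# order matches the stacked (row, col) order:
pairs = list(zip(np.repeat(congruent.index, m), np.tile(congruent.columns, n)))
print(pairs[:3])  # [('a', 'a'), ('a', 'b'), ('a', 'c')]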

# use dropna=False to ensure we maintain the
# cartesian product I was expecting
# then slice with boolean list I created
# and dropna
congruent.stack(dropna=False)[same].dropna()

row  col
a    e      0.80
     y      0.01
b    c      0.50
     d      0.70
     y      0.01
c    b      0.50
     d      0.30
     y      0.01
d    b      0.70
     c      0.30
     y      0.01
e    a      0.80
     y      0.01
y    a      0.01
     b      0.01
     c      0.01
     d      0.01
     e      0.01
dtype: float64

Produce the desired result:

congruent.stack(dropna=False).reset_index(name='Score') \
    .assign(same_group=np.array(same).astype(int)).dropna()

[image: the stacked dataframe with row, col, Score and same_group columns]
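
As an aside (this is not part of the answer above), the pairwise intersection test can also be precomputed as a label-by-label lookup table, which avoids the Python-level loop entirely. A minimal sketch, assuming the memberships dict and the cs frame from the question:

# 0/1 indicator matrix: one row per group, one column per label
ind = pd.DataFrame({k: pd.Series(1, index=v) for k, v in memberships.items()}).fillna(0)

# two labels share at least one group iff their columns have a positive dot product
share = (ind.T @ ind) > 0   # labels x labels boolean lookup table

# vectorized lookup of every (row, col) pair in cs
cs['same_group'] = share.values[
    share.index.get_indexer(cs['row']),
    share.columns.get_indexer(cs['col'])
].astype(int)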

answered by piRSquared


Idea: let's encode the membership lists of 'vowel' and 'consonant' as bit flags (vowel = 1, consonant = 2) and use bitwise operations:

Setup:

In [138]: lkp2 = pd.Series(memberships) \
                   .apply(pd.Series) \
                   .replace({'vowel':1, 'consonant':2}) \
                   .sum(1) \
                   .astype('uint8')

In [139]: lkp2
Out[139]:
a    1  # 'vowel'
b    2  # 'consonant'
c    2  # 'consonant'
d    2  # 'consonant'
e    1  # 'vowel'
y    3  # 1 | 2 = 3 - both bits are set
dtype: uint8
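
For illustration (not from the original answer), two labels end up in the same group exactly when their bit flags overlap:

int(np.bitwise_and(lkp2['a'], lkp2['y']))   # 1 & 3 -> 1 (shared 'vowel' bit)
int(np.bitwise_and(lkp2['a'], lkp2['b']))   # 1 & 2 -> 0 (no shared group)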

Solution:

In [140]: cs['same_group'] = np.bitwise_and(cs.row.map(lkp2), cs.col.map(lkp2)).ne(0).mul(1)

In [141]: cs
Out[141]:
   row col  score  same_group
0    a   b  -0.80           0
1    a   c  -0.60           0
2    a   d  -0.30           0
3    a   e   0.80           1
4    a   y   0.01           1
5    b   a  -0.80           0
6    b   c   0.50           1
7    b   d   0.70           1
8    b   e  -0.90           0
9    b   y   0.01           1
10   c   a  -0.60           0
11   c   b   0.50           1
12   c   d   0.30           1
13   c   e   0.10           0
14   c   y   0.01           1
15   d   a  -0.30           0
16   d   b   0.70           1
17   d   c   0.30           1
18   d   e   0.20           0
19   d   y   0.01           1
20   e   a   0.80           1
21   e   b  -0.90           0
22   e   c   0.10           0
23   e   d   0.20           0
24   e   y   0.01           1
25   y   a   0.01           1
26   y   b   0.01           1
27   y   c   0.01           1
28   y   d   0.01           1
29   y   e   0.01           1
30   y   y   0.00           1

Timing against a 3.1M-row DataFrame:

In [180]: cs = pd.concat([cs] * 10**5, ignore_index=True)

In [181]: cs.shape
Out[181]: (3100000, 3)

In [182]: %timeit np.bitwise_and(cs.row.map(lkp2), cs.col.map(lkp2)).ne(0).mul(1)
1 loop, best of 3: 466 ms per loop
answered by MaxU - stop WAR against UA