Equivalent of R data.table rolling join in Python and PySpark

Does anyone know how to do an R data.table rolling join in PySpark?

Borrowing the example and nice explanation of rolling joins from Ben here:

sales<-data.table(saleID=c("S1","S2","S3","S4","S5"), 
              saleDate=as.Date(c("2014-2-20","2014-5-1","2014-6-15","2014-7-1","2014-12-31")))

commercials<-data.table(commercialID=c("C1","C2","C3","C4"), 
                    commercialDate=as.Date(c("2014-1-1","2014-4-1","2014-7-1","2014-9-15")))

setkey(sales,"saleDate")
setkey(commercials,"commercialDate")

sales[commercials, roll=TRUE]

The result being:

saleDate saleID commercialID
1: 2014-01-01     NA    C1
2: 2014-04-01     S1    C2
3: 2014-07-01     S4    C3
4: 2014-09-15     S4    C4

Many thanks for the help.

asked Dec 24 '22 by tjb305

2 Answers

Rolling join is not join + fillna

First of all, a rolling join is not the same as a join followed by a fillna! That would only be the case if every key of the table that is joined against (in data.table terms, the left table in a right join) had an equivalent in the main table. A data.table rolling join does not require this.
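A minimal pandas sketch of the difference, using a slice of the question's data: an exact-key left join finds no matching dates at all, so a subsequent fillna has nothing meaningful to fill in per row.

```python
import pandas as pd

sales = pd.DataFrame({'saleDate': pd.to_datetime(["2014-02-20", "2014-05-01"]),
                      'saleID': ["S1", "S2"]})
commercials = pd.DataFrame({'commercialDate': pd.to_datetime(["2014-01-01", "2014-04-01"]),
                            'commercialID': ["C1", "C2"]})

# Exact-key left join: no commercial date equals a sale date, so every
# saleID comes back NaN - there is no "nearest key" logic to fall back on.
exact = commercials.merge(sales, left_on='commercialDate',
                          right_on='saleDate', how='left')
```

A rolling join, by contrast, would attach S1 to C2, because 2014-02-20 is the most recent sale on or before 2014-04-01.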

There is no direct equivalent as far as I know, and I searched for quite a while. There is even an open issue for it: https://github.com/pandas-dev/pandas/issues/7546. However:

Solution in pandas:

There is a solution though in pandas. Let's assume your right data.table is table A and your left data.table is table B.

  1. Sort tables A and B each by the key.
  2. Add a column tag to A whose values are all 0 and a column tag to B whose values are all 1.
  3. Delete all columns except the key and tag from B (this can be omitted, but it is clearer this way) and call the table B'. Keep B as the original - we are going to need it later.
  4. Concatenate A with B' into C and ignore the fact that the rows from B' have many NAs.
  5. Sort C by the key.
  6. Make a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag)).
  7. Using filtering (query) on tag, get rid of all B'-rows.
  8. Add a running counter column groupNr to the original B (integers from 0 to N-1 or from 1 to N, depending on whether you want a forward or backward rolling join).
  9. Join B with C on groupNr.

Programming code

import pandas as pd
import numpy as np

#0. 'date' is the key for the rolling join. It does not have to be a date.
A = pd.DataFrame.from_dict(
    {'date': pd.to_datetime(["2014-3-1", "2014-5-1", "2014-6-1", "2014-7-1", "2014-12-1"]),
     'value': ["a1", "a2", "a3", "a4", "a5"]})
B = pd.DataFrame.from_dict(
    {'date': pd.to_datetime(["2014-1-15", "2014-3-15", "2014-6-15", "2014-8-15", "2014-11-15", "2014-12-15"]),
     'value': ["b1", "b2", "b3", "b4", "b5", "b6"]})

#1. Sort tables A and B each by the key.
A = A.sort_values('date')
B = B.sort_values('date')

#2. Add a column tag to A whose values are all 0 and a column tag to B whose values are all 1.
A['tag'] = 0
B['tag'] = 1

#3. Delete all columns except the key and tag from B (can be omitted, but it is clearer this way) and call the table B'. Keep B as the original - we are going to need it later.
B_ = B[['date','tag']] # Double brackets keep a DataFrame; single brackets would return a Series.

#4. Concatenate A with B' into C and ignore the fact that the rows from B' have many NAs.
C = pd.concat([A, B_])

#5. Sort C by key.
C = C.sort_values('date')

#6. Make a new cumsum column with C = C.assign(groupNr = np.cumsum(C.tag))
C = C.assign(groupNr = np.cumsum(C.tag))

#7. Using filtering (query) on tag get rid of all B'-rows.
C = C[C.tag == 0]

#8. Add a running counter column groupNr to the original B (integers from 0 to N-1 or from 1 to N, depending on whether you want a forward or backward rolling join). Pick one:
B['groupNr'] = range(1, len(B)+1)  # B's values are carried forward to A's rows
# B['groupNr'] = range(len(B))     # B's values are carried backward to A's rows

#9. Join B with C on groupNr to D.
D = C.set_index('groupNr').join(B.set_index('groupNr'), lsuffix='_A', rsuffix='_B')
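For what it's worth, the "carried forward" variant of step 8 (groupNr from 1 to N) can be cross-checked against pd.merge_asof, which pairs each row of A with the last row of B on or before it in a single call (a sketch redefining the same A and B so it is self-contained):

```python
import pandas as pd

A = pd.DataFrame({'date': pd.to_datetime(["2014-3-1", "2014-5-1", "2014-6-1", "2014-7-1", "2014-12-1"]),
                  'value': ["a1", "a2", "a3", "a4", "a5"]})
B = pd.DataFrame({'date': pd.to_datetime(["2014-1-15", "2014-3-15", "2014-6-15", "2014-8-15", "2014-11-15", "2014-12-15"]),
                  'value': ["b1", "b2", "b3", "b4", "b5", "b6"]})

# Each A row is paired with the last B row whose date is <= the A date.
# Both frames must already be sorted on the 'on' column.
D2 = pd.merge_asof(A, B, on='date', suffixes=('_A', '_B'))
# D2['value_B'] -> ['b1', 'b2', 'b2', 'b3', 'b5']
```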
answered Dec 26 '22 by Make42

I also had a similar problem, solved with pandas.merge_asof.

Here is a quick solution for the case in the question:

import pandas as pd

sales = pd.DataFrame.from_dict(
    {'saleDate': pd.to_datetime(["2014-02-20","2014-05-01","2014-06-15","2014-07-01","2014-12-31"]),
     'saleID': ["S1","S2","S3","S4","S5"]})
commercials = pd.DataFrame.from_dict(
    {'commercialDate': pd.to_datetime(["2014-01-01","2014-04-01","2014-07-01","2014-09-15"]),
     'commercialID': ["C1","C2","C3","C4"]})

result = pd.merge_asof(commercials,
          sales,
          left_on='commercialDate', 
          right_on='saleDate')

# Ordering for easier comparison
result = result[['commercialDate', 'saleID', 'commercialID']]

The result is the same as expected:

  commercialDate saleID commercialID
0     2014-01-01    NaN           C1
1     2014-04-01     S1           C2
2     2014-07-01     S4           C3
3     2014-09-15     S4           C4 
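The default direction='backward' is what mirrors data.table's roll=TRUE here (match the last sale on or before each commercial). If I recall data.table's semantics correctly, the opposite roll (roll=-Inf) corresponds to merge_asof's direction='forward'; a sketch on the same data:

```python
import pandas as pd

sales = pd.DataFrame({'saleDate': pd.to_datetime(["2014-02-20", "2014-05-01", "2014-06-15", "2014-07-01", "2014-12-31"]),
                      'saleID': ["S1", "S2", "S3", "S4", "S5"]})
commercials = pd.DataFrame({'commercialDate': pd.to_datetime(["2014-01-01", "2014-04-01", "2014-07-01", "2014-09-15"]),
                            'commercialID': ["C1", "C2", "C3", "C4"]})

# direction='forward' matches each commercial to the first sale on or after it
# (exact date matches are allowed by default via allow_exact_matches=True).
fwd = pd.merge_asof(commercials, sales,
                    left_on='commercialDate', right_on='saleDate',
                    direction='forward')
# fwd['saleID'] -> ['S1', 'S2', 'S4', 'S5']
```

merge_asof also accepts direction='nearest' and a tolerance argument to cap how far the join may roll.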
answered Dec 26 '22 by Samuel Lima