 

Cumulative look-back rolling join

Tags: r, data.table, xts

I have two data tables. I want to do a rolling join, but of a "cumulative" kind. For example, here are two tables:

d1 <-  data.table(starttime = c("2011-01-01 15:29:50", "2011-01-01 15:30:03", "2011-01-01 15:40:20", "2011-01-01 15:50:20" ,"2011-01-01 16:30:00", "2011-01-01 16:40:00"),
              endtime = c("2011-01-01 15:30:00", "2011-01-01 15:30:15", "2011-01-01 15:40:28", "2011-01-01 15:50:25", "2011-01-01 16:31:00", "2011-01-01 16:41:00"), v = c("A", "B", "B", "A", "B", "A"), m = c(2,3,5,8,9,9), dur = c(10,12,8,5,60,11))

starttime               endtime           v   m    dur
2011-01-01 15:29:50  2011-01-01 15:30:00  A   2    10
2011-01-01 15:30:03  2011-01-01 15:30:15  B   3    12
2011-01-01 15:40:20  2011-01-01 15:40:28  B   5     8
2011-01-01 15:50:20  2011-01-01 15:50:25  A   8     5
2011-01-01 16:30:00  2011-01-01 16:31:00  B   9    60
2011-01-01 16:40:00  2011-01-01 16:41:00  A   9    11

d2 <-  data.table(time = c("2011-01-01 16:39:50", "2011-01-01 16:00:03", "2011-01-01 16:50:50"),
                            v = c("A", "B", "A"),  mk = rnorm(3))

               time   v          mk
2011-01-01 16:00:03   B   -0.2385093
2011-01-01 16:39:50   A   -0.4966836
2011-01-01 16:50:50   A   -0.4566836

Now, for the first row of d2, looking back from that row's d2$time, I want the sum of m over the rows of d1 with the same v, going back until the cumulative duration (endtime - starttime) exceeds 15 seconds.

Is there also a way to count how many rows were used to accumulate more than 15 seconds?

So basically I should get something like this:

               time   v       mk       m     rowsUsed 
2011-01-01 16:00:03   B   -0.2385093   8            2
2011-01-01 16:39:50   A   -0.4966836   10           2
2011-01-01 16:50:50   A   -0.4566836   17           2
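
To make the look-back concrete, here is how I would get the first output row (v == "B") by hand; this is just a sketch for one row, not the general solution I'm after:

# d1 rows with v == "B" that end before 16:00:03, most recent first
# (the ISO "YYYY-MM-DD HH:MM:SS" strings sort correctly even as plain text)
b <- d1[v == "B" & endtime < "2011-01-01 16:00:03"][order(endtime, decreasing = TRUE)]
b[, cumdur := cumsum(dur)]   # cumulative look-back duration: 8, then 20
n <- min(which(b$cumdur > 15)[1], nrow(b), na.rm = TRUE)  # first row past 15 s, or the last eligible row
sum(b$m[1:n])   # m = 5 + 3 = 8
n               # rowsUsed = 2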

Can anyone help me construct such a rolling join? I have lots of rows, so speed is a concern. I'm willing to be flexible and use xts.

asked May 18 '17 by user2961712


2 Answers

Here's my version. You can modify this however you like. Let me know if you find it useful.

library("lubridate")
library("data.table")

d1 <-  data.table(starttime = parse_date_time(c("2011-01-01 15:29:50", "2011-01-01 15:30:03", "2011-01-01 15:40:20", "2011-01-01 15:50:20" ,"2011-01-01 16:30:00", "2011-01-01 16:40:00"), orders="ymd HMS"),
                  endtime = parse_date_time(c("2011-01-01 15:30:00", "2011-01-01 15:30:15", "2011-01-01 15:40:28", "2011-01-01 15:50:25", "2011-01-01 16:31:00", "2011-01-01 16:41:00"), orders="ymd HMS"), v = c("A", "B", "B", "A", "B", "A"), m = c(2,3,5,8,9,9), dur = c(10,12,8,5,60,11))

d2 <-  data.table(time = parse_date_time(c("2011-01-01 16:39:50", "2011-01-01 16:00:03", "2011-01-01 16:50:50"), orders="ymd HMS"),
                  v = c("A", "B", "A"),  mk = rnorm(3))

get_m_rows <- function(value, timeValue, threshold){
  # rows of d1 with the same v, most recent endtime first
  d3 <- d1[v == value]
  d3 <- d3[order(endtime, decreasing = TRUE)]
  # cumulative look-back duration over the rows that end before the query time
  d3[endtime < timeValue, totalTime := cumsum(dur)]
  eligibleRows <- d3[endtime < timeValue, .N]
  # indices of the rows still within the threshold (0 if none qualify)
  ifelse(d3[totalTime <= threshold & !is.na(totalTime), .N] > 0,
         rowIndex <- d3[, .I[totalTime <= threshold & !is.na(totalTime)]],
         rowIndex <- 0)
  # use at least one row, and add one more row to push the total past the threshold
  ifelse(rowIndex == 0, rowIndex <- 1,
         ifelse(length(rowIndex) < eligibleRows,
                rowIndex <- c(rowIndex, rowIndex[length(rowIndex)] + 1), 0))
  return(d3[rowIndex, .(m = sum(m), .N)])
}

d2[,c("m","rowUsed"):=(get_m_rows(v,time,15)),by=.(v,time)]

# time v         mk  m rowUsed
# 1: 2011-01-01 16:39:50 A -0.2025446 10       2
# 2: 2011-01-01 16:00:03 B  1.2363660  8       2
# 3: 2011-01-01 16:50:50 A  1.0222815 17       2
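
A note on the output: because mk comes from rnorm() without a seed, its values differ from the question; m and rowUsed are the columns that matter. Grouping with by = .(v, time) calls get_m_rows() once per unique (v, time) pair, i.e. once per row of d2 here.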
answered Oct 20 '22 by A Gore


Try this. I explain it in the comments; tell me if it's unclear or too slow.

library(data.table)
library(pbapply)

d1 <-  data.table(starttime = c("2011-01-01 15:29:50", "2011-01-01 15:30:03", "2011-01-01 15:40:20", "2011-01-01 15:50:20" ,"2011-01-01 16:30:00", "2011-01-01 16:40:00"),
                  endtime = c("2011-01-01 15:30:00", "2011-01-01 15:30:15", "2011-01-01 15:40:28", "2011-01-01 15:50:25", "2011-01-01 16:31:00", "2011-01-01 16:41:00"), v = c("A", "B", "B", "A", "B", "A"), m = c(2,3,5,8,9,9), dur = c(10,12,8,5,60,11))

d2 <-  data.table(time = c("2011-01-01 16:39:50", "2011-01-01 16:00:03", "2011-01-01 16:50:50"),
                  v = c("A", "B", "A"),  mk = rnorm(3))

d1$endtime <- as.POSIXct(d1$endtime)
d2$time <- as.POSIXct(d2$time)
d1 <- d1[order(d1$endtime,decreasing=TRUE),] # most recent endtime on top

output_list <- pbapply(d2,1,function(row){
  sub_d1 <- subset(d1,endtime <= row["time"] & v == row["v"]) # keep only rows that end before the query time and have the matching v
  sub_d1$cumdur <- cumsum(sub_d1$dur) # cumulative duration, to test against the 15 second limit
  rowsUsed <- nrow(sub_d1) - nrow(subset(sub_d1,cumdur >= 15)) + 1 # number of rows needed to pass the limit
  m <- sum(sub_d1$m[1:rowsUsed]) # sum the relevant m
  return(list(m,rowsUsed)) # return both as a list
  })

d2 <- cbind(d2, matrix(unlist(output_list),ncol=2,byrow=TRUE,dimnames = list(NULL,c("m","rowsUsed"))))

# time v          mk  m rowsUsed
# 1: 2011-01-01 16:39:50 A -0.01884752 10        2
# 2: 2011-01-01 16:00:03 B  0.08545874  8        2
# 3: 2011-01-01 16:50:50 A  1.62738391 17        2
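
pbapply() behaves like base apply() but shows a progress bar, which is convenient given the speed concern over many rows; if you prefer to avoid the extra dependency, plain apply() should give the same result (I have not timed the difference here).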
answered Oct 20 '22 by Moody_Mudskipper