I have a data frame with items, and for every item a start date and an end date are available. I would like to know how many items are active on every day in a given time period.
Example data set:
library(lubridate)  # for ymd()

ItemId <- c(1, 2, 3)
StartDate <- c(ymd("2014-01-01"), ymd("2014-02-01"), ymd("2014-03-01"))
EndDate <- c(ymd("2014-02-15"), ymd("2014-02-07"), ymd("2014-03-03"))
data.frame(ItemId, StartDate, EndDate)
  ItemId           StartDate             EndDate
1      1 2014-01-01 01:00:00 2014-02-15 01:00:00
2      2 2014-02-01 01:00:00 2014-02-07 01:00:00
3      3 2014-03-01 01:00:00 2014-03-03 01:00:00
The result should look something like this (one entry for every day):
Date ActiveCount
2014-01-01 1
2014-01-02 1
...
2014-02-01 2
...
I have a solution using sqldf, but I'm not sure how to do this in plain R:
select d.date
     , ( select count(ItemId)
           from items
          where StartDate <= d.date
            and EndDate >= d.date
       ) as ActiveCount
  from ( select distinct StartDate as date from items
         union
         select distinct EndDate from items
       ) d
 order by 1
(My data contains entries for every day, so for SQLite in R this works. On PostgreSQL I can generate a series of dates instead, which is better.)
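For reference, here is roughly how I run it from R (a sketch with the sqldf package; items is the data frame built above; note that SQLite stores Date columns as day numbers, so the returned date column may need as.Date()):

library(sqldf)
items <- data.frame(ItemId, StartDate, EndDate)
# sqldf runs the query against the data frame via an in-memory SQLite database
active <- sqldf("
  select d.date
       , ( select count(ItemId) from items
            where StartDate <= d.date and EndDate >= d.date
         ) as ActiveCount
    from ( select distinct StartDate as date from items
           union
           select distinct EndDate from items
         ) d
   order by 1")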
Thanks.
Calling your data df:
# one row per day in the overall range; for each day, count the items
# whose [StartDate, EndDate] interval covers it
dates = seq(min(df$StartDate), max(df$EndDate), by = "day")
counts = data.frame(
  date = dates,
  count = sapply(dates, function(x) sum(x <= df$EndDate & x >= df$StartDate))
)
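With the example data above, only item 1 is active in early January, so the first rows come out as expected:

head(counts)
#         date count
# 1 2014-01-01     1
# 2 2014-01-02     1
# 3 2014-01-03     1
# 4 2014-01-04     1
# 5 2014-01-05     1
# 6 2014-01-06     1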
Whenever an R task resembles a SQL task, it's probably time to bring dplyr out of the cupboard:
library(dplyr)
library(lubridate)  # for ymd()

ItemId <- c(1, 2, 3)
StartDate <- c(ymd("2014-01-01"), ymd("2014-02-01"), ymd("2014-03-01"))
EndDate <- c(ymd("2014-02-15"), ymd("2014-02-07"), ymd("2014-03-03"))
jim <- data.frame(ItemId, StartDate, EndDate)

# One technique that's often useful, especially in R, is to take your
# iterator and define it as a variable. You can use that to implement
# a vectorised version of whatever you were thinking of doing.
ed <- data.frame(rng = seq(min(jim$StartDate), max(jim$EndDate), by = 'day'))

merge(jim, ed, all = TRUE) %>%
  filter(rng >= StartDate, rng <= EndDate) %>%
  group_by(rng) %>%
  summarise(n())
And that gives you:
         rng n()
1 2014-01-01   1
2 2014-01-02   1
3 2014-01-03   1
...
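A note on why this works: jim and ed share no column names, so merge(..., all = TRUE) is a cross join that pairs every item with every date before the filter. If you'd rather have output columns matching the question's, a small variant (same logic, just renaming inside group_by/summarise):

merge(jim, ed, all = TRUE) %>%
  filter(rng >= StartDate, rng <= EndDate) %>%
  group_by(Date = rng) %>%
  summarise(ActiveCount = n())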
I've come back to this problem a number of times and am constantly looking for the most efficient approach. I've previously used a map-reduce approach but noticed it doesn't scale well to large data frames with wide date intervals. I just tried using the interval class from the lubridate package and found it to be the fastest implementation so far.
Here's the final code:
library(tidyverse)
library(lubridate)

# Initialize a dataframe with start and end "active" dates per object
N = 1000
id_dates = tibble(id = 1:N) %>%
  mutate(
    start = sample(seq(as.Date('2018-1-1'), as.Date('2019-1-1'), by = "day"),
                   size = N, replace = TRUE),
    end = start + sample(7:100, size = N, replace = TRUE),
    interval = interval(start, end))

# Use the %within% operator to calculate the number of active items per date
queue_history = tibble(Date = seq(min(id_dates$start), max(id_dates$end), by = "1 day")) %>%
  rowwise() %>%
  mutate(numInWIP = sum(Date %within% id_dates$interval)) %>%
  ungroup()
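To see what %within% is doing here: it tests whether an instant falls inside an interval (endpoints inclusive), and it is vectorised over a vector of intervals, which is why sum() gives the count of items active on a given date. A quick illustration:

ivals <- interval(ymd(c("2018-01-01", "2018-01-10")),
                  ymd(c("2018-01-05", "2018-01-20")))
ymd("2018-01-03") %within% ivals
# [1]  TRUE FALSE
sum(ymd("2018-01-03") %within% ivals)
# [1] 1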
Here are some benchmarks showing that the lubridate solution is far faster than both the current answer and the map-reduce approach:
library(tidyverse)
library(lubridate)

# Initialize a dataframe with start and end "active" dates per object
N = 1000
id_dates = tibble(id = 1:N) %>%
  mutate(
    start = sample(seq(as.Date('2018-1-1'), as.Date('2019-1-1'), by = "day"),
                   size = N, replace = TRUE),
    end = start + sample(7:100, size = N, replace = TRUE),
    interval = interval(start, end))

# a map-reduce solution: expand each item into its full sequence of active
# days, concatenate, and tabulate how often each day occurs
method_mapreduce = function() {
  all_days = reduce(map2(id_dates$start, id_dates$end, seq, by = 1), c)
  queue_history = as.data.frame(table(all_days)) %>%
    rename(Date = all_days, numInWIP = Freq) %>%
    # table() turns the dates into factor labels; convert back to Date
    mutate(Date = as_date(as.character(Date)))
  return(queue_history)
}

# a lubridate interval solution
method_intervals = function() {
  date_df = tibble(Date = seq(min(id_dates$start), max(id_dates$end), by = "1 day"))
  queue_history = date_df %>%
    rowwise() %>%
    mutate(numInWIP = sum(Date %within% id_dates$interval))
  return(queue_history)
}

# current best answer (the dplyr cross-join approach above)
method_currentsolution = function() {
  date_df = tibble(Date = seq(min(id_dates$start), max(id_dates$end), by = "1 day"))
  queue_history = merge(id_dates, date_df, all = TRUE) %>%
    filter(Date >= start, Date <= end) %>%
    group_by(Date) %>%
    summarise(n())
  return(queue_history)
}
# Compare with benchmarks
tst = microbenchmark::microbenchmark(
  method_mapreduce(),
  method_intervals(),
  method_currentsolution(),
  times = 5)

microbenchmark::autoplot.microbenchmark(tst) +
  scale_y_log10(
    name = sprintf("Time [%s]", attr(summary(tst), "unit")),
    breaks = scales::trans_breaks("log10", function(x) round(10^x)))