 

Is there a RAM efficient way to calculate the median over a complement set?

I am looking for a RAM-efficient way to calculate the median over a complement set with the help of data.table.

For a set of observations from different groups, I am interested in an implementation of a median of "other groups". I.e., if I have a data.table with one value column and one grouping column, I want, for each group, to calculate the median of the values in all groups except the current one. E.g. for group 1 we calculate the median from all values except those that belong to group 1, and so on.

A concrete example data.table:

dt <- data.table(value = c(1,2,3,4,5), groupId = c(1,1,2,2,2))
dt
#    value groupId
# 1:     1       1
# 2:     2       1
# 3:     3       2
# 4:     4       2
# 5:     5       2

I would like medianOfAllTheOtherGroups to be 4 for group 1 and 1.5 for group 2, repeated for each row of the same data.table:

dt <- data.table(value = c(1,2,3,4,5), groupId = c(1,1,2,2,2), medianOfAllTheOtherGroups = c(4, 4, 1.5, 1.5, 1.5))

dt
#    value groupId medianOfAllTheOtherGroups
# 1:     1       1                       4.0 # median of all groups _except_ 1
# 2:     2       1                       4.0
# 3:     3       2                       1.5 # median of all groups _except_ 2
# 4:     4       2                       1.5  
# 5:     5       2                       1.5
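For reference, the expected column can be pinned down with a naive grouped expression (this is essentially the subsetting approach whose RAM use is the problem; `dt$value[-.I]` drops the current group's row indices from the full value column):

```r
library(data.table)

dt <- data.table(value = c(1, 2, 3, 4, 5), groupId = c(1, 1, 2, 2, 2))

# for each group, take the median of all values outside that group
dt[, medianOfAllTheOtherGroups := median(dt$value[-.I]), by = groupId]

dt$medianOfAllTheOtherGroups
# [1] 4.0 4.0 1.5 1.5 1.5
```

This reproduces the 4 / 1.5 values above, but it allocates a nearly full-length copy of `value` once per group, which is exactly what becomes prohibitive at 100,000 groups.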

To calculate the median for each group only once (and not once per observation), we went for an implementation with a loop. The current implementation works nicely for small data.tables as input, but suffers badly from large RAM consumption on larger data sets, with the medians computed in a loop as the bottleneck (note: for the real use case we have a dt with 3,000,000 rows and 100,000 groups). I have worked very little on reducing RAM consumption. Can an expert help to improve the RAM usage of the minimal example I provide below?

MINIMAL EXAMPLE:

library(data.table)
set.seed(1)
numberOfGroups <- 10
numberOfValuesPerGroup <- 100

# Data table with columns
# groupId - ids of the groups available
# value - values we want to calculate the median over
dt <-
  data.table(
    groupId = as.character(rep(1:numberOfGroups, each = numberOfValuesPerGroup)),
    value = round(runif(n = numberOfGroups * numberOfValuesPerGroup), 4)
  )

# calculate the median from all observations for those groups that do not 
# require a separate treatment
medianOfAllGroups <-  median(dt$value)
dt$medianOfAllTheOtherGroups <- medianOfAllGroups


# generate extra data.table to collect results for selected groups
includedGroups <-  dt[, unique(groupId)]
dt_otherGroups <- 
  data.table(groupId = includedGroups,
             medianOfAllTheOtherGroups =  as.numeric(NA)
  )

# loop over all selected groups and calculate the median from all observations
# except of those that belong to this group
for (id in includedGroups){
  dt_otherGroups[groupId == id, 
                 medianOfAllTheOtherGroups := median(dt[groupId != id, value])]
}

# merge subset data to overall data.table
dt[dt_otherGroups, medianOfAllTheOtherGroups := i.medianOfAllTheOtherGroups, 
   on = c("groupId")]

PS: here is the example output for 10 groups with 100 observations each:

dt
#      groupId  value medianOfAllTheOtherGroups
#   1:       1 0.2655                   0.48325
#   2:       1 0.3721                   0.48325
#   3:       1 0.5729                   0.48325
#   4:       1 0.9082                   0.48325
#   5:       1 0.2017                   0.48325
# ---
#  996:      10 0.7768                   0.48590
#  997:      10 0.6359                   0.48590
#  998:      10 0.2821                   0.48590
#  999:      10 0.1913                   0.48590
# 1000:      10 0.2655                   0.48590

Some numbers for different settings of the minimal example (tested on a Mac Book Pro with 16Gb RAM):

numberOfGroups  numberOfValuesPerGroup  Memory (GB)  Runtime (s)
           500                      50         0.48         1.47
          5000                      50        39.00        58.00
            50                    5000         0.42         0.65

All memory values were extracted from the output of profvis (see the example screenshot for the smallest setting).

Asked by Julia Hillmann, Mar 11 '21


2 Answers

The median is the midpoint of a dataset that's been ordered. For an odd number of values in a dataset, the median is simply the middle number. For an even number of values in a dataset, the median is the mean of the two numbers closest to the middle.

To demonstrate, consider the simple vector of 1:8

1 | 2 | 3 | **4 | 5** | 6 | 7 | 8

In this case, our midpoint index is 4.5, and because the values equal their ranks in this simple example, the median itself is also 4.5.

Now consider groupings where one grouping is the first value of the vector. That is, our group is only 1. We know that this will shift our median towards the right (i.e. larger) because we removed a low value of the distribution. Our new distribution is 2:8 and the median is now 5.

2 | 3 | 4 | *5* | 6 | 7 | 8

This is only interesting if we can determine a relationship between these shifts. Specifically, our original midpoint was 4.5. Our new midpoint based on the original vector is 5.

Let's demonstrate a larger mixture with a group of 1, 3, and 7. In this case, we have 2 values below the original midpoint and one value above the original midpoint. Our new median is 5:

2 | 4 | **5** | 6 | 8

So empirically, we have determined that removing a number below the midpoint shifts our mid_point index by +0.5, and removing a number above the midpoint shifts our mid_point index by -0.5. There are a few other stipulations:

We need to make sure that our grouping's own values do not end up at the new mid_point. Consider a group of 1, 2, and 5. Based on the math above, we would shift up by 0.5 based on (2 below - 1 above) / 2, for a new mid_point of 5. That's wrong, because 5 was itself removed! We need to account for this.

3 | 4 | ** 6 ** | 7 | 8

Likewise, with our shifted mid_point, we also need to look back and verify that our rankings are still aligned. In a sequence of 1:20, consider a group of c(1:9, 11). While 11 is above the original mid_point of 10.5, it is not above the shifted mid_point of 10.5 + (9 below - 1 above) / 2 = 14.5. The actual median is 15.5, because 11 is now below the new midway point.

10 | 12 | 13 | 14 | **15 | 16** | 17 | 18 | 19 | 20
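All of the worked shifts above can be verified with plain base R; `setdiff` stands in for removing the group's values from the sorted sequence:

```r
# sanity checks for the worked examples above
x <- 1:8
stopifnot(median(x) == 4.5)                        # original midpoint
stopifnot(median(setdiff(x, 1)) == 5)              # drop group {1}
stopifnot(median(setdiff(x, c(1, 3, 7))) == 5)     # drop group {1, 3, 7}
stopifnot(median(setdiff(x, c(1, 2, 5))) == 6)     # drop group {1, 2, 5}

y <- 1:20
stopifnot(median(setdiff(y, c(1:9, 11))) == 15.5)  # drop group {1:9, 11}
```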

TL;DR: what's the code??

In all of the examples above, the grouping's ranking vector is given in data.table by the special symbol .I, assuming we did setorder(). Doing the same math as above, we don't have to waste time subsetting the dataset; we can instead determine what the new index(es) should be based on what's been removed from the distribution.


setorder(dt, value)

nr = nrow(dt)
is_even = nr %% 2L == 0L
mid_point = (nr + 1L) / 2L

dt[, medianOfAllTheOtherGroups :=
     {
       below = sum(.I < mid_point)
       is_midpoint = is_even && below && (.I[below] + 1L == mid_point)

       above = .N - below - is_midpoint
       new_midpoint = (below - above) / 2L + mid_point
       ## TODO turn this into a loop in case this is true for multiple values
       if (new_midpoint > mid_point && above && .I[below + 1L] < new_midpoint) { ## check that none of the group's indices were above
         below = below - 1L
         new_midpoint = new_midpoint + 1L
       } else if (new_midpoint < mid_point && below && .I[below] > new_midpoint) {
         below = below + 1L
         new_midpoint = new_midpoint - 1L
       }
       if (((nr - .N + 1L) %% 2L) == 0L) {
         dt$value[new_midpoint]
       } else {
         ## TODO turn this into a loop in case this is true for multiple values
         default_inds = as.integer(new_midpoint + c(-0.5, 0.5))
         if (below) {
           if (.I[below] == default_inds[1L])
             default_inds[1L] = .I[below] - 1L
         }
         if (above) {
           if (.I[below + 1L + is_midpoint] == default_inds[2L])
             default_inds[2L] = .I[below + 1L] + 1L
         }
         mean(dt$value[default_inds])
       }
     }
   , by = groupId]

Performance

This is using bench::mark, which checks that all results are equal. For Henrik's and my solutions, I reorder the results back to the original grouping so that they are all equal.

Note that while this (complicated) algorithm is the most efficient, I do want to emphasize that the mem_alloc figures likely reflect cumulative allocation rather than extreme peak RAM usage. The other answers have to subset 5,000 times, each time allocating a vector of length 249,950 to calculate a new median. That's about 2 MB per loop iteration just on allocation (e.g., 10 GB overall).
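That back-of-the-envelope allocation estimate works out as follows (assuming 8-byte doubles and the 5000 × 50 setting):

```r
n_groups <- 5000
n_per_group <- 50
n_other <- n_groups * n_per_group - n_per_group  # 249,950 values per subset

bytes_per_subset <- n_other * 8                  # one double is 8 bytes
mb_per_loop <- bytes_per_subset / 1e6            # ~2 MB allocated per iteration
gb_total <- bytes_per_subset * n_groups / 1e9    # ~10 GB across all 5,000 groups
```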

# A tibble: 6 x 13 (trailing columns truncated)
  expression            min   median `itr/sec` mem_alloc
  <bch:expr>       <bch:tm> <bch:tm>     <dbl> <bch:byt>
1 cole              225.7ms  271.8ms    3.68      6.34MB
2 henrik_smart_med    17.7s    17.7s    0.0564   23.29GB
3 henrik_base_med      1.6m     1.6m    0.0104   41.91GB
4 henrik_fmed         55.9s    55.9s    0.0179   32.61GB
5 christian_lookup    54.7s    54.7s    0.0183   51.39GB
6 talat_unlist        35.9s    35.9s    0.0279   19.02GB
Full profile code
library(data.table)
library(collapse)
set.seed(76)
numberOfGroups <- 5000
numberOfValuesPerGroup <- 50

dt <-
  data.table(
    groupId = (rep(1:numberOfGroups, each = numberOfValuesPerGroup)),
    value = round(runif(n = numberOfGroups * numberOfValuesPerGroup, 0, 10), 4)
  )

## this is largely instantaneous.
dt[ , ri := .I]

bench::mark( cole = {
  setorder(dt, value)
  
  nr = nrow(dt)
  is_even = nr %% 2L == 0L
  mid_point = (nr + 1L) / 2L
  
  dt[, medianOfAllTheOtherGroups :=
       {
         below = sum(.I < mid_point)
         is_midpoint = is_even && below && (.I[below] + 1L == mid_point)
         
         above = .N - below - is_midpoint
         new_midpoint = (below - above) / 2L + mid_point
         ## TODO turn this into a loop in case this is true for multiple values
         if (new_midpoint > mid_point && above && .I[below + 1L] < new_midpoint) { ## check that none of the group's indices were above
           below = below - 1L
           new_midpoint = new_midpoint + 1L
         } else if (new_midpoint < mid_point && below && .I[below] > new_midpoint) {
           below = below + 1L
           new_midpoint = new_midpoint - 1L
         }
         if (((nr - .N + 1L) %% 2L) == 0L) {
           as.numeric(dt$value[new_midpoint])
         } else {
           ## TODO turn this into a loop in case this is true for multiple values
           default_inds = as.integer(new_midpoint + c(-0.5, 0.5))
           if (below) {
             if (.I[below] == default_inds[1L])
               default_inds[1L] = .I[below] - 1L
           }
           if (above) {
             if (.I[below + 1L + is_midpoint] == default_inds[2L])
               default_inds[2L] = .I[below + 1L] + 1L
           }
           mean(dt$value[default_inds])
         }
       }
     , by = groupId]
  
  setorder(dt, ri)

},
henrik_smart_med = {
  
  # number of rows in original data    
  nr = nrow(dt)
  
  # order by value
  setorder(dt, value)
  
  dt[ , medianOfAllTheOtherGroups := {
    
    # length of "other"
    n = nr - .N
    
    # ripped from median
    half = (n + 1L) %/% 2L
    if (n %% 2L == 1L) dt$value[-.I][half]
    else mean(dt$value[-.I][half + 0L:1L])
    
  }, by = groupId]
  setorder(dt, ri)
},
henrik_base_med = {
  dt[ , med := median(dt$value[-.I]), by = groupId]
},
henrik_fmed = {
  dt[ , med := fmedian(dt$value[-.I]), by = groupId]
}, 
christian_lookup = {
  nrows <- dt[, .N]
  dt_match <- dt[, .(nrows_other = nrows- .N), by = .(groupId_match = groupId)]
  dt_match[, odd := nrows_other %% 2]
  dt_match[, idx1 := ceiling(nrows_other/2)]
  dt_match[, idx2 := ifelse(odd, idx1, idx1+1)]
  
  setkey(dt, value)
  dt_match[, medianOfAllTheOtherGroups := dt[groupId != groupId_match][c(idx1, idx2), sum(value)/2], by = groupId_match]
  dt[dt_match, medianOfAllTheOtherGroups := i.medianOfAllTheOtherGroups, 
     on = c(groupId = "groupId_match")]
},
talat_unlist = {
  d2 = dt[, .(value = list(value)), keyby = .(groupId)]
  setkey(dt, groupId)
  dt[, medianOfAllTheOtherGroups := 
       fmedian(d2[-.GRP, unlist(value, use.names = FALSE, recursive = FALSE)]), 
     by = .(groupId)]  
})
Answered by Cole, Oct 07 '22

Disclaimer: for some reason the profiling keeps crashing my session, so unfortunately I have no such results. However, because my alternatives were a bit faster than OP's, I thought it could still be worth posting them so that OP may assess their memory use.


Data

# numberOfGroups <- 5000
# numberOfValuesPerGroup <- 50
# dt <- ...as in OP...
d1 = copy(dt)
d1[ , ri := .I] # just to able to restore original order when comparing result with OP
d2 = copy(dt)
d3 = copy(dt)

Explanation

I shamelessly borrow lines 28, 30-32 from median.default to make a stripped-down version of median.

Calculate total number of rows in the original data (nrow(d1)). Order data by 'value' (setorder). By ordering, two instances of sort in the median code can be removed.

For each 'groupID' (by = groupId):

Calculate length of "other" (number of rows in the original data minus number of rows of current group (.N)).

Calculate median, where the input values are d1$value[-.I], i.e. the original values except the indices of the current group; ?.I: "While grouping, it holds for each item in the group, its row location in x".

Code & Timing

system.time({

  # number of rows in original data    
  nr = nrow(d1)

  # order by value
  setorder(d1, value)
  
  d1[ , med := {
    
    # length of "other"
    n = nr - .N
    
    # ripped from median
    half = (n + 1L) %/% 2L
    if (n %% 2L == 1L) d1$value[-.I][half]
    else mean(d1$value[-.I][half + 0L:1L])
    
  }, by = groupId]
})
  
# user  system elapsed 
# 4.08    0.01    4.07

# OP's code on my (old) PC
#  user  system elapsed 
# 84.02    7.26   86.75 

# restore original order & check equality
setorder(d1, ri)
all.equal(dt$medianOfAllTheOtherGroups, d1$med)
# [1] TRUE 

Comparison with base::median & collapse::fmedian

I also tried the "-.I" with base::median and collapse::fmedian, where the latter was about twice as fast as base::median.

system.time(
  d2[ , med := median(d2$value[-.I]), by = groupId]
)
#   user  system elapsed 
#  26.86    0.02   26.85 

library(collapse)
system.time(
  d3[ , med := fmedian(d3$value[-.I]), by = groupId]
)
#   user  system elapsed 
#  16.95    0.00   16.96  

all.equal(dt$medianOfAllTheOtherGroups, d2$med)
# TRUE

all.equal(dt$medianOfAllTheOtherGroups, d3$med)
# TRUE

Thanks a lot to @Cole for helpful comments which improved the performance.

Answered by Henrik, Oct 07 '22