Randomly remove duplicated rows using dplyr()

Tags: r, dplyr

As a follow-up to this question: Remove duplicated rows using dplyr, I have the following:

How do you randomly remove duplicated rows using dplyr (among other packages)?

My command now is:

data.uniques <- distinct(data, KEYVARIABLE, .keep_all = TRUE)

But it returns the first occurrence of each KEYVARIABLE. I want that selection to be random: any one of the 1 to n occurrences of that KEYVARIABLE should be equally likely to be kept.

For instance:

KEYVARIABLE BMI
1 24.2
2 25.3
2 23.2
3 18.9
4 19
4 20.1
5 23.0

Currently my command returns:

KEYVARIABLE BMI
1 24.2
2 25.3
3 18.9
4 19
5 23.0

I want it to randomly return one of the n duplicated rows, for instance:

KEYVARIABLE BMI
1 24.2
2 23.2
3 18.9
4 19
5 23.0
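
For reference, here is a minimal sketch that rebuilds the example data above in base R (assuming the data frame is called data, as in the command in the question):

library(dplyr)

# Rebuild the example data from the question
data <- data.frame(
  KEYVARIABLE = c(1, 2, 2, 3, 4, 4, 5),
  BMI         = c(24.2, 25.3, 23.2, 18.9, 19, 20.1, 23.0)
)

# Current approach: always keeps the first occurrence per KEYVARIABLE
data.uniques <- distinct(data, KEYVARIABLE, .keep_all = TRUE)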
asked Aug 21 '17 by Sander W. van der Laan

2 Answers

One option would be to group by 'KEYVARIABLE', sample the sequence of rows within each group to pick one, and subset the dataset with it:

library(data.table)
setDT(df1)[, .SD[sample(.N)[1]], KEYVARIABLE]
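
For example, on the data above this could be run as follows (a sketch; set.seed() is optional and only makes the random draw reproducible, and df1 stands for the question's data frame):

library(data.table)
set.seed(123)                      # optional: reproducible draw
df1 <- as.data.table(data)         # example data from the question
# sample(.N) permutes the row indices within each group; [1] keeps one at random
df1[, .SD[sample(.N)[1]], KEYVARIABLE]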

Or using dplyr:

library(dplyr)
df1 %>% 
   group_by(KEYVARIABLE) %>%
   sample_n(1)
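
As a side note, in dplyr 1.0.0 and later sample_n() is superseded by slice_sample(), so a roughly equivalent version would be:

library(dplyr)
df1 %>%
   group_by(KEYVARIABLE) %>%
   slice_sample(n = 1) %>%   # one random row per group
   ungroup()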
answered Nov 03 '22 by akrun

Just shuffle the rows before selecting the first occurrence (using distinct):

library(dplyr)
distinct(df[sample(1:nrow(df)), ], 
         KEYVARIABLE, 
         .keep_all = TRUE)
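
Because distinct() keeps the first occurrence it meets, shuffling the rows first makes that first occurrence a random one within each KEYVARIABLE. Applied to the example data (a sketch; set.seed() only makes the shuffle reproducible):

library(dplyr)
set.seed(123)                            # optional: reproducible shuffle
distinct(data[sample(1:nrow(data)), ],   # shuffle rows, then keep first per key
         KEYVARIABLE,
         .keep_all = TRUE)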
answered Nov 03 '22 by pogibas