I have data with both numeric and non-numeric columns, like this:
mydt
vnum1 vint1 vfac1 vch1
1: -0.30159484 8 3 E
2: -0.09833430 8 1 D
3: -2.15963282 1 3 D
4: 0.03904374 5 2 B
5: 1.54928970 4 1 C
6: -0.73873654 5 1 A
7: -0.68594479 9 2 B
8: 1.35765612 1 2 E
9: 1.46958351 2 1 B
10: -0.89623979 2 4 E
How can I select only the numeric columns and calculate their mean using data.table?
I have tried this:
mydt[ , lapply(.SD, mean), ]
# vnum1 vint1 vfac1 vch1
# 1: -0.046491 4.5 NA NA
# Warning messages:
# 1: In mean.default(X[[3L]], ...) :
# argument is not numeric or logical: returning NA
# 2: In mean.default(X[[4L]], ...) :
# argument is not numeric or logical: returning NA
dput(mydt)
structure(list(vnum1 = c(-0.301594844692861, -0.0983343040483769,
-2.15963282153076, 0.03904374068617, 1.54928969700272, -0.738736535236348,
-0.685944791146016, 1.35765612481877, 1.46958350568506, -0.896239790653183
), vint1 = c(8L, 8L, 1L, 5L, 4L, 5L, 9L, 1L, 2L, 2L), vfac1 = structure(c(3L,
1L, 3L, 2L, 1L, 1L, 2L, 2L, 1L, 4L), .Label = c("1", "2", "3",
"4"), class = "factor"), vch1 = structure(c(5L, 4L, 4L, 2L, 3L,
1L, 2L, 5L, 2L, 5L), .Label = c("A", "B", "C", "D", "E"), class = "factor")), .Names = c("vnum1",
"vint1", "vfac1", "vch1"), class = c("data.table", "data.frame"
), row.names = c(NA, -10L), .internal.selfref = <pointer: 0x991c070>)
I have also tried the following, but it does not work:
mydt[ , lapply(.SD, is.numeric),]
# vnum1 vint1 vfac1 vch1
# 1: TRUE TRUE FALSE FALSE
mydt[,mydt[,lapply(.SD, is.numeric),]]
# vnum1 vint1 vfac1 vch1
# 1: TRUE TRUE FALSE FALSE
mydt[ , mydt[ , lapply(.SD, is.numeric) , ], with = F]
# Error in Math.data.frame(j) :
# non-numeric variable in data frame: vnum1vint1vfac1vch1
mydt[ , c(mydt[ , lapply(.SD, is.numeric)), ], with = F]
# Error: unexpected ')' in "mydt[,c(mydt[,lapply(.SD, is.numeric))"
As suggested by @Arun, I tried the following, but I cannot get a subset:
xx <- mydt[ , lapply(.SD, is.numeric), ]
xx
# vnum1 vint1 vfac1 vch1
# 1: TRUE TRUE FALSE FALSE
mydt[ , lapply(.SD, mean), .SDcols = xx]
# Error in `[.data.table`(mydt, , lapply(.SD, mean), .SDcols = xx) :
# .SDcols should be column numbers or names
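One way around this error, sketched below on stand-in data (the original random values are not reproduced), is to convert the 1-row logical result into column positions, which .SDcols does accept:

```r
library(data.table)

# stand-in for mydt with the same column types (values are made up)
mydt <- data.table(vnum1 = rnorm(10),
                   vint1 = sample(1:9, 10, replace = TRUE),
                   vfac1 = factor(sample(1:4, 10, replace = TRUE)),
                   vch1  = factor(sample(LETTERS[1:5], 10, replace = TRUE)))

xx <- mydt[, lapply(.SD, is.numeric)]   # 1-row data.table of logicals
# which() turns the logicals into column numbers, which .SDcols accepts
mydt[, lapply(.SD, mean), .SDcols = which(unlist(xx))]
```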
As suggested by @David, I tried the following, but I get NULL values for the non-numeric columns. I want a subset of mydt so that the other columns are not listed at all.
mydt[ , lapply(.SD, function(x) if(is.numeric(x)) mean(x))]
# vnum1 vint1 vfac1 vch1
# 1: -0.046491 4.5 NULL NULL
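Alternatively, those NULL entries can be stripped before data.table assembles the result, since base R's Filter() works on the plain list that lapply() returns. A sketch on stand-in data (the original random values are not reproduced):

```r
library(data.table)

# stand-in for mydt (values are made up)
mydt <- data.table(vnum1 = rnorm(10),
                   vint1 = sample(1:9, 10, replace = TRUE),
                   vfac1 = factor(sample(1:4, 10, replace = TRUE)),
                   vch1  = factor(sample(LETTERS[1:5], 10, replace = TRUE)))

# if() with no else returns NULL for non-numeric columns;
# Filter(Negate(is.null), ...) drops those entries from the list
mydt[, Filter(Negate(is.null),
              lapply(.SD, function(x) if (is.numeric(x)) mean(x)))]
```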
At this point I miss data.frame, where this is straightforward:
sapply(mydf, is.numeric)
# vnum1 vint1 vfac1 vch1
# TRUE TRUE FALSE FALSE
mydf[sapply(mydf, is.numeric)]
# vnum1 vint1
#1 -0.30159484 8
#2 -0.09833430 8
#3 -2.15963282 1
#4 0.03904374 5
#5 1.54928970 4
#6 -0.73873654 5
#7 -0.68594479 9
#8 1.35765612 1
#9 1.46958351 2
#10 -0.89623979 2
sapply(mydf[sapply(mydf, is.numeric)], mean)
# vnum1 vint1
#-0.046491 4.500000
OK. Thanks to David's comment, the following works:
mydt[, sapply(mydt, is.numeric), with = FALSE][,sapply(.SD, mean),]
# vnum1 vint1
# -0.046491 4.500000
mydt[, sapply(mydt, is.numeric), with = FALSE]
# vnum1 vint1
# 1: -0.30159484 8
# 2: -0.09833430 8
# 3: -2.15963282 1
# 4: 0.03904374 5
# ...
By searching on SO for .SDcols, I landed on this answer, which I think explains quite nicely how to use it.
cols = sapply(mydt, is.numeric)
cols = names(cols)[cols]
mydt[, lapply(.SD, mean), .SDcols = cols]
# vnum1 vint1
# 1: -0.046491 4.5
Doing mydt[, sapply(mydt, is.numeric), with = FALSE] (note: the "modern" way to do this is mydt[ , .SD, .SDcols = is.numeric]) is not that efficient, because it first subsets your data.table to those columns, which makes a (deep) copy - more memory used than necessary.
And using colMeans coerces the data.table into a matrix, which again is not memory efficient.
You can use the form below in a single line, without needing sapply:
mydt[, lapply(.SD, mean), .SDcols = is.numeric]
Also, if you're working with real data, there's a good chance it contains NA values. Here's how to handle NAs:
mydt[, lapply(.SD, function(i) mean(i, na.rm = T)), .SDcols = is.numeric]
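A quick illustration of the difference, on a made-up toy table with one missing value (the .SDcols = is.numeric form needs a reasonably recent data.table):

```r
library(data.table)

# toy table with a missing value (data is made up)
dt <- data.table(a = c(1, 2, NA), b = c(10, 20, 30), ch = c("x", "y", "z"))

dt[, lapply(.SD, mean), .SDcols = is.numeric]
# without na.rm, the column containing NA averages to NA

dt[, lapply(.SD, function(i) mean(i, na.rm = TRUE)), .SDcols = is.numeric]
# with na.rm = TRUE, the NA is dropped before averaging
```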
I ran into the same problem; the code below may help too.
data("mtcars")
mtcars$X1 <- factor(mtcars$gear, levels = c(4,3,5)); str(mtcars) #create an non numeric column X1
my.mean <- function(x){ if(is.numeric(x)) c(mean(x), median(x))}
my.df <- setNames(as.data.frame(unlist(lapply(mtcars, FUN = my.mean))), "values"); my.df
my.df$names <- rep(c("mean","median"), times = length(row.names(my.df))/2); my.df
my.df$variables <- rownames(my.df); my.df
library(stringr)
my.df$variables <- str_remove(my.df$variables, "[12]"); my.df
library(tidyr)   # spread() comes from tidyr
data_wide <- spread(my.df, names, values)
data_wide
variables mean median
1 am 0.406250 0.000
2 carb 2.812500 2.000
3 cyl 6.187500 6.000
4 disp 230.721875 196.300
5 drat 3.596563 3.695
6 gear 3.687500 4.000
7 hp 146.687500 123.000
8 mpg 20.090625 19.200
9 qsec 17.848750 17.710
10 vs 0.437500 0.000
11 wt 3.217250 3.325
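The same mean/median table can be built without stringr or tidyr by melting to long form and aggregating per variable; a sketch assuming a reasonably recent data.table (which provides its own melt method):

```r
library(data.table)

data("mtcars")
mtcars$X1 <- factor(mtcars$gear, levels = c(4, 3, 5))  # non-numeric column, as above
dt  <- as.data.table(mtcars)
num <- names(dt)[sapply(dt, is.numeric)]               # X1 drops out here

# melt the numeric columns to long form, then compute both statistics per variable
long <- melt(dt, measure.vars = num)
res  <- long[, .(mean = mean(value), median = median(value)), by = variable]
res
```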