How do I read/write libsvm data into/from R?
The libsvm format is sparse data of the form
<class/target>[ <attribute number>:<attribute value>]*
(cf. Compressed Row Storage (CRS)), e.g.,
1 10:3.4 123:0.5 34567:0.231
0.2 22:1 456:03
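For a concrete feel of the format, a single line can be pulled apart in base R (a minimal sketch, not a full parser; the variable names are just illustrative):

```r
line   <- "1 10:3.4 123:0.5 34567:0.231"
tokens <- strsplit(line, " ", fixed = TRUE)[[1]]
target <- as.numeric(tokens[1])                   # the class/target, here 1
pairs  <- strsplit(tokens[-1], ":", fixed = TRUE) # "index:value" pairs
index  <- as.integer(sapply(pairs, `[`, 1))       # attribute numbers
value  <- as.numeric(sapply(pairs, `[`, 2))       # attribute values
# index: 10 123 34567; value: 3.4 0.5 0.231 -- all other attributes are 0
```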
I am sure I could whip something up myself, but I would much rather use something off the shelf. However, the R library foreign does not seem to provide the necessary functionality.
e1071 is off the shelf:
install.packages("e1071")
library(e1071)
read.matrix.csr(...)
write.matrix.csr(...)
Note: it is implemented in R, not in C, so it is dog-slow. It even has a special vignette, Support Vector Machines—the Interface to libsvm in package e1071.
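Basic usage looks roughly like this (a sketch assuming e1071 and its SparseM dependency are installed; the toy matrix and temp file are mine):

```r
library(e1071)
library(SparseM)   # supplies the matrix.csr class e1071 uses

# a toy dense matrix with zeros, plus labels passed via y
m   <- matrix(c(0, 0.2, 3.4, 0, 0.5, 1), nrow = 2)
tmp <- tempfile()
write.matrix.csr(as.matrix.csr(m), file = tmp, y = c(1, 0))

# with labels present, read.matrix.csr returns a list with
# x (a matrix.csr) and y (the targets, a factor by default)
d <- read.matrix.csr(tmp)
as.matrix(d$x)   # dense view of the features
```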
r.vw is bundled with vowpal_wabbit. Note: it is implemented in R, not in C, so it is dog-slow.
I have been running a job using zygmuntz's solution on a dataset with 25k observations (rows) for almost 5 hrs now. It has done 3k-ish rows so far. It was taking so long that I coded this up in the meantime (based on zygmuntz's code):
require(Matrix)

read.libsvm = function(filename) {
  content = readLines(filename)
  num_lines = length(content)
  # targets: the first space-delimited token of each line
  # (substr(content, 1, 1) would truncate multi-character targets like "0.2")
  targets = sapply(strsplit(content, ' ', fixed = TRUE), `[`, 1L)
  # column index -1 for the target maps to column 1 after the +2 shift below
  tomakemat = cbind(1:num_lines, -1, targets)
  # loop over lines, turning each "index:value" pair into a triplet row
  makemat = rbind(tomakemat,
                  do.call(rbind,
                          lapply(1:num_lines, function(i) {
                            # split by spaces, drop the target
                            line = strsplit(content[i], ' ', fixed = TRUE)[[1]]
                            cbind(i, t(simplify2array(strsplit(line[-1], ':',
                                                               fixed = TRUE))))
                          })))
  class(makemat) = "numeric"
  # column 1 holds the target, columns 2+ hold the features
  yx = sparseMatrix(i = makemat[, 1],
                    j = makemat[, 2] + 2,
                    x = makemat[, 3])
  return(yx)
}
This ran in minutes on the same machine (there may have been memory issues with zygmuntz's solution too, not sure). Hope this helps anyone with the same problem.
Remember, if you need to do big computations in R, VECTORIZE!
EDIT: fixed an indexing error I found this morning.
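The heavy lifting above is done by Matrix::sparseMatrix, which builds a sparse matrix from (i, j, x) triplets, the same form the function assembles from the text. A stripped-down sketch with made-up values:

```r
library(Matrix)

# (row, column, value) triplets; column 1 is reserved for the target
i <- c(1, 1, 2, 2)
j <- c(1, 3, 1, 4)
x <- c(1, 3.4, 0.2, 1)
m <- sparseMatrix(i = i, j = j, x = x)
as.matrix(m)
#      [,1] [,2] [,3] [,4]
# [1,]  1.0    0  3.4    0
# [2,]  0.2    0  0.0    1
```

Unmentioned positions default to zero, which is exactly what the sparse libsvm format leaves implicit.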
I came up with my own ad hoc solution leveraging some data.table utilities. It ran in almost no time on the test data set I found (the Boston Housing data).
Converting that to a data.table (orthogonal to the solution, but added here for easy reproducibility):
library(data.table)
x = fread("/media/data_drive/housing.data.fw",
sep = "\n", header = FALSE)
#usually fixed-width conversion is harder, but everything here is numeric
columns = c("CRIM", "ZN", "INDUS", "CHAS",
"NOX", "RM", "AGE", "DIS", "RAD",
"TAX", "PTRATIO", "B", "LSTAT", "MEDV")
DT = with(x, fread(paste(gsub("\\s+", "\t", V1), collapse = "\n"),
header = FALSE, sep = "\t",
col.names = columns))
Here it is:
DT[ , fwrite(as.data.table(paste0(
MEDV, " | ", sapply(transpose(lapply(
names(.SD), function(jj)
paste0(jj, ":", get(jj)))),
paste, collapse = " "))),
"/path/to/output", col.names = FALSE, quote = FALSE),
.SDcols = !"MEDV"]
#what gets sent to as.data.table:
#[1] "24 | CRIM:0.00632 ZN:18 INDUS:2.31 CHAS:0 NOX:0.538 RM:6.575
# AGE:65.2 DIS:4.09 RAD:1 TAX:296 PTRATIO:15.3 B:396.9 LSTAT:4.98 MEDV:24"
#[2] "21.6 | CRIM:0.02731 ZN:0 INDUS:7.07 CHAS:0 NOX:0.469 RM:6.421
# AGE:78.9 DIS:4.9671 RAD:2 TAX:242 PTRATIO:17.8 B:396.9 LSTAT:9.14 MEDV:21.6"
# ...
There may be a better way to get this understood by fwrite than as.data.table, but I can't think of one (until setDT works on vectors).
I replicated this to test its performance on a bigger data set (just blow up the current data set):
DT2 = rbindlist(replicate(1000, DT, simplify = FALSE))
The operation was pretty fast compared to some of the times reported here (I haven't bothered comparing directly yet):
system.time(.)
# user system elapsed
# 8.392 0.000 8.385
I also tested using writeLines instead of fwrite, but the latter was better.
Looking at it again, I see it might take a while to figure out what's going on. Maybe the magrittr-piped version will be easier to follow:
DT[ ,
#1) prepend each column's values with the column name
lapply(names(.SD), function(jj)
paste0(jj, ":", get(jj))) %>%
#2) transpose this list (using data.table's fast tool)
# (was column-wise, now row-wise)
#3) concatenate columns, separated by " "
transpose %>% sapply(paste, collapse = " ") %>%
#4) prepend each row with the target value
# (with Vowpal Wabbit in mind, separate with a pipe)
paste0(MEDV, " | ", .) %>%
#5) convert this to a data.table to use fwrite
as.data.table %>%
#6) fwrite it; exclude nonsense column name,
# and force quotes off
fwrite("/path/to/data",
col.names = FALSE, quote = FALSE),
.SDcols = !"MEDV"]
Reading in such files is much easier**:
#quickly read data; don't split within lines
x = fread("/path/to/data", sep = "\n", header = FALSE)
#tstrsplit is transpose(strsplit(.))
dt1 = x[ , tstrsplit(V1, split = "[| :]+")]
#even columns have variable names
nms = c("target_name",
unlist(dt1[1L, seq(2L, ncol(dt1), by = 2L),
with = FALSE]))
#odd columns have values
DT = dt1[ , seq(1L, ncol(dt1), by = 2L), with = FALSE]
#add meaningful names
setnames(DT, nms)
**this will not work with "ragged"/sparse input data. I don't think there's a way to extend this to work in such cases.
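For a self-contained illustration of that read-back logic, here is the same splitting applied to two in-memory lines (made-up sample values, truncated to two features):

```r
library(data.table)

x <- data.table(V1 = c("24 | CRIM:0.00632 ZN:18",
                       "21.6 | CRIM:0.02731 ZN:0"))
# tstrsplit is transpose(strsplit(.)): one column per token
dt1 <- x[ , tstrsplit(V1, split = "[| :]+")]
# even columns hold variable names, odd columns hold values
nms <- c("target",
         unlist(dt1[1L, seq(2L, ncol(dt1), by = 2L), with = FALSE]))
DT  <- dt1[ , seq(1L, ncol(dt1), by = 2L), with = FALSE]
setnames(DT, nms)
# DT now has character columns target, CRIM, ZN
```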