(in R) Why is the result of ksvm using a user-defined linear kernel different from that of ksvm using "vanilladot"?

I wanted to use a user-defined kernel function for ksvm in R, so as practice I tried to write my own vanilladot kernel and compare it with the "vanilladot" kernel that is built into "kernlab".

I wrote my kernel as follows.

library(kernlab)

#
### vanilla kernel with class "kernel"
#
kfunction.k <- function(){
   k <- function(x, y){crossprod(x, y)}
   class(k) <- "kernel"
   k}
l <- 0.1 ; C <- 1/(2*l)

### use kfunction.k (x: numeric feature matrix, y: vector of labels in {-1, 1}, both assumed to be defined already)
tmp <- ksvm(x, factor(y), scaled = FALSE, type = "C-svc", kernel = kfunction.k(), C = C)
alpha(tmp)[[1]]
ind <- alphaindex(tmp)[[1]]
x.s <- x[ind, ] ; y.s <- y[ind]
# recover the primal weight vector: w = sum_i alpha_i * y_i * x_i
w.class.k <- t(alpha(tmp)[[1]] * y.s) %*% x.s
w.class.k

I thought the result of this operation would be equal to the result of the following code. However, it isn't.

#
### use "vanilladot"
#
l <- 0.1 ; C <- 1/(2*l)
tmp1 <- ksvm(x, factor(y), scaled = FALSE, type = "C-svc", kernel = "vanilladot", C = C)
alpha(tmp1)[[1]]
ind1 <- alphaindex(tmp1)[[1]]
x.s <- x[ind1, ] ; y.s <- y[ind1]
w.tmp1 <- t(alpha(tmp1)[[1]] * y.s) %*% x.s
w.tmp1

I think this problem is related to the kernel class. When the class is set to "kernel", the discrepancy occurs; when the class is set to "vanillakernel", the result of ksvm using my user-defined kernel is equal to that of ksvm using the "vanilladot" built into kernlab.
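
(This is presumably because the class is what ksvm dispatches on; a quick check, assuming kernlab is attached, shows the built-in object carries exactly that class:)

library(kernlab)
class(vanilladot())   # "vanillakernel" -- the class name my working version uses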

#
### vanilla kernel with class "vanillakernel"
#
kfunction.v.k <- function(){
   k <- function(x, y){crossprod(x, y)}
   class(k) <- "vanillakernel"
   k}
# The only difference between kfunction.k and kfunction.v.k is "class(k)".
l <- 0.1 ; C <- 1/(2*l)

### use kfunction.v.k
tmp <- ksvm(x, factor(y), scaled = FALSE, type = "C-svc", kernel = kfunction.v.k(), C = C)
alpha(tmp)[[1]]
ind <- alphaindex(tmp)[[1]]
x.s <- x[ind, ] ; y.s <- y[ind]
w.class.v.k <- t(alpha(tmp)[[1]] * y.s) %*% x.s
w.class.v.k

I don't understand why the result differs from "vanilladot" when the class is set to "kernel".

Is there an error in my code?


1 Answer

First, it seems like a really good question!

Now to the point. In the source of ksvm we can find where the line is drawn between using a user-defined kernel and the built-in ones:

 if (type(ret) == "spoc-svc") {
            if (!is.null(class.weights)) 
                weightedC <- class.weights[weightlabels] * rep(C, 
                  nclass(ret))
            else weightedC <- rep(C, nclass(ret))
            yd <- sort(y, method = "quick", index.return = TRUE)
            xd <- matrix(x[yd$ix, ], nrow = dim(x)[1])
            count <- 0
            if (ktype == 4) 
                K <- kernelMatrix(kernel, x)
            resv <- .Call("tron_optim", as.double(t(xd)), as.integer(nrow(xd)), 
                as.integer(ncol(xd)), as.double(rep(yd$x - 1, 
                  2)), as.double(K), as.integer(if (sparse) xd@ia else 0), 
                as.integer(if (sparse) xd@ja else 0), as.integer(sparse), 
                as.integer(nclass(ret)), as.integer(count), as.integer(ktype), 
                as.integer(7), as.double(C), as.double(epsilon), 
                as.double(sigma), as.integer(degree), as.double(offset), 
                as.double(C), as.double(2), as.integer(0), as.double(0), 
                as.integer(0), as.double(weightedC), as.double(cache), 
                as.double(tol), as.integer(10), as.integer(shrinking), 
                PACKAGE = "kernlab")
            reind <- sort(yd$ix, method = "quick", index.return = TRUE)$ix
            alpha(ret) <- t(matrix(resv[-(nclass(ret) * nrow(xd) + 
                1)], nclass(ret)))[reind, , drop = FALSE]
            coef(ret) <- lapply(1:nclass(ret), function(x) alpha(ret)[, 
                x][alpha(ret)[, x] != 0])
            names(coef(ret)) <- lev(ret)
            alphaindex(ret) <- lapply(sort(unique(y)), function(x)
                which(alpha(ret)[, x] != 0))
            xmatrix(ret) <- x
            obj(ret) <- resv[(nclass(ret) * nrow(xd) + 1)]
            names(alphaindex(ret)) <- lev(ret)
            svindex <- which(rowSums(alpha(ret) != 0) != 0)
            b(ret) <- 0
            param(ret)$C <- C
        }

The important part is the kernel-type dispatch: if we provide ksvm with our own kernel object, then ktype = 4 (while for vanillakernel, ktype = 0), and this changes two things:

  • in the case of a user-defined kernel, the full kernel matrix is precomputed up front instead of the kernel function being evaluated inside the optimizer
  • the tron_optim routine is run with the information about the kernel type (a short sketch of this dispatch follows below)
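
A minimal sketch of what the ktype = 4 branch does differently (assuming only kernlab and its public kernelMatrix helper; the toy matrix is made up for illustration):

library(kernlab)
x <- matrix(rnorm(20), ncol = 2)   # 10 points in 2 dimensions

# for a user-defined kernel (ktype = 4) ksvm precomputes the full n x n kernel
# matrix up front -- the "K <- kernelMatrix(kernel, x)" line in the source above
K <- kernelMatrix(vanilladot(), x)
dim(K)   # 10 x 10: one entry per pair of training points

# for the built-in "vanilladot" (ktype = 0) this matrix is never formed;
# the specialized linear solver works directly on the data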

Now, in svm.cpp we can find the tron routines, and in tron_run (called from tron_optim) we can see that the LINEAR kernel gets a separate optimization routine:

if (param->kernel_type == LINEAR)
    {
     /* lots of code here */
     while (Cpj < Cp)
       {
       totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w, 
                            Cpj, Cnj, param->eps, sii, param->shrinking, 
                            param->qpsize);
     /* lots of code here */
       }
     totaliter += s.Solve(l, prob->x, minus_ones, y, alpha, w, Cp, Cn,
                          param->eps, sii, param->shrinking, param->qpsize);
     delete[] w;
    }
else
{    
    Solver_B s;
    s.Solve(l, BSVC_Q(*prob,*param,y), minus_ones, y, alpha, Cp, Cn, 
    param->eps, sii, param->shrinking, param->qpsize);
}

As you can see, the linear case is treated in a more complex, more detailed way: there is an inner optimization loop that calls the solver many times. Analyzing exactly what optimization is performed there would require a really deep dive, but at this point your question can be answered as follows:

  • There is no error in your code.
  • kernlab's SVM has a separate routine for training with the linear kernel, chosen based on the class of the kernel object passed in; changing "kernel" to "vanillakernel" made ksvm think it was actually working with the built-in vanillakernel, and so it ran this separate optimization routine.
  • This does not actually appear to be a bug, since the linear SVM really is very different from the kernelized version in terms of efficient optimization techniques. The number of heuristics and numerical issues that have to be handled is large, so some approximations are required, and these can lead to different results. For a rich feature space (like the one induced by the RBF kernel) this should not matter much, but for simple kernels like the linear one these simplifications can lead to noticeable differences in the output; the short reproduction below illustrates this.
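
For completeness, a minimal end-to-end reproduction (a sketch that assumes only that kernlab is installed; the toy data is made up for illustration):

library(kernlab)
set.seed(1)

# linearly separable toy data
x <- matrix(rnorm(100), ncol = 2)
y <- ifelse(x[, 1] + x[, 2] > 0, 1, -1)

# user-defined linear kernel of class "kernel" -> generic solver (ktype = 4)
k <- function(x, y) crossprod(x, y)
class(k) <- "kernel"
m.generic <- ksvm(x, factor(y), scaled = FALSE, type = "C-svc", kernel = k, C = 5)

# built-in "vanilladot" -> specialized linear solver (ktype = 0)
m.linear <- ksvm(x, factor(y), scaled = FALSE, type = "C-svc",
                 kernel = "vanilladot", C = 5)

# the objective values (and hence the alphas) can differ slightly because
# two different optimization routines were used
obj(m.generic)
obj(m.linear)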