SVM for binary data with Hamming distance

I have a standard {-1,+1} machine learning problem. The main difference is that the data points are binary strings, so their proximity is measured by the Hamming distance. Can an SVM be applied in this case? Which SVM library is better suited for this task?

asked Apr 05 '11 by Allllex

3 Answers

If a kernel k is positive definite, then for any pair of examples x and z the determinant of the Gram matrix is non-negative:

|k(x, x) k(x, z)|
|               | = k(x,x)k(z,z) - k(x,z)^2 >= 0
|k(z, x) k(z, z)|

For a distance (the Hamming distance included) the following properties hold for any x, y, z:

1) d(x, z) >= 0 and d(x, z) = 0 <=> x = z
2) symmetry: d(x, z) = d(z, x)
3) triangle inequality: d(x, z) <= d(x, y) + d(y, z)

Taking k to be the Hamming distance, according to 1) we would have:

a) k(x,x) = k(z,z) = 0

But in order to be a positive definite kernel we need:

b) k(x,x)k(z,z) - k(x,z)^2 >= 0

applying a) to b) we have:

-k(x,z)^2 >= 0
k(x,z)^2 <= 0

which, for a real-valued k, forces k(x, z) = 0 for every pair of strings. That contradicts property 1), since the Hamming distance is strictly positive for distinct strings, so the Hamming distance itself is not a valid kernel.
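As a quick numerical check (a minimal Python sketch, not part of the original answer), the 2x2 Gram matrix built directly from Hamming distances indeed has a negative determinant and a negative eigenvalue:

import numpy as np

def hamming(a, b):
    # number of positions at which two equal-length strings differ
    return sum(ca != cb for ca, cb in zip(a, b))

x, z = "aab", "baa"
G = np.array([[hamming(x, x), hamming(x, z)],
              [hamming(z, x), hamming(z, z)]], dtype=float)

print(np.linalg.det(G))       # -4.0, i.e. k(x,x)k(z,z) - k(x,z)^2 < 0
print(np.linalg.eigvalsh(G))  # [-2.  2.], the negative eigenvalue rules out PSD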

Unless I'm missing something, I think it is a valid kernel, because it is an inner product in the following space: K("aab","baa") = [0,1,0,1,1,0] \dot [1,0,0,1,0,1].

This is a nice way to define a feature map for a kernel, but it is not the Hamming distance. The Hamming distance between "aab" and "baa" is 2 (the first and the third characters are different), but

[0,1,0,1,1,0] \dot [1,0,0,1,0,1] = 1.
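To make the distinction concrete, here is a small illustrative sketch (the per-character encoding order may differ from the vectors above, but the dot product is the same): the dot product of the one-hot encodings counts matching positions, the Hamming distance counts mismatching ones, and for equal-length strings the two sum to the string length.

import numpy as np

def one_hot(s, alphabet="ab"):
    # concatenate one indicator vector per character of s
    return np.array([1.0 if c == a else 0.0 for c in s for a in alphabet])

x, z = "aab", "baa"
matches = one_hot(x) @ one_hot(z)                   # 1.0: only the middle position agrees
mismatches = sum(cx != cz for cx, cz in zip(x, z))  # 2: the Hamming distance
print(matches, mismatches, matches + mismatches)    # 1.0 2 3.0 (the string length)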

If the Hamming distance is not positive definite, that doesn't mean it can't be used with an SVM, but you certainly lose the guarantee of solving a convex optimization problem.

answered Oct 10 '22 by user2348478

This is probably best handled by using an SVM library that allows you to supply a custom kernel function (e.g. libSVM, SVMLight, scikit-learn). Then you would have to write a Hamming distance function to compute the distance between two strings and plug it in as the kernel function.

The only problem is, I'm not sure the Hamming distance actually is a kernel, i.e. that it satisfies Mercer's condition. It's obviously symmetric, but I don't know whether it's positive definite.
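For the mechanics of plugging in a custom kernel, here is a minimal sketch assuming scikit-learn's precomputed-kernel interface (libSVM and SVMLight have analogous hooks). It uses the match-count similarity (string length minus Hamming distance), which is the inner-product kernel discussed in the first answer; the exponential Hamming kernel from the last answer would be passed in exactly the same way.

import numpy as np
from sklearn.svm import SVC

def hamming(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

def gram(A, B):
    # Gram matrix of the match-count similarity between two lists of strings
    return np.array([[len(a) - hamming(a, b) for b in B] for a in A], dtype=float)

X_train = ["aab", "baa", "abb", "bbb"]   # toy data, all strings of equal length
y_train = [+1, -1, +1, -1]
X_test  = ["aba", "bba"]

clf = SVC(kernel="precomputed")
clf.fit(gram(X_train, X_train), y_train)   # n_train x n_train kernel matrix
print(clf.predict(gram(X_test, X_train)))  # rows: test strings, columns: training strings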

answered Oct 10 '22 by Stompchicken

This paper proposes a kernel for Hamming distances measured between categorical features. It's simply a matter of replacing the Euclidean distance in the standard exponential kernel with the Hamming distance.

[kernel formula shown as an image in the original answer]

It is also possible to combine Euclidean and Hamming distances into one kernel, which would be good for datasets with a mix of continuous and discrete variables.

[combined-kernel formula shown as an image in the original answer]

The good news is that they also prove that this kernel is indeed positive definite (on page 14).
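As a rough sketch of what that substitution looks like in code (the exact parametrisation in the paper may differ; k(x, z) = exp(-gamma * d_H(x, z)) is assumed here), one can also check positive semi-definiteness numerically on a small sample:

import itertools
import numpy as np

def hamming(a, b):
    return sum(ca != cb for ca, cb in zip(a, b))

def hamming_rbf(A, B, gamma=0.5):
    # exponential kernel with the Hamming distance in place of the Euclidean one
    return np.exp(-gamma * np.array([[hamming(a, b) for b in B] for a in A], dtype=float))

strings = ["".join(s) for s in itertools.product("ab", repeat=4)]  # all 16 length-4 strings
K = hamming_rbf(strings, strings)
print(np.linalg.eigvalsh(K).min())  # smallest eigenvalue is positive here, so K is PSD on this sample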

answered Oct 10 '22 by Sean Saito