I am looking into support vector machines, and I am wondering: what is the difference between the decision boundary and the optimal hyperplane? They both seem to be described as the line drawn to separate the data points.
Hyperplane and decision boundary are equivalent in a low-dimensional space: 'plane' carries the meaning of straight, so it is a line or a plane that separates the data sets.
A plane is a two-dimensional doubly ruled surface spanned by two linearly independent vectors. The generalization of the plane to higher dimensions is called a hyperplane. The angle between two intersecting planes is known as the dihedral angle.
The goal of the SVM algorithm is to find the best line or decision boundary that segregates n-dimensional space into classes, so that new data points can easily be placed in the correct category in the future. This best decision boundary is called a hyperplane.
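For a concrete picture, here is a minimal sketch (assuming scikit-learn's LinearSVC and made-up toy data) that fits a linear SVM on 2-D points and reads off the hyperplane coefficients w and b; the decision boundary is then the set of points x where w · x + b = 0.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy 2-D data: two linearly separable blobs (illustrative values only).
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) + [2, 2],
               rng.randn(20, 2) + [-2, -2]])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC(C=1.0).fit(X, y)

# The learned hyperplane is {x : w . x + b = 0}.
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)

# A point is classified by the sign of w . x + b,
# i.e. by which side of the hyperplane it falls on.
print(np.sign(X @ w + b)[:5])
```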
A decision boundary is the region of a problem space in which the output label of a classifier is ambiguous. If the decision surface is a hyperplane, then the classification problem is linear, and the classes are linearly separable. Decision boundaries are not always clear cut.
The decision boundary for a linear support vector machine is an (affine) hyperplane.
For non-linear kernel support vector machines, the decision boundary is not a hyperplane in the original feature space but a non-linear hypersurface (a surface of dimension n_features - 1) whose shape depends on the type of kernel.
However, the kernel function can be interpreted as inducing a non-linear mapping from the original feature space to some kernel space. In that kernel space, the decision boundary of the SVM is a hyperplane. Here is a video that gives an intuitive description of the relation between the two for the polynomial kernel.
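As a rough illustration of that point (again assuming scikit-learn; the ring-shaped dataset is just an example), an RBF-kernel SVM fit on data that no straight line can separate produces a decision boundary that is curved in the original 2-D space, even though it corresponds to a hyperplane in the space induced by the kernel:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# decision_function returns signed values; the decision boundary is the
# zero level set, which here is a closed curve around the inner ring,
# not a straight line in the original feature space.
grid = np.array([[x1, x2]
                 for x1 in np.linspace(-1.5, 1.5, 5)
                 for x2 in np.linspace(-1.5, 1.5, 5)])
print(clf.decision_function(grid).round(2))
```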
A decision boundary is a hypersurface that partitions the underlying vector space into two sets, one for each class. A general hypersurface in a low-dimensional space can be turned into a hyperplane in a space of much higher dimension.
Hyperplane and decision boundary are equivalent in a low-dimensional space: 'plane' carries the meaning of straight and flat, so it is a line or a plane that separates the data sets. When you apply a non-linear mapping to take your data to a new feature space, the decision boundary is still a hyperplane in that space, but it is no longer a flat plane in the original space.
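One way to see this is to write out such a mapping by hand. The sketch below (an assumption for illustration: the extra feature x1² + x2² stands in for the kernel-induced mapping) lifts the ring data into 3-D, where a plain linear SVM finds a separating hyperplane; projected back to the original 2-D space, that hyperplane corresponds to a circle, not a straight line.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Lift (x1, x2) -> (x1, x2, x1^2 + x2^2): a hand-written non-linear mapping.
Z = np.column_stack([X, (X ** 2).sum(axis=1)])

clf = LinearSVC(C=1.0).fit(Z, y)
print("accuracy in the lifted 3-D space:", clf.score(Z, y))

# In the lifted space the boundary is the flat hyperplane w . z + b = 0;
# back in the original 2-D space it reads x1^2 + x2^2 = const, a circle.
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b)
```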