A geometric margin is simply the Euclidean distance from a given data point x to the hyperplane.
What is an intuitive explanation of what a functional margin is?
Note: I realize that a similar question has been asked here: How to understand the functional margin in SVM?
However, the answer given there explains the equation, but not its meaning (as I understood it).
"A geometric margin is simply the euclidean distance between a certain x (data point) to the hyperlane. "
I don't think that is a proper definition for the geometric margin, and I believe that is what is confusing you. The geometric margin is just a scaled version of the functional margin.
You can think of the functional margin simply as a test that tells you whether a particular point is properly classified or not. The geometric margin is the functional margin divided by ||w||.
If you check the formula for the functional margin,

$$\hat{\gamma}_i = y_i\,(w^T x_i + b),$$

you can notice that, independently of the label, the result is positive for properly classified points (e.g. sign(1·5) = 1 and sign(−1·−5) = 1) and negative otherwise. If you divide that by ||w||, you get the geometric margin.
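A minimal NumPy sketch of these definitions (the weight vector, bias, and point below are made up purely for illustration):

```python
import numpy as np

# Hypothetical hyperplane parameters (w, b) -- chosen only for illustration.
w = np.array([3.0, 4.0])   # ||w|| = 5
b = -2.0

def functional_margin(x, y, w, b):
    """Functional margin: y * (w . x + b). Positive iff the point is correctly classified."""
    return y * (np.dot(w, x) + b)

def geometric_margin(x, y, w, b):
    """Geometric margin: the functional margin divided by ||w|| (a signed Euclidean distance)."""
    return functional_margin(x, y, w, b) / np.linalg.norm(w)

x = np.array([2.0, 1.0])   # an example point with label y = +1
y = 1

print(functional_margin(x, y, w, b))  # 8.0 -> positive, so correctly classified
print(geometric_margin(x, y, w, b))   # 1.6 -> its distance from the hyperplane
```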
Why does the geometric margin exist?
Well, to maximize the margin you need more than just the sign; you need a notion of magnitude. The functional margin gives you a number, but without a reference you can't tell whether the point is actually far away from or close to the decision plane. The geometric margin tells you not only whether the point is properly classified, but also the magnitude of that distance once the functional margin is normalized by ||w||.
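Written out in the notation above, the geometric margin of a training example $(x_i, y_i)$ is

$$\gamma_i = \frac{\hat{\gamma}_i}{\lVert w\rVert} = y_i\left(\frac{w^T x_i + b}{\lVert w\rVert}\right),$$

i.e. the signed Euclidean distance from $x_i$ to the decision plane.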
The functional margin represents the correctness and confidence of the prediction, provided that the magnitude of the vector w orthogonal to the hyperplane is held constant.
By correctness: the functional margin should always be positive, since if wx + b is negative then y is -1, and if wx + b is positive then y is 1. If the functional margin is negative, then the sample has been assigned to the wrong group.
By confidence: the functional margin can change for two reasons: 1) the sample (x_i, y_i) changes, or 2) the vector w orthogonal to the hyperplane is rescaled (by scaling w and b together). As long as w is held fixed, no matter how large its magnitude is, the functional margin tells us how confidently the point is placed on the correct side: the larger the functional margin, the more confident we can be that the point is classified correctly.
But if the functional margin is defined without holding the magnitude of w fixed, we need the geometric margin mentioned above: the functional margin is normalized by the magnitude of w to give the geometric margin of a training example. Under this normalization, the value of the geometric margin depends only on the sample, not on the scaling of the vector w orthogonal to the hyperplane.
The geometric margin is invariant to rescaling of the parameters, which is the only difference between the geometric margin and the functional margin.
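Here is a small sketch of that invariance, with an arbitrary point, label, hyperplane, and scaling factor chosen just for illustration:

```python
import numpy as np

# Hypothetical example: same point and label, hyperplane parameters rescaled by c.
w, b = np.array([3.0, 4.0]), -2.0
x, y = np.array([2.0, 1.0]), 1
c = 10.0

for wi, bi in [(w, b), (c * w, c * b)]:
    functional = y * (np.dot(wi, x) + bi)          # grows with c
    geometric = functional / np.linalg.norm(wi)    # invariant to c
    print(functional, geometric)
# Prints: 8.0 1.6  then  80.0 1.6 -- the geometric margin does not change.
```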
EDIT:
The introduction of the functional margin plays two roles: 1) it motivates the maximization of the geometric margin, and 2) it lets us transform the geometric margin maximization problem into a minimization of the magnitude of the vector orthogonal to the hyperplane.
Since rescaling the parameters w and b changes nothing meaningful about the classifier, and the functional margin scales in exactly the same way as the parameters, then just as we could fix ||w|| = 1 (and maximize the geometric margin directly), we can instead rescale the parameters so that the functional margin is 1 and then minimize ||w||.
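Sketched in the usual notation (the standard derivation, not anything specific to this answer), fixing $\lVert w\rVert = 1$ while maximizing the geometric margin is equivalent to fixing the functional margin to 1 while minimizing $\lVert w\rVert$:

$$\max_{w,\,b,\,\gamma}\ \gamma \quad \text{s.t.}\ \ y_i(w^T x_i + b) \ge \gamma,\ \lVert w\rVert = 1\ \ \forall i \qquad\Longleftrightarrow\qquad \min_{w,\,b}\ \tfrac{1}{2}\lVert w\rVert^2 \quad \text{s.t.}\ \ y_i(w^T x_i + b) \ge 1\ \ \forall i.$$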