I've been researching AdaBoost and GentleBoost classifiers, but I can't find a clear answer to this question:
I've been told that AdaBoost is good for things with soft edges, like facial recognition, while GentleBoost is good for things with harder and more symmetrical features and edges, like vehicles. Is this true? Is there any evidence to back up this claim?
GentleBoost is a variant of the AdaBoost classifier, as far as I remember.
For instance, AdaBoost is also likely to be able to detect the kinds of hard objects you mention for GentleBoost; I have tested AdaBoost on objects like cans and bananas, and it works there too.
While I have never worked with GentleBoost myself, according to the papers, the computation for objects with few features or "hard" edges, as you call them, like bananas and cans, is likely to be considerably faster.
You can read more about this in the Wikipedia article on AdaBoost; GentleBoost only gets a short section there, but it should more or less clear things up.
Mathematically speaking, the key difference is the loss function being used and the resulting per-round update.
For GentleBoost, the update at each round is a weighted regression fit, which estimates

fm(x) = Pw(y=1 | x) - Pw(y=-1 | x)

(bounded between -1 and 1), while for (real) AdaBoost the update is the log-odds

fm(x) = (1/2) log( Pw(y=1 | x) / Pw(y=-1 | x) ),

which can become very large when one of the weighted class probabilities approaches zero.
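To make the difference concrete, here is a minimal sketch of both update rules with decision stumps on a toy 1-D dataset. This is my own illustrative code, not from any library: the function names, the dataset, and the round count are all assumptions, and the AdaBoost variant shown is the classic discrete one rather than the real (log-odds) form, since it is the shortest to write.

```python
# Illustrative comparison of AdaBoost vs GentleBoost updates (toy code,
# not a library implementation).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=200)
y = np.where(X > 0, 1, -1)            # labels in {-1, +1}
y[rng.random(200) < 0.05] *= -1       # a little label noise

def best_sign_stump(X, y, w):
    """Best threshold/sign stump under weights w (discrete AdaBoost)."""
    best = (np.inf, 0.0, 1)
    for t in np.unique(X):
        for s in (1, -1):
            pred = s * np.sign(X - t + 1e-12)
            err = np.sum(w * (pred != y))
            if err < best[0]:
                best = (err, t, s)
    return best

def adaboost(X, y, rounds=20):
    n = len(X); w = np.full(n, 1.0 / n); F = np.zeros(n)
    for _ in range(rounds):
        err, t, s = best_sign_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # can grow without bound
        pred = s * np.sign(X - t + 1e-12)
        F += alpha * pred
        w *= np.exp(-alpha * y * pred); w /= w.sum()
    return np.sign(F)

def gentleboost(X, y, rounds=20):
    n = len(X); w = np.full(n, 1.0 / n); F = np.zeros(n)
    for _ in range(rounds):
        # Regression stump fit by weighted least squares: each side of the
        # split predicts the weighted mean of y, i.e. an estimate of
        # Pw(y=1|x) - Pw(y=-1|x), which is always bounded in [-1, 1].
        best = (np.inf, None)
        for t in np.unique(X):
            left = X <= t; right = ~left
            fl = np.average(y[left], weights=w[left]) if left.any() else 0.0
            fr = np.average(y[right], weights=w[right]) if right.any() else 0.0
            pred = np.where(left, fl, fr)
            sse = np.sum(w * (y - pred) ** 2)
            if sse < best[0]:
                best = (sse, pred)
        pred = best[1]
        F += pred                                # bounded, "gentle" step
        w *= np.exp(-y * pred); w /= w.sum()
    return np.sign(F)

acc_ada = np.mean(adaboost(X, y) == y)
acc_gentle = np.mean(gentleboost(X, y) == y)
print(f"AdaBoost train accuracy:    {acc_ada:.2f}")
print(f"GentleBoost train accuracy: {acc_gentle:.2f}")
```

The bounded step in `gentleboost` is exactly why it is often described as more robust: a noisy point can never receive an unboundedly large weight multiplier in one round, whereas AdaBoost's alpha can blow up when the weighted error gets close to 0 or 1.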
If I am not mistaken, GentleBoost should be less sensitive to noisy data, and possibly faster, than AdaBoost (where "faster" is an assumption based on the math; I have not benchmarked it). In terms of accuracy, I have never compared them myself, so I can't be sure.
Hope that helped you somehow(: