In the TensorFlow Object Detection API sample configuration files, all of the Faster R-CNN configuration files disable the regularization term as follows:
regularizer {
  l2_regularizer {
    weight: 0.0
  }
}
I feel this is not reasonable and very likely to lead to overfitting. Is there any explanation for such a setting? Thank you.
I have summarized below the steps followed by the Faster R-CNN algorithm to detect objects in an image:
1. Take an input image and pass it to the ConvNet, which returns feature maps for the image.
2. Apply a Region Proposal Network (RPN) to these feature maps to get object proposals.
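A minimal sketch of those two steps in Keras is below. It assumes a ResNet50 backbone, a 600x600 input, and k = 9 anchors per feature-map location; proposal decoding, non-maximum suppression, and the second-stage detection head are omitted.

import tensorflow as tf
from tensorflow.keras import layers

k = 9  # anchors per feature-map location (an assumption for illustration)

# Step 1: the ConvNet backbone turns the input image into a feature map.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights=None, input_shape=(600, 600, 3))

# Step 2: the RPN head slides over the feature map and, for each of the
# k anchors at every location, emits one object-vs-background logit and
# four box-regression offsets; these become the object proposals.
rpn_conv = layers.Conv2D(512, 3, padding="same", activation="relu")
rpn_objectness = layers.Conv2D(k, 1)
rpn_box_deltas = layers.Conv2D(4 * k, 1)

image = tf.random.uniform((1, 600, 600, 3))  # dummy input image
features = backbone(image)                   # e.g. shape (1, 19, 19, 2048)
shared = rpn_conv(features)
scores, deltas = rpn_objectness(shared), rpn_box_deltas(shared)
print(features.shape, scores.shape, deltas.shape)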
Compared to recent two-stage methods, RetinaNet achieves a 2.3 point gap above the top-performing Faster R-CNN model based on Inception-ResNet-v2-TDM.
Results: The mean average precision (mAP) of Faster R-CNN reached 87.69%, but YOLO v3 had a significant advantage in detection speed: its frames per second (FPS) was more than eight times that of Faster R-CNN. This means that YOLO v3 can operate in real time with a high mAP of 80.17%.
The image detection accuracy with the SSD algorithm was 76.61%, while with the Faster R-CNN algorithm it was 99.52%, according to the evaluation dataset.
"Strong regularization such as maxout or dropout is applied to obtain the best results on this dataset. In this paper, we use no maxout/dropout and just simply impose regularization via deep and thin architectures by design, without distracting from the focus on the difficulties of optimization. But combining with stronger regularization may improve results, which we will study in the future." [He et. al, Deep Residual Learning for Image Recognition]
I think the regularization the authors refer to, which is applied directly within the ResNet architecture, comes from the batch norm layers sandwiched between every conv layer and its activation. While the authors don't say anything about the use of L2 regularization, their statement about maxout and dropout ought to apply here as well: batch norm layers have the effect of regularizing the network without imposing an explicit penalty, so L2 regularization isn't strictly necessary.
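As a rough illustration of that ordering, here is a sketch of a single conv, batch norm, activation block in Keras; the filter count, kernel size, and input shape are arbitrary placeholders. Note that no kernel_regularizer is set, i.e. there is no explicit L2 penalty.

import tensorflow as tf
from tensorflow.keras import layers

# One conv -> batch norm -> activation block, as in ResNet-style networks.
# No kernel_regularizer is attached, so there is no explicit L2 penalty;
# the BatchNormalization layer provides the implicit regularization.
def conv_bn_relu(x, filters=64, kernel_size=3):
    x = layers.Conv2D(filters, kernel_size, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(224, 224, 3))
model = tf.keras.Model(inputs, conv_bn_relu(inputs))
model.summary()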
That said, the option is there in case you want to try out stronger regularization.
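In the sample config that would mean changing the l2_regularizer weight from 0.0 to a nonzero value. At the Keras level, the equivalent penalty on a conv layer looks roughly like the sketch below; the 4e-4 coefficient is only an illustrative assumption, not a recommended setting.

import tensorflow as tf
from tensorflow.keras import layers

# A conv layer with an explicit L2 (weight decay) penalty attached.
# The 4e-4 coefficient is a placeholder, not a tuned recommendation.
conv = layers.Conv2D(
    64, 3, padding="same",
    kernel_regularizer=tf.keras.regularizers.l2(4e-4),
)

Keras collects such per-layer penalties in model.losses and adds them to the training loss during fitting.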