 

Image augmentation makes performance worse [closed]

I am working on an image segmentation project and have been trying to use image augmentation to increase the training set size. At first I just tried horizontal flips to double the number of training images, but I found the performance was much worse than without augmentation. Are there any insights that can be shared? Thanks.

asked Feb 09 '17 by user288609


2 Answers

So basically you first need to answer one important question: is a flipped image a valid image in your domain?

  1. If not - then it may harm your training process simply because you are providing the network invalid input, which may cause it to learn spurious patterns in your data. It's not so rare that flips harm training - e.g. in logo recognition it's important not to change the orientation of your data in order to learn the logos correctly.
  2. If yes - then there might be loads of different reasons why your model started to behave worse. One of them might be that it simply has too small a capacity and is not able to learn all the patterns in your data. Second - that you don't have enough examples, and when you added the flipped images it turned out that the model had in fact memorized loads of your training cases. Another possibility is that you trained for too short a time, and setting the number of iterations to a bigger value might be a good idea.

    One thing is sure - if your flipped data is valid, your model is simply not generalizing well.
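As a minimal numpy sketch of point 1 (the function name, the `(N, H, W)` array layout, and the `flips_are_valid` flag are illustrative assumptions, not from the question): for segmentation specifically, the mask must be flipped together with the image - flipping only the image silently corrupts the labels and can easily make performance worse.

```python
import numpy as np

def augment_with_flips(images, masks, flips_are_valid):
    """Double a segmentation training set with horizontal flips,
    but only when a mirrored image is still a valid example in the
    domain (natural scenes: usually yes; logos, text: usually no).

    images, masks: arrays of shape (N, H, W).
    """
    if not flips_are_valid:
        # Don't feed the network invalid inputs.
        return images, masks
    flipped_imgs = images[:, :, ::-1]  # mirror along the width axis
    flipped_msks = masks[:, :, ::-1]   # masks must be flipped too!
    return (np.concatenate([images, flipped_imgs]),
            np.concatenate([masks, flipped_msks]))
```

A quick sanity check is to visualize a flipped image next to its flipped mask and confirm the object boundaries still line up.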

answered Nov 18 '22 by Marcin Możejko


Image augmentation is a great way to stretch your dataset, but as you've seen, it's not a magic bullet. Image augmentation works (to a degree) by varying image features that are irrelevant to the model's underlying mapping function (e.g. image brightness shouldn't correlate with the presence of a dog), while still leaving the objects in the image recognizable.

I think the easiest improvement you could make would be to vary your augmentation techniques. Instead of just flipping images horizontally, try zooming, cropping, rotating, stretching, adjusting brightness and contrast, adding noise, etc. This will vary your original images more than a single mode of augmentation can. This blog I wrote for work goes through different types of augmentation and what they do, and this library is how we prefer to implement image augmentation.

(Example images: shear, noise, and color-space augmentations.)

You always run the risk of overfitting your model to your training dataset by relying too much on augmentation to increase its size, but varying your augmentation techniques will help you avoid overfitting as much. If you have the resources, nothing beats fresh new data, and if you want to get super fancy, you can look into generative adversarial networks (GANs), which can essentially create new data from scratch.

answered Nov 18 '22 by B Cohen