 

Training dataset with coloured and grayscale images

I am trying to train a CNN model for face gender and age detection. My training set contains facial images, both coloured and grayscale. How do I normalize this dataset? Or how do I handle a dataset with a mixture of grayscale and coloured images?

Asked Jan 28 '23 by AKSHAYAA VAIDYANATHAN


2 Answers

Keep in mind that the network will simply learn the relationship between your labels (gender/age) and your training data, in the form in which they are presented to it.

The optimal choice depends on whether you expect the model to work on grayscale or colored images in the future.

If you want to predict on grayscale images only

You should train on grayscale images only! There are several ways to convert the colored images to grayscale:

  • a simple average of the 3 RGB channels

  • more sophisticated transforms using cylindrical color spaces such as HSV or HSL, where you can use one of the channels as your gray value. Normally, the V channel corresponds better to human perception than the average of RGB: https://en.wikipedia.org/wiki/HSL_and_HSV
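As a rough sketch of the two conversions above (using NumPy, and assuming images are arrays of shape (H, W, 3); the function names are hypothetical):

```python
import numpy as np

def rgb_to_gray_average(img):
    """Simple average of the 3 RGB channels."""
    return img.mean(axis=-1)

def rgb_to_gray_value(img):
    """V channel of HSV, which is just the per-pixel max over the RGB channels."""
    return img.max(axis=-1)

img = np.array([[[10, 20, 90]]], dtype=np.float32)  # one pixel, shape (1, 1, 3)
print(rgb_to_gray_average(img)[0, 0])  # 40.0
print(rgb_to_gray_value(img)[0, 0])    # 90.0
```

For real datasets you would more likely use a library routine (e.g. Pillow's `Image.convert("L")`), which applies a perceptual weighting of the channels rather than a plain average.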

If you need to predict colored images

Obviously, there is no easy way to reconstruct colors from a grayscale image, so you must also use color images during training.

If your model accepts an MxNx3 image as input, then it will also accept the grayscale ones, provided that you replicate the single channel across the 3 RGB channels.
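Replicating a single channel into three is a one-liner with NumPy (a minimal sketch; the array shapes are illustrative):

```python
import numpy as np

gray = np.random.rand(64, 64).astype(np.float32)         # (H, W) grayscale image
rgb_like = np.repeat(gray[..., np.newaxis], 3, axis=-1)  # (H, W, 3), same value in each channel
print(rgb_like.shape)  # (64, 64, 3)
```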

You should carefully evaluate the number of examples you have, and compare it to the usual training set sizes required by the model you want to use. If you have enough color images, just do not use the grayscale cases at all. If you don't have enough examples, make sure both the training and test sets are balanced with respect to the gray/colored cases, otherwise your net may learn to classify grayscale vs colored separately.
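A balanced split can be done by splitting each group separately and then pooling the indices. A minimal sketch with NumPy (the `is_gray` flags and split ratio are made up for illustration):

```python
import numpy as np

# Toy metadata: 1 marks a grayscale example, 0 a colored one (hypothetical setup).
is_gray = np.array([0, 0, 0, 0, 1, 1, 1, 1])
rng = np.random.default_rng(0)

train_idx, test_idx = [], []
for flag in (0, 1):  # split each group separately so both sets stay balanced
    idx = rng.permutation(np.flatnonzero(is_gray == flag))
    cut = int(0.75 * len(idx))
    train_idx.extend(idx[:cut])
    test_idx.extend(idx[cut:])
```

In practice the same effect is achieved with scikit-learn's `train_test_split(..., stratify=is_gray)`.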

Alternatively, you could consider masking: replace the missing color channels with a masking value.
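For example, a grayscale image could carry its values in one channel and a sentinel in the others (a sketch; the mask value and channel layout are arbitrary choices here):

```python
import numpy as np

MASK_VALUE = -1.0  # hypothetical sentinel the network can learn to recognize

def gray_to_masked_rgb(gray):
    """Put the gray values in the first channel, the mask value in the other two."""
    h, w = gray.shape
    out = np.full((h, w, 3), MASK_VALUE, dtype=np.float32)
    out[..., 0] = gray
    return out

masked = gray_to_masked_rgb(np.ones((2, 2), dtype=np.float32))
print(masked[0, 0])  # [ 1. -1. -1.]
```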

A further alternative you could consider: use a pre-trained CNN for feature extraction (e.g. the VGG family, widely available online), and then fine-tune the last layers.

To me it seems that age and gender estimation would not be affected much by the presence or absence of color, and reducing the problem to grayscale images only might help convergence, since there will be far fewer parameters to estimate. You should probably rather consider normalizing your images in terms of pose, orientation, etc.

Answered May 10 '23 by 00__00__00


To train a network you have to ensure the same shape for all training images, so convert everything to grayscale. To normalize, you can subtract the mean of the training set from each image; apply the same training-set mean to the validation and test images. For a detailed procedure, see this article: https://becominghuman.ai/image-data-pre-processing-for-neural-networks-498289068258
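The mean-subtraction step might look like this (a sketch with NumPy; the array shapes are placeholders for the real datasets):

```python
import numpy as np

# Toy stand-ins for the real grayscale datasets (hypothetical shapes).
train = np.random.rand(100, 48, 48).astype(np.float32)
val = np.random.rand(20, 48, 48).astype(np.float32)

mean_image = train.mean(axis=0)   # per-pixel mean over the TRAINING set only
train_norm = train - mean_image
val_norm = val - mean_image       # reuse the training mean for val/test, never recompute it
```

Reusing the training-set mean on validation/test data matters: computing a separate mean on those sets would leak information and make the inputs inconsistent at inference time.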

Answered May 10 '23 by tshree