 

Neural Network - Working with an imbalanced dataset

I am working on a classification problem with two labels: 0 and 1. My training dataset is very imbalanced (and the test set will be too, given the nature of my problem).

The class ratio is 1000:4, with label '0' appearing 250 times more often than label '1'. However, I have a lot of training samples: around 23 million. So I should have around 100,000 samples with label '1'.

Given the large number of training samples, I ruled out SVM. I also read about SMOTE for Random Forests. However, I was wondering whether a neural network could handle this kind of imbalance efficiently on such a large dataset?

Also, as I am using TensorFlow to design the model, which characteristics should/could I tune to handle this imbalanced situation?

Thanks for your help! Paul


Update:

Since the answers are numerous and quite similar, I will reply to all of them here with a common response.

1) Over the weekend I tried the first option, increasing the cost for the positive label. With a less extreme imbalance (around 1:10, on another dataset), this seems to help a bit, or at least to shift the precision/recall balance. In my situation, however, the result seems very sensitive to the alpha value. With alpha = 250, which matches the imbalance ratio of the dataset, I get a precision of 0.006 and a recall of 0.83, but the model predicts far more 1s than it should: around 50% of predictions are label '1'. With alpha = 100, the model predicts only '0'. I guess I'll have to do some tuning on this alpha parameter. I'll also take a look at tf.nn.weighted_cross_entropy_with_logits from TF, as I implemented the weighting manually for now (a minimal sketch follows this list).

2) I will try to rebalance the dataset, but I am afraid of losing a lot of information doing that: I have millions of samples but only ~100k positive ones.

3) Using a smaller batch size does seem like a good idea. I'll try it!
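For reference, here is a minimal sketch of the class-weighting from point 1 using tf.nn.weighted_cross_entropy_with_logits. It assumes the TensorFlow 2.x eager API (newer than the original question), and the alpha value of 250 is only the starting point discussed above; as noted, it needs tuning.

```python
import tensorflow as tf

alpha = 250.0  # pos_weight: start near the class ratio, then tune

# Toy batch: binary labels and the raw (pre-sigmoid) model outputs.
labels = tf.constant([[0.0], [1.0], [0.0], [0.0]])
logits = tf.constant([[-2.1], [0.3], [-1.7], [0.5]])

# Each positive example's loss term is multiplied by pos_weight,
# so mistakes on the rare class cost alpha times more.
per_example_loss = tf.nn.weighted_cross_entropy_with_logits(
    labels=labels, logits=logits, pos_weight=alpha)
loss = tf.reduce_mean(per_example_loss)
print(float(loss))
```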

asked Jul 29 '16 by Paul Rolin

People also ask

Can neural networks handle imbalanced data?

Because they weight misclassification errors from all classes equally, most standard neural network algorithms are not well suited to datasets with a severely skewed class distribution, and most existing deep learning algorithms do not take the data imbalance problem into consideration.

How does CNN handle imbalanced datasets?

General approaches for tackling imbalanced datasets can be split into two types: re-sampling and re-weighting. Re-sampling techniques such as undersampling and oversampling have been used to tackle the imbalanced dataset problem [31,33,34].

What's the best approach to handle an imbalanced dataset?

A widely adopted and perhaps the most straightforward method for dealing with highly imbalanced datasets is called resampling. It consists of removing samples from the majority class (under-sampling) and/or adding more examples from the minority class (over-sampling).
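As a concrete illustration of under-sampling (not part of the original snippet), here is a minimal NumPy sketch; X and y are toy placeholders for real features and 0/1 labels.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))             # toy features
y = (rng.random(5000) < 0.01).astype(int)  # ~1% positives

pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)

# Keep every positive, draw an equal number of negatives without replacement.
neg_keep = rng.choice(neg_idx, size=len(pos_idx), replace=False)
keep = rng.permutation(np.concatenate([pos_idx, neg_keep]))

X_balanced, y_balanced = X[keep], y[keep]
print(len(keep), y_balanced.mean())  # balanced 50/50 subset
```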


1 Answer

There are two common ways to handle an imbalanced dataset:

  1. Online sampling, as mentioned above: in each iteration, sample a class-balanced batch from the training set (see the sketch after this list).

  2. Re-weight the cost of the two classes, giving the loss on the dominant class a smaller weight. This is used, for example, in the paper Holistically-Nested Edge Detection.
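For illustration, a minimal NumPy sketch of option 1 (not from the original answer): balanced_batch, X, y, and batch_size are hypothetical names, and the toy data only stands in for a real training set.

```python
import numpy as np

def balanced_batch(X, y, batch_size, rng):
    """Draw batch_size // 2 examples from each class (with replacement,
    since the minority class may hold fewer examples than half a batch)."""
    half = batch_size // 2
    pos = rng.choice(np.flatnonzero(y == 1), size=half, replace=True)
    neg = rng.choice(np.flatnonzero(y == 0), size=half, replace=True)
    idx = rng.permutation(np.concatenate([pos, neg]))
    return X[idx], y[idx]

# Illustrative usage with toy data; substitute your own training arrays
# and call this once per training step instead of iterating the raw data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 8))
y = (rng.random(10000) < 0.004).astype(int)  # ~0.4% positives
X_batch, y_batch = balanced_batch(X, y, batch_size=256, rng=rng)
print(y_batch.mean())  # ~0.5 within each batch
```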

answered Oct 15 '22 by ppwwyyxx