
Unbalanced data and weighted cross entropy

I'm trying to train a network with unbalanced data. I have A (198 samples), B (436 samples), C (710 samples), D (272 samples). I have read about weighted_cross_entropy_with_logits, but all the examples I found are for binary classification, so I'm not very confident about how to set those weights.

Total samples: 1616

A_weight: 198/1616 = 0.12?

The idea behind this, if I understood correctly, is to penalize errors on the majority class and reward hits on the minority class more heavily, right?

My piece of code:

weights = tf.constant([0.12, 0.26, 0.43, 0.17])
cost = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=pred, targets=y, pos_weight=weights))

I have read this one and other examples with binary classification, but it is still not very clear to me.

Thanks in advance.

asked Jun 15 '17 by Sergiodiaz53



2 Answers

Note that weighted_cross_entropy_with_logits is the weighted variant of sigmoid_cross_entropy_with_logits. Sigmoid cross entropy is typically used for binary classification. Yes, it can handle multiple labels, but sigmoid cross entropy basically makes a (binary) decision on each of them -- for example, for a face recognition net, those (not mutually exclusive) labels could be "Does the subject wear glasses?", "Is the subject female?", etc.
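
For concreteness, here is a minimal sketch of that multi-label use, in the same TF 1.x style as the rest of this thread. The shapes, data and pos_weight values are made up for illustration and are not the setup from the question; each of the three independent binary labels gets its own pos_weight to up-weight its rarer positive class.

import tensorflow as tf
import numpy as np

# 4 samples, 3 independent (non-exclusive) binary labels -- hypothetical data
logits = tf.constant(np.random.randn(4, 3), dtype=tf.float32)
targets = tf.constant(np.random.randint(0, 2, (4, 3)), dtype=tf.float32)

# one pos_weight per label; values > 1 up-weight the positive class of that label
pos_weight = tf.constant([1.0, 3.0, 5.0])

# element-wise weighted sigmoid cross entropy, then averaged over samples and labels
losses = tf.nn.weighted_cross_entropy_with_logits(targets=targets, logits=logits, pos_weight=pos_weight)
loss = tf.reduce_mean(losses)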

In binary classification(s), each output channel corresponds to a binary (soft) decision. Therefore, the weighting needs to happen within the computation of the loss. This is what weighted_cross_entropy_with_logits does, by weighting one term of the cross-entropy over the other.

In mutually exclusive multi-class classification, we use softmax_cross_entropy_with_logits, which behaves differently: each output channel corresponds to the score of a class candidate. The decision comes after, by comparing the respective outputs of each channel.

Weighting the scores before that final decision is therefore a simple matter of modifying them before the comparison, typically by multiplication with weights. For example, for a ternary classification task,

# your class weights
class_weights = tf.constant([[1.0, 2.0, 3.0]])

# deduce weights for batch samples based on their true label
weights = tf.reduce_sum(class_weights * onehot_labels, axis=1)

# compute your (unweighted) softmax cross entropy loss
unweighted_losses = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits)

# apply the weights, relying on broadcasting of the multiplication
weighted_losses = unweighted_losses * weights

# reduce the result to get your final loss
loss = tf.reduce_mean(weighted_losses)

You could also rely on tf.losses.softmax_cross_entropy to handle the last three steps.
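
For example, a sketch reusing the class_weights, onehot_labels and logits from the snippet above; with the default reduction, the result should match the manual reduce_mean as long as all per-sample weights are non-zero.

# per-sample weights from the true labels, as before
weights = tf.reduce_sum(class_weights * onehot_labels, axis=1)

# tf.losses.softmax_cross_entropy computes the loss, applies the weights and reduces in one call
loss = tf.losses.softmax_cross_entropy(onehot_labels, logits, weights=weights)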

In your case, where you need to tackle data imbalance, the class weights could indeed be inversely proportional to their frequency in your train data. Normalizing them so that they sum up to one or to the number of classes also makes sense.
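
For instance, here is one way to derive such weights from the sample counts given in the question (198, 436, 710, 272). This is just a sketch, the exact normalization is up to you, and it replaces the ternary class_weights of the example above with the 4-class case from the question.

import tensorflow as tf
import numpy as np

counts = np.array([198., 436., 710., 272.])
inv_freq = counts.sum() / counts                            # inversely proportional to class frequency
class_weights_np = inv_freq / inv_freq.sum() * len(counts)  # normalized to sum to the number of classes
class_weights = tf.constant([class_weights_np], dtype=tf.float32)  # shape (1, 4), as in the snippet above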

Note that in the above, we penalized the loss based on the true label of the samples. We could also have penalized the loss based on the estimated labels by simply defining

weights = class_weights 

and the rest of the code need not change thanks to broadcasting magic.

In the general case, you would want weights that depend on the kind of error you make. In other words, for each pair of labels X and Y, you could choose how to penalize choosing label X when the true label is Y. You end up with a whole prior weight matrix, which results in the weights above being a full (num_samples, num_classes) tensor. This goes a bit beyond what you want, but it might be useful to know nonetheless that only your definition of the weight tensor needs to change in the code above.
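
A sketch of how such a weight tensor could be built, assuming the onehot_labels from the ternary example above and a purely hypothetical penalty matrix (rows indexed by the true class, columns by the predicted class); this only shows the construction of the (num_samples, num_classes) weights, the rest of the code is as before.

# penalty_matrix[i, j] = cost of predicting class j when the true class is i (made-up values)
penalty_matrix = tf.constant([[1.0, 2.0, 3.0],
                              [2.0, 1.0, 2.0],
                              [3.0, 2.0, 1.0]])

# row k of onehot_labels selects row k of the penalty matrix,
# giving one weight per (sample, candidate class) pair
weights = tf.matmul(onehot_labels, penalty_matrix)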

answered by P-Gn


See this answer for an alternative solution that works with sparse_softmax_cross_entropy:

import tensorflow as tf
import numpy as np

np.random.seed(123)
sess = tf.InteractiveSession()

# let's say we have the logits and labels of a batch of size 6 with 5 classes
logits = tf.constant(np.random.randint(0, 10, 30).reshape(6, 5), dtype=tf.float32)
labels = tf.constant(np.random.randint(0, 5, 6), dtype=tf.int32)

# specify some class weightings
class_weights = tf.constant([0.3, 0.1, 0.2, 0.3, 0.1])

# specify the weights for each sample in the batch (without having to compute the onehot label matrix)
weights = tf.gather(class_weights, labels)

# compute the loss
tf.losses.sparse_softmax_cross_entropy(labels, logits, weights).eval()
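
Note that the tf.gather step is just the sparse-label equivalent of the one-hot weighting in the first answer; both produce one weight per sample according to its true class. A quick sketch using the same class_weights and labels tensors:

onehot_labels = tf.one_hot(labels, depth=5)
weights_dense = tf.reduce_sum(class_weights * onehot_labels, axis=1)
# weights_dense evaluates to the same values as tf.gather(class_weights, labels)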
answered by DankMasterDan