How to use F-score as error function to train neural networks?

I am pretty new to neural networks. I am training a network in TensorFlow, but the number of positive examples is much smaller than the number of negative examples in my dataset (it is a medical dataset). I know that the F-score, calculated from precision and recall, is a good measure of how well a model is trained in this situation. I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculations (if I am not wrong). How do I use the F-score as an error function? Is there a TensorFlow function for that, or do I have to create a new one?

Thanks in advance.

asked Nov 17 '18 by Arindam


People also ask

Can I use the F1-score as a loss function?

The problem with the F1-score is that it is not differentiable, so we cannot use it directly as a loss function to compute gradients and update the weights when training the model. The F1-score needs binary predictions (0/1) to be computed.
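A tiny PyTorch illustration of why thresholded predictions break gradient flow (the numbers here are arbitrary):

import torch

logits = torch.tensor([0.2, -1.3, 0.8], requires_grad=True)
probs = torch.sigmoid(logits)

hard = (probs > 0.5).float()   # hard 0/1 predictions: the comparison step is not differentiable
soft = probs                   # probabilities keep the computation graph intact

print(hard.requires_grad)      # False -> an F1 built from hard counts yields no gradients
print(soft.requires_grad)      # True  -> a "soft" F1 built from probabilities does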

How can the F1 score be used as a metric in Keras?

You will get a training and validation F1 score after each epoch. By default, the F1 score is not part of Keras's built-in metrics, so we can't just write f1-score in metrics when compiling the model and get results. However, Keras provides other built-in evaluation metrics such as accuracy, categorical accuracy, etc.
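One common workaround (not an official Keras API) is to register a custom metric function; note that this computes a batch-wise F1, which only approximates the F1 over the whole epoch:

from tensorflow.keras import backend as K

def f1_metric(y_true, y_pred):
    # Round probabilities to hard 0/1 predictions (fine for a metric, not for a loss)
    y_pred = K.round(K.clip(y_pred, 0, 1))
    tp = K.sum(y_true * y_pred)
    fp = K.sum((1 - y_true) * y_pred)
    fn = K.sum(y_true * (1 - y_pred))
    precision = tp / (tp + fp + K.epsilon())
    recall = tp / (tp + fn + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

# model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[f1_metric])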

What is the role of the error function in training a neural network?

The simplest and most commonly used error function in neural networks for regression is the mean squared error (MSE).


2 Answers

It appears that approaches for optimising directly for these types of metrics have been devised and used successfully, improving scores and/or training times:

https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/77289

https://www.kaggle.com/c/human-protein-atlas-image-classification/discussion/70328

https://www.kaggle.com/rejpalcz/best-loss-function-for-f1-score-metric

One such method involves using sums of probabilities, in place of counts, for the sets of true positives, false positives, and false negatives. For example, an F-beta loss (the generalisation of F1) can be calculated with PyTorch as follows:

def forward(self, y_logits, y_true):
    y_pred = self.sigmoid(y_logits)
    # "Soft" counts: sums of predicted probabilities instead of hard 0/1 counts
    TP = (y_pred * y_true).sum(dim=1)
    FP = (y_pred * (1 - y_true)).sum(dim=1)  # predicted positive, actually negative
    FN = ((1 - y_pred) * y_true).sum(dim=1)  # actually positive, predicted negative
    fbeta = (1 + self.beta**2) * TP / ((1 + self.beta**2) * TP + (self.beta**2) * FN + FP + self.epsilon)
    fbeta = fbeta.clamp(min=self.epsilon, max=1 - self.epsilon)
    return 1 - fbeta.mean()
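The method above assumes it lives inside a module where self.beta, self.epsilon and self.sigmoid have already been defined. A minimal self-contained sketch of such a module (the class name FBetaLoss and the default values are illustrative, not taken from the linked kernel) could look like this:

import torch
import torch.nn as nn

class FBetaLoss(nn.Module):
    def __init__(self, beta=1.0, epsilon=1e-7):
        super().__init__()
        self.beta = beta            # beta = 1 recovers the F1 loss
        self.epsilon = epsilon      # small constant to avoid division by zero
        self.sigmoid = nn.Sigmoid()

    def forward(self, y_logits, y_true):
        # Identical to the forward pass above
        y_pred = self.sigmoid(y_logits)
        TP = (y_pred * y_true).sum(dim=1)
        FP = (y_pred * (1 - y_true)).sum(dim=1)
        FN = ((1 - y_pred) * y_true).sum(dim=1)
        fbeta = (1 + self.beta**2) * TP / ((1 + self.beta**2) * TP + (self.beta**2) * FN + FP + self.epsilon)
        fbeta = fbeta.clamp(min=self.epsilon, max=1 - self.epsilon)
        return 1 - fbeta.mean()

# Usage sketch (y must be a float tensor of 0/1 labels, shape [batch, classes]):
# criterion = FBetaLoss(beta=1.0)
# loss = criterion(model(x), y)
# loss.backward()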

An alternative method is described in this paper:

https://arxiv.org/abs/1608.04802

The approach taken optimises a lower bound on the statistic. Other metrics such as AUROC and AUCPR are also discussed. A TensorFlow implementation of this approach can be found here:

https://github.com/tensorflow/models/tree/master/research/global_objectives

answered Sep 20 '22 by Jinglesting


I think you are confusing model evaluation metrics for classification with training losses.

Accuracy, precision, F-scores etc. are evaluation metrics computed from binary outcomes and binary predictions.

For model training, you need a loss function that compares a continuous score (your model's output) with a binary outcome, like cross-entropy. Ideally, this loss is calibrated, i.e. it is minimised in expectation when the predicted probability matches the true outcome probability (given covariates). Loss functions with this property are called proper scoring rules, and cross-entropy is one of them.
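A quick numerical illustration of this property (the true rate p = 0.2 is arbitrary): the expected cross-entropy of a constant prediction q for a Bernoulli(p) outcome is minimised exactly at q = p.

import numpy as np

p = 0.2                              # true positive rate
q = np.linspace(0.01, 0.99, 99)      # candidate predicted probabilities
expected_ce = -(p * np.log(q) + (1 - p) * np.log(1 - q))
print(q[np.argmin(expected_ce)])     # ~0.2: the loss is minimised when q equals p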

Also check the thread is-accuracy-an-improper-scoring-rule-in-a-binary-classification-setting

If you want to weight positive and negative cases differently, two methods are (see the sketch after this list):

  • oversample the minority class and correct the predicted probabilities when predicting on new examples. For fancier methods, check the under-sampling module of imbalanced-learn to get an overview.
  • use a different proper scoring rule as the training loss. This allows you to, e.g., build in asymmetry in how you treat positive and negative cases while preserving calibration. Here is a review of the subject.
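A rough sketch of both options (RandomOverSampler is from imbalanced-learn, weighted_cross_entropy_with_logits from TensorFlow; the toy data and the pos_weight value are purely illustrative):

import numpy as np
import tensorflow as tf
from imblearn.over_sampling import RandomOverSampler

# Toy imbalanced data: 5 positives, 95 negatives (stand-in for a real dataset)
X = np.random.randn(100, 4).astype("float32")
y = np.array([1] * 5 + [0] * 95)

# Option 1: oversample the minority class before training
ros = RandomOverSampler(random_state=0)
X_res, y_res = ros.fit_resample(X, y)

# Option 2: keep the data as-is and up-weight positive cases in the loss
def weighted_bce(y_true, y_logits):
    # pos_weight > 1 makes missing a positive costlier than a false alarm;
    # 19.0 (= 95/5) is just an example value, tune it for your data
    return tf.reduce_mean(
        tf.nn.weighted_cross_entropy_with_logits(
            labels=tf.cast(y_true, tf.float32), logits=y_logits, pos_weight=19.0))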

I recommend just using simple oversampling in practice.

answered Sep 18 '22 by Matthias Ossadnik