
TensorFlow - Stop training when the loss reaches a defined value

I used the first example here as my network.

How can I stop training when the loss reaches a fixed value?

For example, I would like to set a maximum of 3000 epochs and have training stop as soon as the loss falls below 0.2.

I read this topic, but it is not the solution I am looking for.

I want to stop training when the loss reaches a given value, not when there is no further improvement, as the function proposed in that topic does.

Here is the code:

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD

# Generate dummy data
import numpy as np
x_train = np.random.random((1000, 20))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)

model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=3000,
          batch_size=128)
score = model.evaluate(x_test, y_test, batch_size=128)  


1 Answer

You can use a method like this if you switch to TensorFlow 2.0:

import tensorflow as tf

class haltCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('loss') <= 0.05:
            print("\n\n\nReached 0.05 loss value so cancelling training!\n\n\n")
            self.model.stop_training = True

You just need to create a callback like that, instantiate it, and pass the instance to model.fit, so it becomes something like this:

trainingStopCallback = haltCallback()

model.fit(x_train, y_train,
          epochs=3000,
          batch_size=128,
          callbacks=[trainingStopCallback])

This way, fitting should stop whenever the loss drops below 0.05 (or whatever threshold you use when defining the callback).
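As a side note, the threshold and the monitored metric can also be passed as constructor arguments so the same class can be reused for different runs. The following is only a minimal sketch of that idea; the name ThresholdStopping and its parameters are made up for illustration and are not part of the original answer:

import tensorflow as tf

class ThresholdStopping(tf.keras.callbacks.Callback):
    """Stop training once a monitored metric drops to or below a threshold."""

    def __init__(self, monitor='loss', threshold=0.05):
        super().__init__()
        self.monitor = monitor
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        # logs holds the metric values Keras reports for this epoch
        value = (logs or {}).get(self.monitor)
        if value is not None and value <= self.threshold:
            print(f"\nEpoch {epoch + 1}: {self.monitor} = {value:.4f} "
                  f"<= {self.threshold}, stopping training.")
            self.model.stop_training = True

# Hypothetical usage: stop once the training loss falls below 0.2
# model.fit(x_train, y_train, epochs=3000, batch_size=128,
#           callbacks=[ThresholdStopping(monitor='loss', threshold=0.2)])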

Also, since it has been a long time since you asked this question and it still had no answer for TensorFlow 2.0, I updated your code snippet to TensorFlow 2.0 so everyone can easily use it in their new projects.

import tensorflow as tf

# Generate dummy data
import numpy as np


x_train = np.random.random((1000, 20))
y_train = tf.keras.utils.to_categorical(
    np.random.randint(10, size=(1000, 1)), num_classes=10)
x_test = np.random.random((100, 20))
y_test = tf.keras.utils.to_categorical(
    np.random.randint(10, size=(100, 1)), num_classes=10)

model = tf.keras.models.Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(tf.keras.layers.Dense(64, activation='relu', input_dim=20))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(10, activation='softmax'))


class haltCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if(logs.get('loss') <= 0.05):
            print("\n\n\nReached 0.05 loss value so cancelling training!\n\n\n")
            self.model.stop_training = True


trainingStopCallback = haltCallback()

sgd = tf.keras.optimizers.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

model.fit(x_train, y_train,
          epochs=3000,
          batch_size=128,
          callbacks=[trainingStopCallback])
score = model.evaluate(x_test, y_test, batch_size=128)