How is `train_on_batch()` different from `fit()`? And what are the cases when we should use `train_on_batch()`?
So in the end, what `train_on_batch` returns for a two-output model is a list of scalars: the total loss, the first output's MSE, and the second output's MSE.
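To make that concrete, here is a minimal, hedged sketch of a two-output regression model (the model and data below are illustrative, not from the question) showing what `train_on_batch` returns in that situation:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical two-output model: one shared input, two regression heads.
inputs = keras.Input(shape=(8,))
hidden = layers.Dense(16, activation="relu")(inputs)
out1 = layers.Dense(1, name="out1")(hidden)
out2 = layers.Dense(1, name="out2")(hidden)
model = keras.Model(inputs, [out1, out2])

# With an MSE loss on both outputs and no extra metrics, the returned
# list is [total_loss, out1_mse, out2_mse]; check model.metrics_names
# for the exact order on your Keras version.
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(32, 8)
y1 = np.random.rand(32, 1)
y2 = np.random.rand(32, 1)

results = model.train_on_batch(x, [y1, y2])
print(model.metrics_names)  # e.g. ['loss', 'out1_loss', 'out2_loss']
print(results)
```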
Before training a Keras model we need to compile it and define the loss function, the optimizer, and any metrics to track. We compile the model using the `.compile()` method; `optimizer` and `loss` are required arguments, while `metrics` is optional.
The `batch_size` argument of `fit()` is the number of samples per gradient update; if unspecified, it defaults to 32.
Keras can also separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset each epoch. You do this by setting the `validation_split` argument of `fit()` to a fraction (between 0 and 1) of your training dataset.
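Putting those pieces together, a minimal sketch of `compile()` and `fit()` (the model, data, and hyperparameters are illustrative only):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative single-output regression model.
model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(8,)),
    layers.Dense(1),
])

# optimizer and loss are required; metrics is optional.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

x = np.random.rand(1000, 8)
y = np.random.rand(1000, 1)

# Hold out 20% of the training data for validation each epoch;
# batch_size would default to 32 if omitted.
history = model.fit(x, y, epochs=5, batch_size=32, validation_split=0.2)
```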
For this question, there is a simple answer from Keras's primary author:
> With `fit_generator`, you can use a generator for the validation data as well. In general, I would recommend using `fit_generator`, but using `train_on_batch` works fine too. These methods only exist for the sake of convenience in different use cases; there is no "correct" method.
`train_on_batch` allows you to expressly update weights based on a collection of samples you provide, without regard to any fixed batch size. You would use this in cases when that is what you want: to train on an explicit collection of samples. You could use that approach to maintain your own iteration over multiple batches of a traditional training set, but allowing `fit` or `fit_generator` to iterate batches for you is likely simpler.
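As a hedged illustration of maintaining your own iteration, here is a sketch of a manual training loop over fixed-size slices (model and data invented for the example):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative model and in-memory training data.
model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(4,)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(256, 4)
y_train = np.random.rand(256, 1)

batch_size = 32
for epoch in range(3):
    # Shuffle once per epoch, then update the weights one explicit
    # batch at a time with train_on_batch.
    idx = np.random.permutation(len(x_train))
    for start in range(0, len(x_train), batch_size):
        batch = idx[start:start + batch_size]
        loss = model.train_on_batch(x_train[batch], y_train[batch])
    print(f"epoch {epoch}: last batch loss = {loss:.4f}")
```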
One case when it might be nice to use `train_on_batch` is for updating a pre-trained model on a single new batch of samples. Suppose you've already trained and deployed a model, and sometime later you receive a new set of training samples never used before. You could use `train_on_batch` to directly update the existing model only on those samples. Other methods can do this too, but it is rather explicit to use `train_on_batch` for this case.
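A hedged sketch of that semi-online update (the file path, shapes, and data here are hypothetical):

```python
import numpy as np
from tensorflow import keras

# Hypothetical: reload a model you trained and deployed earlier.
model = keras.models.load_model("deployed_model.h5")

# A small batch of brand-new samples, never seen during training;
# the shapes must match the model's input and output.
x_new = np.random.rand(16, 4)
y_new = np.random.rand(16, 1)

# One explicit gradient update on just these samples.
loss = model.train_on_batch(x_new, y_new)
print("loss on new batch:", loss)

# Optionally persist the updated weights.
model.save("deployed_model.h5")
```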
Apart from special cases like this (either where you have some pedagogical reason to maintain your own cursor across different training batches, or else for some type of semi-online training update on a special batch), it is probably better to just always use `fit` (for data that fits in memory) or `fit_generator` (for streaming batches of data as a generator).
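For completeness, a sketch of the `fit_generator` path with a hand-rolled generator (illustrative data; note that newer Keras/TensorFlow versions deprecate `fit_generator` in favor of passing the generator straight to `fit`):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(8, activation="relu", input_shape=(4,)),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

def batch_generator(batch_size=32):
    # Yields an endless stream of (x, y) batches; in practice this
    # would stream from disk or another source too large for memory.
    while True:
        x = np.random.rand(batch_size, 4)
        y = np.random.rand(batch_size, 1)
        yield x, y

model.fit_generator(batch_generator(), steps_per_epoch=8, epochs=3)
```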