I am trying to develop a neural network to predict timeseries.
As far as I have understood, I train my neural network on a training set and validate it with a test set.
When I am satisfied with my results, I can use my neural network to predict new values, and the neural network itself is basically just all the weights I have adjusted using my training sets.
Is this correct?
If so, I should only train my network once, and then just use my network (the weights) to predict future values. How do you normally avoid re-computing the entire network? Should I save all the weights in a database or something, so I can always access them without having to train the network again?
If my understanding is correct, I could benefit from doing the heavy computation on a dedicated machine (e.g. a supercomputer) and then just using the network on a web server, an iPhone app, or something like that, but I don't know how to store it.
To make your neural network persistent, you can pickle it. Once the trained network has been pickled, you do not need to recompute its weights; all you need to do is unpickle it and use it to make new predictions.
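Here is a minimal sketch of that workflow. It assumes a scikit-learn `MLPRegressor` as a stand-in for your network and uses made-up toy data and file names; substitute your own model and timeseries features/targets.

```python
import pickle

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train once (toy data for illustration only).
X_train = np.random.rand(100, 10)
y_train = np.random.rand(100)
net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=500)
net.fit(X_train, y_train)

# Persist the trained network (i.e. its learned weights) to disk.
with open("trained_net.pkl", "wb") as f:
    pickle.dump(net, f)

# Later -- on the same machine or a different one -- load and predict
# without retraining.
with open("trained_net.pkl", "rb") as f:
    restored = pickle.load(f)

X_new = np.random.rand(5, 10)
predictions = restored.predict(X_new)
```

The pickle file is just an ordinary file, so you can train on a powerful machine and copy the file to the web server (or wherever the predictions are made).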
There are libraries like joblib that can be used for more efficient serialization/pickling.
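For example, reusing the trained `net` and `X_new` from the sketch above, joblib provides the same dump/load pattern and tends to handle objects containing large NumPy arrays (such as weight matrices) more efficiently than plain pickle:

```python
import joblib

# Save the trained network; file name is arbitrary.
joblib.dump(net, "trained_net.joblib")

# Restore it later and predict without retraining.
restored = joblib.load("trained_net.joblib")
predictions = restored.predict(X_new)
```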
The question of whether to retrain a neural network is not trivial. It depends on what exactly you're using the network for; reinforcement learning, for example, may require retraining as new observations come in. In some cases, and probably in yours, it is sufficient to train the network once and keep using it, or to retrain at some point in the future when you have more field data.