What is the best way to perform hyperparameter optimization for a PyTorch model? Should I implement e.g. random search myself? Use scikit-learn? Or is there anything else I am not aware of?
Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model performance.
Optuna uses TPE (Tree-structured Parzen Estimators) to search more efficiently than random search, by choosing points close to previously good results. To run the trials, create a study object that sets the direction of optimization (maximize or minimize), then call study.optimize(objective, n_trials=100) to run one hundred trials.
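A minimal self-contained sketch of what this looks like; the synthetic data, tiny MLP, and search ranges are stand-ins for your own training setup (suggest_float with log=True assumes a reasonably recent Optuna version):

import optuna
import torch
import torch.nn as nn

# Synthetic regression data stands in for a real dataset.
X = torch.randn(256, 10)
y = X.sum(dim=1, keepdim=True)

def objective(trial):
    # Declare the search space here; TPE picks values informed by past trials.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    hidden = trial.suggest_int("hidden", 16, 128)

    model = nn.Sequential(nn.Linear(10, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(50):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
    return loss.item()  # the value the study minimizes

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)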
Recall that hyperparameter tuning is difficult because we cannot write down a mathematical formula for the function we're optimizing: the only way to evaluate a configuration is to actually train a model with it and measure the result.
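Given that, implementing random search yourself, as the question suggests, is genuinely simple: sample a configuration, treat one training run as a single black-box evaluation, and keep the best result. A rough self-contained sketch, where the synthetic data, tiny model, and search ranges are placeholders for your own:

import random
import torch
import torch.nn as nn

X = torch.randn(256, 10)  # placeholder data
y = X.sum(dim=1, keepdim=True)

def evaluate(lr, hidden):
    # One black-box evaluation: build, train, and score a model.
    model = nn.Sequential(nn.Linear(10, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

best = None
for _ in range(20):
    lr = 10 ** random.uniform(-5, -1)          # log-uniform learning rate
    hidden = random.choice([16, 32, 64, 128])  # layer width
    score = evaluate(lr, hidden)
    if best is None or score < best[0]:
        best = (score, lr, hidden)

print("best loss %.4f at lr=%.5f, hidden=%d" % best)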
Many researchers use Ray Tune. It's a scalable hyperparameter tuning framework, built specifically for deep learning. You can easily use it with any deep learning framework (the integration is only a couple of lines of code, shown below), and it provides most state-of-the-art algorithms, including HyperBand, Population-Based Training, Bayesian optimization, and BOHB.
import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test

def train_mnist(config):
    # The trainable: Tune calls this once per trial with a sampled config.
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.report(mean_accuracy=acc)  # report the metric back to Tune

analysis = tune.run(
    train_mnist,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
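The smarter algorithms mentioned above plug into the same call. As a sketch (reusing the train_mnist trainable above, and assuming a Ray version matching the tune.report style it uses), switching to ASHA, an asynchronous variant of HyperBand, is one extra argument:

from ray.tune.schedulers import ASHAScheduler

# ASHA stops poorly performing trials early, so many more random
# samples of the learning rate become affordable.
analysis = tune.run(
    train_mnist,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    num_samples=20,
    scheduler=ASHAScheduler(metric="mean_accuracy", mode="max"))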
[Disclaimer: I contribute actively to this project!]
Here is what I found:
Some younger projects:
UPDATE: something new:
Ax: Adaptive Experimentation Platform by Facebook (a short sketch of its managed loop follows this list)
BoTorch: Bayesian Optimization in PyTorch
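As a taste of Ax, its managed loop runs Bayesian optimization over a declared parameter space. A sketch under the assumption that you supply the real training run; the toy evaluation function and the lr parameter here are purely illustrative:

from ax.service.managed_loop import optimize

def evaluate(params):
    # Placeholder: train your PyTorch model with params["lr"] here and
    # return the validation metric; (mean, sem) lets Ax model noise.
    return {"accuracy": (1.0 - abs(params["lr"] - 0.01), 0.0)}

best_parameters, values, experiment, model = optimize(
    parameters=[
        {"name": "lr", "type": "range", "bounds": [1e-4, 0.1], "log_scale": True},
    ],
    evaluation_function=evaluate,
    objective_name="accuracy",
    total_trials=20,
)
print(best_parameters)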
Also, I found a useful comparison table in a post by @Richard Liaw: