What are some reasons Bayesian Optimization might not work for a CNN

I tried to apply Bayesian optimisation to a simple CNN for the MNIST handwritten digits dataset, and I'm getting little indication that it works. I've tried k-fold cross-validation to smooth out the noise, but the optimisation still doesn't seem to be making any headway towards converging on optimal parameters. In general, what are some of the main reasons Bayesian optimisation might fail? And what might be going wrong in my particular case?

The rest of this is just context and code snippets.

Model definition:

def define_model(learning_rate, momentum):
    model = Sequential()
    model.add(Conv2D(32, (3,3), activation = 'relu', kernel_initializer = 'he_uniform', input_shape=(28,28,1)))
    model.add(MaxPooling2D((2,2)))
    model.add(Flatten())
    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='softmax'))
    opt = SGD(lr=learning_rate, momentum=momentum)
    model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    return model

One training run with hyperparameters: batch_size = 32, learning rate = 1e-2, momentum = 0.9, 10 epochs. (blue = training, yellow = validation).

[figure: training (blue) and validation (yellow) curves for this run]

Box and whisker plot for accuracy in 5-fold cross validation, with the same hyperparameters as above (to get a sense of the spread)

[figure: box-and-whisker plot of 5-fold CV accuracy]
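
A plot like this can be produced from the scores returned by the evaluate_model_cross_validation helper shown further down; a minimal sketch, assuming trainX and trainY have already been prepared (e.g. by the prep_data helper used later):

from matplotlib import pyplot as plt

# Sketch: box-and-whisker of 5-fold CV validation accuracy for one
# hyperparameter set (lr=1e-2, momentum=0.9, batch_size=32, 10 epochs).
# Assumes trainX/trainY are already reshaped and one-hot encoded.
scores, _ = evaluate_model_cross_validation(trainX, trainY, max_epochs=10,
                                            learning_rate=1e-2, momentum=0.9,
                                            batch_size=32, n_folds=5)
plt.boxplot(scores)
plt.ylabel('validation accuracy')
plt.show()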

Grid search, holding batch_size at 32 and keeping 10 epochs. I did this with single evaluations rather than 5-fold cross-validation, as the spread wasn't large enough to spoil the results.

[figures: grid search results over learning rate and momentum]
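
The grid search loop itself isn't in the snippets below; a rough sketch of what it could look like, where the grid values are illustrative assumptions and prep_data and evaluate_model are the helpers shown further down:

import numpy as np

# Sketch of the grid search: one evaluation per (lr, momentum) pair,
# batch_size fixed at 32 and 10 epochs. The grid values are assumptions.
trainX, trainY, testX, testY = prep_data()
learning_rates = np.logspace(-4, -1, 7)   # ~1e-4 .. 1e-1, log-spaced
momentums = [0.0, 0.5, 0.9, 0.99]

results = {}
for lr in learning_rates:
    for momentum in momentums:
        model, _ = evaluate_model(trainX, trainY, testX, testY, max_epochs=10,
                                  learning_rate=lr, momentum=momentum, batch_size=32)
        _, acc = model.evaluate(testX, testY, verbose=0)
        results[(lr, momentum)] = acc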

Bayesian optimisation. As above, batch_size=32 and 10 epochs, searching over the same ranges, but this time with 5-fold cross-validation to smooth out the noise. It's supposed to go out to 100 iterations, but that's still another 20 hours away.

space = {
    'lr': hp.loguniform('lr', np.log(np.sqrt(10) * 1e-4), np.log(1e-1)),
    'momentum': 1 - hp.loguniform('momentum', np.log(np.sqrt(10) * 1e-3), np.log(np.sqrt(10) * 1e-1)),
}
tpe_best = fmin(fn=objective, space=space, algo=tpe.suggest, trials=Trials(), max_evals=100)
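
One quick sanity check on a space like this is to draw a few samples and confirm the ranges are what you intend (lr roughly 3e-4 to 1e-1, momentum roughly 0.68 to 0.997); hyperopt ships a sampling utility for this:

from hyperopt.pyll.stochastic import sample

# Draw a handful of samples from the search space defined above.
for _ in range(5):
    print(sample(space))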

[figure]

The trialled learning rates: [figure]

The trialled momentums: [figure]

It looked nice from about iteration 27 to 49 but then it lost its mind again.

EDIT

More detail for those who asked.

Imports

# basic utility libraries
import numpy as np
import pandas as pd
import time
import datetime
import pickle
from matplotlib import pyplot as plt
%matplotlib notebook

# keras
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Input, BatchNormalization
from keras.optimizers import SGD
from keras.callbacks import Callback
from keras.models import load_model

# learning and optimisation helper libraries
from sklearn.model_selection import KFold
from hyperopt import fmin, tpe, Trials, hp, rand
from hyperopt.pyll.stochastic import sample

Single evaluation

def evaluate_model(trainX, trainY, testX, testY, max_epochs, learning_rate,
                   momentum, batch_size, model=None, callbacks=[]):
    if model is None:
        model = define_model(learning_rate, momentum)
    history = model.fit(trainX, trainY, epochs=max_epochs, batch_size=batch_size,
                        validation_data=(testX, testY), verbose=0, callbacks=callbacks)
    return model, history
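
A single run like the one plotted near the top might then be launched along these lines; a sketch, where prep_data is the data-prep helper used later but not shown, and depending on the Keras version the history keys may be 'accuracy'/'val_accuracy' rather than 'acc'/'val_acc':

# Usage sketch for the single-run helper above.
trainX, trainY, testX, testY = prep_data()
model, history = evaluate_model(trainX, trainY, testX, testY, max_epochs=10,
                                learning_rate=1e-2, momentum=0.9, batch_size=32)

plt.plot(history.history['acc'], label='training')        # 'accuracy' on newer Keras
plt.plot(history.history['val_acc'], label='validation')  # 'val_accuracy' on newer Keras
plt.legend()
plt.show()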

Cross validation

def evaluate_model_cross_validation(trainX, trainY, max_epochs, learning_rate, momentum, batch_size, n_folds=5):
    scores, histories = list(), list()
    # prepare cross validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    # enumerate splits
    for trainFold_ix, testFold_ix in kfold.split(trainX):
        # select rows for train and test
        trainFoldsX, trainFoldsY = trainX[trainFold_ix], trainY[trainFold_ix]
        testFoldX, testFoldY = trainX[testFold_ix], trainY[testFold_ix]
        # fit model
        model = define_model(learning_rate, momentum)
        history = model.fit(trainFoldsX, trainFoldsY, epochs=max_epochs, batch_size=batch_size,
                            validation_data=(testFoldX, testFoldY), verbose=0)
        # evaluate model
        _, acc = model.evaluate(testFoldX, testFoldY, verbose=0)
        # store scores
        scores.append(acc)
        histories.append(history)
    return scores, histories
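
The Bayesian optimisation objective used below is log10(1 - mean CV accuracy), so it can be checked directly from these scores; a small sketch:

# Sketch: the objective minimised below, computed from the CV scores.
scores, _ = evaluate_model_cross_validation(trainX, trainY, max_epochs=10,
                                            learning_rate=1e-2, momentum=0.9,
                                            batch_size=32)
print('mean accuracy: %.4f' % np.mean(scores))
print('objective log10(1 - mean acc): %.3f' % np.log10(1 - np.mean(scores)))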

How I set things up for Bayesian Optimisation (or random search)

def selective_search(kind, space, max_evals, batch_size=32):

    trainX, trainY, testX, testY = prep_data()

    histories = list()
    hyperparameter_sets = list()
    scores = list()

    def objective(params):
        lr, momentum = params['lr'], params['momentum']
        accuracies, fold_histories = evaluate_model_cross_validation(
            trainX, trainY, max_epochs=10, learning_rate=lr, momentum=momentum,
            batch_size=batch_size, n_folds=5)
        # keep the per-fold histories so the returned histories list isn't empty
        histories.append(fold_histories)
        score = np.log10(1 - np.mean(accuracies))
        scores.append(score)
        with open('{}_scores.pickle'.format(kind), 'wb') as file:
            pickle.dump(scores, file)
        hyperparameter_sets.append({'learning_rate': lr, 'momentum': momentum, 'batch_size': batch_size})
        with open('{}_hpsets.pickle'.format(kind), 'wb') as file:
            pickle.dump(hyperparameter_sets, file)
        return score

    if kind == 'bayesian':
        tpe_best = fmin(fn=objective, space=space, algo=tpe.suggest, trials=Trials(), max_evals=max_evals)
    elif kind == 'random':
        tpe_best = fmin(fn=objective, space=space, algo=rand.suggest, trials=Trials(), max_evals=max_evals)
    else:
        raise ValueError('First parameter "kind" must be either "bayesian" or "random"')

    return histories, hyperparameter_sets, scores

Then how I actually run the Bayesian optimization.

space = {
    'lr': hp.loguniform('lr', np.log(np.sqrt(10) * 1e-4), np.log(1e-1)),
    'momentum': 1 - hp.loguniform('momentum', np.log(np.sqrt(10) * 1e-3), np.log(np.sqrt(10) * 1e-1)),
}

histories, hyperparameter_sets, scores = selective_search(kind='bayesian', space=space, max_evals=100, batch_size=32)
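
The best trialled setting can then be read off the returned lists; a sketch (the score is log10(1 - mean CV accuracy), so lower is better):

# Sketch: pick out the best-scoring hyperparameter set from the search.
best_idx = int(np.argmin(scores))
print('best score:', scores[best_idx])
print('best hyperparameters:', hyperparameter_sets[best_idx])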

1 Answer

This is an update on my progress, and somewhat answers my question. The headline is that I wasn't running enough iterations.

  1. Score over iterations and 2. running best score over iterations

    • We do observe a trend towards better accuracy, as seen by the fitted line. This is probably less because the minimum is improving, and more because the algorithm spends less time evaluating hyperparameters which are clearly not candidates for optimal performance. (See the sketch after the plots for how the running best score can be reconstructed from the pickled scores.)
  3. Learning rate over iterations and 4. the corresponding box-and-whisker plots

    • A strange thing we see here is the convergence and divergence of the trialled hyperparameters. My guess is that the noise from the statistical deviations is not allowing the algorithm to map out the terrain reliably. It can't settle on a minimum because each time it tests a certain set of hyperparameters it gets a slightly different answer.
    • Nevertheless, we do see hints that the algorithm restricts its search space to a narrower neighbourhood than the full range, as expected.
  5. Momentum over iterations and 6. the corresponding box-and-whisker plots

    • Here we make similar observations to those for the learning rates. What's interesting is the way the average tends to converge and diverge along with the learning rate. Remember I mentioned earlier that as momentum increases, we need to decrease the learning rate to maintain good model training performance. So there is a coupling between momentum and learning rate if we try to maintain good performance, and this is what the optimisation algorithm is demonstrating for us here.

[figures: the six plots described above]
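
As referenced in point 1 above, the score and running best score can be reconstructed from the scores pickled inside the objective; a minimal sketch, using the file written for the 'bayesian' run:

import pickle
import numpy as np
from matplotlib import pyplot as plt

# Sketch: score per iteration and running best score, from the pickled scores
# (score = log10(1 - mean CV accuracy), so lower is better).
with open('bayesian_scores.pickle', 'rb') as file:
    scores = pickle.load(file)

running_best = np.minimum.accumulate(scores)
plt.plot(scores, '.', label='score per iteration')
plt.plot(running_best, label='running best score')
plt.xlabel('iteration')
plt.ylabel('log10(1 - mean CV accuracy)')
plt.legend()
plt.show()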
