
How to boost a Keras based neural network using AdaBoost?

Assuming I fit the following neural network for a binary classification problem:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(21, input_dim=19, kernel_initializer='uniform', activation='relu'))
model.add(Dense(80, kernel_initializer='uniform', activation='relu'))
model.add(Dense(80, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model (x2, training_target and the hist callback are defined elsewhere)
model.fit(x2, training_target, epochs=10, batch_size=32, verbose=0,
          validation_split=0.1, shuffle=True, callbacks=[hist])

How would I boost the neural network using AdaBoost? Does keras have any commands for this?

ishido asked Aug 21 '16

People also ask

How can I improve my AdaBoost performance?

Explore the number of trees. An important hyperparameter for AdaBoost is `n_estimators`: changing the number of base models (weak learners) often changes the accuracy of the ensemble. For the model to work well, the number of trees usually needs to be high, often hundreds if not thousands.

How does AdaBoost improve classifier accuracy?

AdaBoost is an iterative ensemble method: it builds a strong classifier by combining multiple poorly performing classifiers, so that the combined classifier achieves much higher accuracy than any of its parts.

When should we use AdaBoost?

AdaBoost can be used to boost the performance of any machine learning algorithm. It is best used with weak learners. These are models that achieve accuracy just above random chance on a classification problem. The most suited and therefore most common algorithm used with AdaBoost are decision trees with one level.
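As a minimal, self-contained sketch of the setup described above (synthetic data, not from the question): scikit-learn's `AdaBoostClassifier` uses a depth-1 decision tree (a stump) as its default weak learner, and `n_estimators` sets the number of boosting rounds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem
X, y = make_classification(n_samples=500, n_features=19, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The default weak learner is a decision stump (a tree with max_depth=1);
# n_estimators controls how many boosting rounds are run.
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Increasing `n_estimators` is one of the first knobs to turn when tuning AdaBoost, along with `learning_rate`.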


2 Answers

This can be done as follows. First, create the model (wrapping it in a function makes it reproducible):

from keras.models import Sequential
from keras.layers import Dense, Dropout

def simple_model():
    # create model
    model = Sequential()
    model.add(Dense(25, input_dim=x_train.shape[1], kernel_initializer='normal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

Then put it inside the scikit-learn wrapper:

from keras.wrappers.scikit_learn import KerasRegressor  # in newer setups, KerasRegressor lives in the scikeras package

ann_estimator = KerasRegressor(build_fn=simple_model, epochs=100, batch_size=10, verbose=0)

Finally, boost it:

from sklearn.ensemble import AdaBoostRegressor

boosted_ann = AdaBoostRegressor(base_estimator=ann_estimator)
# scale your training data before fitting
boosted_ann.fit(rescaledX, y_train.values.ravel())
boosted_ann.predict(rescaledX_Test)
owise answered Oct 05 '22


Keras itself does not implement AdaBoost. However, Keras models are compatible with scikit-learn, so you can probably use AdaBoostClassifier from there: use your model as the base_estimator after you compile it, and fit the AdaBoostClassifier instance instead of the model.

This way, however, you will not be able to use the arguments you normally pass to fit, such as the number of epochs or batch_size, so the defaults will be used. If the defaults are not good enough, you may need to build your own class that implements the scikit-learn interface on top of your model and passes the proper arguments to fit.
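A sketch of such a wrapper is below (all names are hypothetical, and the inner LogisticRegression is a stand-in for a compiled Keras model so the example stays self-contained). The essentials are: store constructor arguments such as epochs and batch_size under the same names so scikit-learn's clone()/get_params() work, accept sample_weight in fit (AdaBoost reweights samples each round), and expose predict/predict_proba plus a classes_ attribute:

```python
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression

class BoostableNet(BaseEstimator, ClassifierMixin):
    """Hypothetical scikit-learn-compatible wrapper.

    In practice, fit() would build your Keras model and call
    model.fit(X, y, epochs=self.epochs, batch_size=self.batch_size,
              sample_weight=sample_weight).
    """
    def __init__(self, epochs=100, batch_size=32):
        # Keep constructor args as attributes with identical names
        # so that scikit-learn can clone this estimator.
        self.epochs = epochs
        self.batch_size = batch_size

    def fit(self, X, y, sample_weight=None):
        # Stand-in learner; AdaBoost passes per-sample weights here.
        self.model_ = LogisticRegression(max_iter=self.epochs)
        self.model_.fit(X, y, sample_weight=sample_weight)
        self.classes_ = self.model_.classes_
        return self

    def predict(self, X):
        return self.model_.predict(X)

    def predict_proba(self, X):
        return self.model_.predict_proba(X)

X, y = make_classification(n_samples=300, n_features=19, random_state=0)
boosted = AdaBoostClassifier(BoostableNet(epochs=200), n_estimators=10)
boosted.fit(X, y)
print(boosted.score(X, y))
```

With this in place, the epochs and batch_size you choose actually reach the underlying fit call instead of being silently replaced by defaults.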

Ishamael answered Oct 05 '22