Assuming I fit the following neural network for a binary classification problem:
model = Sequential()
model.add(Dense(21, input_dim=19, kernel_initializer='uniform', activation='relu'))
model.add(Dense(80, kernel_initializer='uniform', activation='relu'))
model.add(Dense(80, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# Fit the model
model.fit(x2, training_target, epochs=10, batch_size=32, verbose=0,
          validation_split=0.1, shuffle=True, callbacks=[hist])
How would I boost this neural network using AdaBoost? Does Keras have any built-in commands for this?
Explore the number of trees. An important hyperparameter for AdaBoost is n_estimators: by changing the number of base models (weak learners) we can adjust the model's accuracy. The number of trees added to the model usually has to be high for it to work well, often hundreds, if not thousands; a minimal sketch of such an exploration follows.
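Here a plain scikit-learn AdaBoostClassifier is assumed, and the synthetic dataset and the value grid are illustrative, not from the original post:
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Illustrative data: 19 features to mirror the question's input_dim
X, y = make_classification(n_samples=1000, n_features=19, random_state=7)

# Evaluate cross-validated accuracy as the number of weak learners grows
for n in [10, 50, 100, 500, 1000]:
    scores = cross_val_score(AdaBoostClassifier(n_estimators=n), X, y, cv=5, scoring='accuracy')
    print('n_estimators=%4d: mean accuracy %.3f' % (n, scores.mean()))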
AdaBoost is an iterative ensemble method: it builds a strong classifier by combining multiple poorly performing ones, so that the combination reaches a much higher accuracy than any single classifier alone. AdaBoost can boost the performance of virtually any machine learning algorithm, but it works best with weak learners, i.e. models that achieve accuracy just above random chance on a classification problem. The most suitable, and therefore most common, base learners for AdaBoost are decision trees with one level (decision stumps), as in the sketch below.
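A hedged sketch of AdaBoost over decision stumps, assuming scikit-learn; the names x_train, y_train, x_test and y_test are placeholders:
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# A one-level decision tree (stump) is the classic AdaBoost weak learner
stump = DecisionTreeClassifier(max_depth=1)
# Note: base_estimator= was renamed to estimator= in recent scikit-learn releases
boosted = AdaBoostClassifier(base_estimator=stump, n_estimators=500)
boosted.fit(x_train, y_train)
print(boosted.score(x_test, y_test))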
This can be done as follows. First, create a model (wrap it in a function so it can be rebuilt reproducibly):
from keras.models import Sequential
from keras.layers import Dense, Dropout

def simple_model():
    # create model
    model = Sequential()
    model.add(Dense(25, input_dim=x_train.shape[1], kernel_initializer='normal', activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(10, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
Then put it inside the sklearn wrapper:
from keras.wrappers.scikit_learn import KerasRegressor

ann_estimator = KerasRegressor(build_fn=simple_model, epochs=100, batch_size=10, verbose=0)
And finally, boost it:
from sklearn.ensemble import AdaBoostRegressor

boosted_ann = AdaBoostRegressor(base_estimator=ann_estimator)
boosted_ann.fit(rescaledX, y_train.values.ravel())  # remember to scale your training data
boosted_ann.predict(rescaledX_Test)
Keras itself does not implement AdaBoost. However, Keras models are compatible with scikit-learn, so you can probably use AdaBoostClassifier from there: link. Use your model as the base_estimator after you compile it, and fit the AdaBoostClassifier instance instead of the model.

This way, however, you will not be able to use the arguments you normally pass to fit, such as the number of epochs or the batch_size, so the defaults will be used. If the defaults are not good enough, you might need to build your own class that implements the scikit-learn interface on top of your model and passes proper arguments to fit, as sketched below.
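A hedged sketch of that last idea, assuming the older keras.wrappers.scikit_learn wrapper used in the answer above; the class name, the builder function, and the hard-coded epochs/batch_size are illustrative, not part of the original answer. Exposing sample_weight explicitly in fit also matters, because AdaBoostClassifier checks that its base estimator accepts sample weights:
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.ensemble import AdaBoostClassifier

def build_binary_model():
    # Mirrors the question's architecture, reduced for brevity
    model = Sequential()
    model.add(Dense(21, input_dim=19, kernel_initializer='uniform', activation='relu'))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

class FixedFitKerasClassifier(KerasClassifier):
    # Always train with the chosen epochs/batch_size, and expose
    # sample_weight in the signature so scikit-learn's check passes
    def fit(self, x, y, sample_weight=None, **kwargs):
        kwargs.setdefault('epochs', 10)
        kwargs.setdefault('batch_size', 32)
        if sample_weight is not None:
            kwargs['sample_weight'] = sample_weight
        return super().fit(x, y, **kwargs)

ann_estimator = FixedFitKerasClassifier(build_fn=build_binary_model, verbose=0)
boosted_ann = AdaBoostClassifier(base_estimator=ann_estimator, n_estimators=10)
boosted_ann.fit(x2, training_target)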