I am trying to perform model selection on a KerasClassifier for several sliding windows of samples. However, each sliding window has a different input dimension (as a result of feature selection).
The function I have written works for regular scikit-learn classifiers, i.e. it returns a dictionary containing the optimal RF model for each sliding window (found via randomized grid search):
# return a dictionary with optimal models for each sliding window
rf_optimal_models = model_selection(RandomForestClassifier(),
                                    param_distributions=random_grid_rf,
                                    n_iter=10)
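(random_grid_rf is defined elsewhere in my notebook; an assumed, purely illustrative shape for such a grid would be:)

# illustrative only -- the actual random_grid_rf lives elsewhere in the notebook
random_grid_rf = {
    'n_estimators': [100, 200, 500],
    'max_depth': [None, 10, 20, 30],
    'min_samples_split': [2, 5, 10],
    'max_features': ['sqrt', 'log2'],
}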
However, I am unsure how to set up the KerasClassifier in such a way that it will change the input_dim argument according to the dimensions of the sliding window being passed to it.
The following code sets up the keras scikit-learn wrapper.
def create_model(optimizer='adam', kernel_initializer='normal', dropout_rate=0.0):
    with tf.device("/device:GPU:0"):
        # create model
        model = Sequential()
        model.add(Dense(20, input_dim=X_train.shape[1], activation='relu', kernel_initializer=kernel_initializer))
        model.add(Dropout(dropout_rate))
        model.add(Dense(20, activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        # Compile model
        model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
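For what it's worth, the reason input_dim stays fixed is likely Python scoping: create_model reads X_train from the module-level scope at build time, while model_selection() rebinds a *local* X_train that the builder never sees. A minimal sketch of that behaviour (hypothetical arrays and function names, not the original data):

import numpy as np

X_train = np.zeros((10, 42))   # module-level array that the builder sees

def build_input_dim():
    # reads the *global* X_train, so this stays 42
    return X_train.shape[1]

def select_models():
    X_train = np.zeros((10, 49))   # local rebinding; build_input_dim() never sees it
    return build_input_dim()

print(select_models())   # prints 42, not 49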
... and the call to my model_selection() function.
mlp_optimal_models = model_selection(model=KerasClassifier(build_fn=create_model, verbose=0),
                                     param_distributions=random_grid_mlp,
                                     n_iter=10)
The input_dim argument is static: it is fixed at whatever X_train.shape[1] evaluated to when the model was built, so an error is thrown when the model receives an input of, say, 49 dimensions (the input dim of the next sliding window) while it expects 42:
ValueError: Error when checking input: expected dense_1_input to have shape
(42,) but got array with shape (49,)
The code below is a simplified version of my model_selection() function:
def model_selection(model, param_distributions, n_iter=100):
    """
    This function performs model selection using randomized grid search *without cross-validation*.
    Inputs:
    model: the estimator to tune, e.g. RandomForestClassifier() (the default use case)
    param_distributions: pre-defined grid to search over, specific to the input 'model'
    n_iter: number of parameter settings that are sampled; n_iter trades off runtime vs. quality of the solution
    """
    # dictionary to hold optimal models for each sliding window
    optimal_models = {}
    # 'sets_for_model_selection' is a dictionary of sliding-window dataframes stored in groups of four,
    # e.g. 'X_train_0', 'X_test_0', 'y_train_0', 'y_test_0', 'X_train_1', ...
    for i in np.arange(0, len(sets_for_model_selection), 4):  # for each sliding window
        # assign the train and validation sets for the given sliding window
        X_train = list(sets_for_model_selection.values())[i]    # THESE HAVE DIFFERENT DIMS FROM WINDOW TO WINDOW
        X_val = list(sets_for_model_selection.values())[i+1]    # THESE HAVE DIFFERENT DIMS FROM WINDOW TO WINDOW
        y_train = list(sets_for_model_selection.values())[i+2]
        y_val = list(sets_for_model_selection.values())[i+3]
        # set up the randomized search ('ps' is a predefined split marking the validation rows)
        mdl_opt = RandomizedSearchCV(estimator=model, param_distributions=param_distributions,
                                     n_iter=n_iter, cv=ps, verbose=2)
        # fit the random search model: parameter combinations are trained, then scored on the validation set
        mdl_opt.fit(np.concatenate((X_train, X_val), axis=0),
                    np.concatenate((y_train.values.ravel(), y_val.values.ravel()), axis=0))
        mdl = {'optimal_model_sw' + str(i): mdl_opt.best_estimator_}
        # update the 'optimal models' dictionary
        optimal_models.update(mdl)
    return optimal_models
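For reference, the cv=ps argument suggests a PredefinedSplit that marks the appended validation rows as the single "test" fold, which is how the search avoids cross-validation while still scoring on held-out data. A minimal sketch under that assumption:

from sklearn.model_selection import PredefinedSplit
import numpy as np

# -1 marks rows that stay in training for every split;
# 0 marks rows that form the single validation fold
test_fold = np.concatenate((np.full(len(X_train), -1),
                            np.zeros(len(X_val))))
ps = PredefinedSplit(test_fold)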
The solution involved a slight edit to the KerasClassifier wrapper and an edit to my model_selection() function.
First, I made input_dim an argument of create_model (defaulting to None) and used it in the first Dense layer:
def create_model(optimizer='adam', kernel_initializer='normal', dropout_rate=0.0, input_dim=None):
    with tf.device("/device:GPU:0"):
        # create model
        model = Sequential()
        model.add(Dense(20, input_dim=input_dim, activation='relu', kernel_initializer=kernel_initializer))
        model.add(Dropout(dropout_rate))
        model.add(Dense(20, activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        # Compile model
        model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
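This works because KerasClassifier forwards any constructor keyword argument that matches a parameter of build_fn on to create_model when it builds the underlying Keras model, e.g.:

# 'input_dim' matches a parameter of create_model, so the wrapper passes it through
clf = KerasClassifier(build_fn=create_model, input_dim=49, verbose=0)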
Then, within the model selection function, I added an extra argument 'mlp' to indicate whether the model in question is a neural network. If True, the KerasClassifier is created inside model_selection(), where it has access to the number of features of the sliding window in question. That number is passed via the input_dim keyword argument to the KerasClassifier constructor (as in the link Vivek Kumar pointed out):
def model_selection(model, param_distributions, n_iter=100, mlp=None):
    """
    This function performs model selection using randomized grid search *without cross-validation*.
    Inputs:
    model: the estimator to tune, e.g. RandomForestClassifier() (the default use case)
    param_distributions: pre-defined grid to search over, specific to the input 'model'
    n_iter: number of parameter settings that are sampled; n_iter trades off runtime vs. quality of the solution
    mlp: set to True when tuning the neural network, so a KerasClassifier with the correct
         input_dim is rebuilt for each sliding window
    """
    # dictionary to hold optimal models for each sliding window
    optimal_models = {}
    # 'sets_for_model_selection' is a dictionary of sliding-window dataframes stored in groups of four,
    # e.g. 'X_train_0', 'X_test_0', 'y_train_0', 'y_test_0', 'X_train_1', ...
    for i in np.arange(0, len(sets_for_model_selection), 4):  # for each sliding window
        # assign the train and validation sets for the given sliding window
        X_train = list(sets_for_model_selection.values())[i]    # THESE HAVE DIFFERENT DIMS FROM WINDOW TO WINDOW
        X_val = list(sets_for_model_selection.values())[i+1]    # THESE HAVE DIFFERENT DIMS FROM WINDOW TO WINDOW
        y_train = list(sets_for_model_selection.values())[i+2]
        y_val = list(sets_for_model_selection.values())[i+3]
        if mlp:
            # rebuild the KerasClassifier with this window's number of features
            input_dims = X_train.shape[1]
            model = KerasClassifier(build_fn=create_model, input_dim=input_dims, verbose=0)
        # set up the randomized search ('ps' is a predefined split marking the validation rows)
        mdl_opt = RandomizedSearchCV(estimator=model, param_distributions=param_distributions,
                                     n_iter=n_iter, cv=ps, verbose=2)
        # fit the random search model: parameter combinations are trained, then scored on the validation set
        mdl_opt.fit(np.concatenate((X_train, X_val), axis=0),
                    np.concatenate((y_train.values.ravel(), y_val.values.ravel()), axis=0))
        mdl = {'optimal_model_sw' + str(i): mdl_opt.best_estimator_}
        # update the 'optimal models' dictionary
        optimal_models.update(mdl)
    return optimal_models
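The call then looks much as before, with the mlp flag set. (The model argument passed here is effectively a placeholder, since it is replaced inside the loop with a per-window KerasClassifier.)

mlp_optimal_models = model_selection(model=KerasClassifier(build_fn=create_model, verbose=0),
                                     param_distributions=random_grid_mlp,
                                     n_iter=10,
                                     mlp=True)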