I'm training a textual sentiment classification model with multiple output layers in Keras's Functional API (using a TensorFlow backend). The model takes as input a Numpy array of hashed values produced by the Keras Preprocessing API's hashing_trick() function, and uses a list of Numpy arrays of binary one-hot labels as its targets, as per the Keras specifications for training a model with multiple outputs (see fit()'s documentation here: https://keras.io/models/model/).
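The hashed inputs are produced along these lines (the sample text and vocabulary size below are placeholders, not my actual preprocessing):

from keras.preprocessing.text import hashing_trick
import numpy as np

vocab_size = 1000  # placeholder vocabulary size
# hashing_trick() maps each word of the string to an integer index in [1, vocab_size)
hashed_words = hashing_trick("this movie was great", n=vocab_size)
textual_features = np.array(hashed_words)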
Here's the model, sans most of the preprocessing steps:
import keras
from keras.layers import Input, Dense
from keras.models import Model

textual_features = hashing_utility(filtered_words)  # Numpy array of hashed values (training data)

label_list = []  # Will eventually contain a list of Numpy arrays of binary one-hot labels
for index in range(one_hot_labels.shape[0]):
    label_list.append(one_hot_labels[index])

weighted_loss_value = 1 / len(filtered_words)  # Equal weight on each of the output layers' losses
weighted_loss_values = []
for index in range(one_hot_labels.shape[0]):
    weighted_loss_values.append(weighted_loss_value)

text_input = Input(shape=(1,))
intermediate_layer = Dense(64, activation='relu')(text_input)
hidden_bottleneck_layer = Dense(32, activation='relu')(intermediate_layer)
keras.regularizers.l2(0.1)  # note: this creates a regularizer but does not attach it to any layer

output_layers = []
for index in range(len(filtered_words)):
    output_layers.append(Dense(2, activation='sigmoid')(hidden_bottleneck_layer))

model = Model(inputs=text_input, outputs=output_layers)
model.compile(optimizer='RMSprop', loss='binary_crossentropy', metrics=['accuracy'], loss_weights=weighted_loss_values)
model.fit(textual_features, label_list, epochs=50)
Here's the gist of the error trace that training this model produces:
ValueError: Error when checking target: expected dense_3 to have shape (2,) but got array with shape (1,)
Your numpy arrays (both for inputs and outputs) should contain a batch dimension. If your labels are currently of shape (2,), you can reshape them to include a batch dimension as follows:

label_array = label_array.reshape(1, -1)
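For example, with a single one-hot label (the values here are just illustrative):

import numpy as np

label_array = np.array([0, 1])            # shape (2,) -- no batch dimension
label_array = label_array.reshape(1, -1)  # shape (1, 2) -- batch dimension added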
I used
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
instead of

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
I changed the loss and it worked for me.
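For reference, here is a minimal standalone sketch of the two target formats (a toy model with made-up data, not the model from the question):

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

inputs = Input(shape=(4,))
outputs = Dense(2, activation='softmax')(inputs)
model = Model(inputs=inputs, outputs=outputs)

x = np.random.rand(2, 4)

# categorical_crossentropy expects one-hot targets, shape (num_samples, num_classes)
y_onehot = np.array([[1, 0], [0, 1]])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y_onehot, epochs=1)

# sparse_categorical_crossentropy expects integer class indices, shape (num_samples,)
y_sparse = np.array([0, 1])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(x, y_sparse, epochs=1)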