Keras ValueError: Dimensions must be equal issue

Even after applying the suggestions from the answer and comments, the dimension mismatch persists. The exact code and data file to reproduce the problem are here: https://drive.google.com/drive/folders/1q67s0VhB-O7J8OtIhU2jmj7Kc4LxL3sf?usp=sharing

How can this be corrected? The latest code, model summary, loss functions, and the error I get are below.

type_ae = 'dcor'  # autoencoder loss type ('dcor' = distance correlation)
# Loss wrappers for Keras
def custom_loss1(y_true,y_pred):
    dcor = -1*distance_correlation(y_true,encoded_layer)
    return dcor

def custom_loss2(y_true,y_pred):
    recon_loss = losses.categorical_crossentropy(y_true, y_pred)
    return recon_loss

input_layer =  Input(shape=(64,64,1))

encoded_layer = Conv2D(filters = 128, kernel_size = (5,5),padding = 'same',activation ='relu', 
                       input_shape = (64,64,1))(input_layer)
encoded_layer = MaxPool2D(pool_size=(2,2))(encoded_layer)
encoded_layer = Dropout(0.25)(encoded_layer)
encoded_layer = (Conv2D(filters = 64, kernel_size = (3,3),padding = 'same',activation ='relu'))(encoded_layer)
encoded_layer = (MaxPool2D(pool_size=(2,2)))(encoded_layer)
encoded_layer = (Dropout(0.25))(encoded_layer)

encoded_layer = (Conv2D(filters = 64, kernel_size = (3,3),padding = 'same',activation ='relu'))(encoded_layer)
encoded_layer = (MaxPool2D(pool_size=(2,2)))(encoded_layer)
encoded_layer = (Dropout(0.25))(encoded_layer)
encoded_layer = Conv2D(filters = 1, kernel_size = (3,3),padding = 'same',activation ='relu', 
                       input_shape = (64,64,1),strides=1)(encoded_layer)
encoded_layer = ZeroPadding2D(padding=(28, 28), data_format=None)(encoded_layer)

decoded_imag = Conv2D(8, (2, 2), activation='relu', padding='same')(encoded_layer)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(decoded_imag)
flat_layer = Flatten()(decoded_imag)
dense_layer = Dense(256,activation = "relu")(flat_layer)          
dense_layer = Dense(64,activation = "relu")(dense_layer) 
dense_layer = Dense(32,activation = "relu")(dense_layer) 
output_layer = Dense(9, activation = "softmax")(dense_layer)
autoencoder = Model(input_layer, [encoded_layer,output_layer])
autoencoder.summary()
autoencoder.compile(optimizer='adadelta', loss=[custom_loss1,custom_loss2])
autoencoder.fit(x_train,[x_train, y_train],batch_size=32,epochs=3,shuffle=True,
                validation_data=(x_val, [x_val,y_val]))

The data has the following shapes:

x_train.shape:  (4000, 64, 64, 1)
x_val.shape:  (1000, 64, 64, 1)
y_train.shape:  (4000, 9)
y_val.shape:  (1000, 9)

The correlation function called by custom_loss1 operates on tensors as follows:

def distance_correlation(y_true,y_pred):
    pred_r = tf.reduce_sum(y_pred*y_pred,1)
    pred_r = tf.reshape(pred_r,[-1,1])
    pred_d = pred_r - 2*tf.matmul(y_pred,tf.transpose(y_pred))+tf.transpose(pred_r)
    true_r = tf.reduce_sum(y_true*y_true,1)
    true_r = tf.reshape(true_r,[-1,1])
    true_d = true_r - 2*tf.matmul(y_true,tf.transpose(y_true))+tf.transpose(true_r)
    # check_numerics returns the checked tensor; reassign so the checks stay in the graph
    pred_d = tf.check_numerics(pred_d,'pred_d has NaN')
    true_d = tf.check_numerics(true_d,'true_d has NaN')
    # double-center the pairwise squared-distance matrices
    A = pred_d - tf.reshape(tf.reduce_mean(pred_d,1),[-1,1]) - tf.reshape(tf.reduce_mean(pred_d,0),[1,-1]) + tf.reduce_mean(pred_d)
    B = true_d - tf.reshape(tf.reduce_mean(true_d,1),[-1,1]) - tf.reshape(tf.reduce_mean(true_d,0),[1,-1]) + tf.reduce_mean(true_d)
    # negative log of the distance correlation
    dcor = -tf.log(tf.reduce_mean(A*B)) + tf.log(tf.sqrt(tf.reduce_mean(A*A)*tf.reduce_mean(B*B)))
    return dcor

The model summary looks like this:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_5 (InputLayer)         (None, 64, 64, 1)         0         
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 64, 64, 128)       3328      
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 32, 32, 128)       0         
_________________________________________________________________
dropout_13 (Dropout)         (None, 32, 32, 128)       0         
_________________________________________________________________
conv2d_31 (Conv2D)           (None, 32, 32, 64)        73792     
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 16, 16, 64)        0         
_________________________________________________________________
dropout_14 (Dropout)         (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 16, 16, 64)        36928     
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 8, 8, 64)          0         
_________________________________________________________________
dropout_15 (Dropout)         (None, 8, 8, 64)          0         
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 8, 8, 1)           577       
_________________________________________________________________
zero_padding2d_5 (ZeroPaddin (None, 64, 64, 1)         0         
_________________________________________________________________
conv2d_34 (Conv2D)           (None, 64, 64, 8)         40        
_________________________________________________________________
up_sampling2d_10 (UpSampling (None, 128, 128, 8)       0         
_________________________________________________________________
conv2d_35 (Conv2D)           (None, 128, 128, 8)       584       
_________________________________________________________________
up_sampling2d_11 (UpSampling (None, 256, 256, 8)       0         
_________________________________________________________________
conv2d_36 (Conv2D)           (None, 256, 256, 16)      1168      
_________________________________________________________________
up_sampling2d_12 (UpSampling (None, 512, 512, 16)      0         
_________________________________________________________________
conv2d_37 (Conv2D)           (None, 512, 512, 1)       145       
_________________________________________________________________
flatten_4 (Flatten)          (None, 262144)            0         
_________________________________________________________________
dense_13 (Dense)             (None, 256)               67109120  
_________________________________________________________________
dense_14 (Dense)             (None, 64)                16448     
_________________________________________________________________
dense_15 (Dense)             (None, 32)                2080      
_________________________________________________________________
dense_16 (Dense)             (None, 9)                 297       
=================================================================
Total params: 67,244,507
Trainable params: 67,244,507
Non-trainable params: 0
_________________________________________________________________

This is the error:

InvalidArgumentError                      Traceback (most recent call last)
~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1658   try:
-> 1659     c_op = c_api.TF_FinishOperation(op_desc)
   1660   except errors.InvalidArgumentError as e:

InvalidArgumentError: Dimensions must be equal, but are 1 and 64 for 'loss_1/zero_padding2d_5_loss/MatMul' (op: 'BatchMatMul') with input shapes: [?,64,64,1], [1,64,64,?].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-11-0e924885fc6b> in <module>
     40 autoencoder = Model(input_layer, [encoded_layer,output_layer])
     41 autoencoder.summary()
---> 42 autoencoder.compile(optimizer='adadelta', loss=[custom_loss1,custom_loss2])
     43 autoencoder.fit(x_train,[x_train, y_train],batch_size=32,epochs=3,shuffle=True,
     44                 validation_data=(x_val, [x_val,y_val]))

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    340                 with K.name_scope(self.output_names[i] + '_loss'):
    341                     output_loss = weighted_loss(y_true, y_pred,
--> 342                                                 sample_weight, mask)
    343                 if len(self.outputs) > 1:
    344                     self.metrics_tensors.append(output_loss)

~/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
    402         """
    403         # score_array has ndim >= 2
--> 404         score_array = fn(y_true, y_pred)
    405         if mask is not None:
    406             # Cast the mask to floatX to avoid float64 upcasting in Theano

<ipython-input-11-0e924885fc6b> in custom_loss1(y_true, y_pred)
      2 #Wrappers for keras
      3 def custom_loss1(y_true,y_pred):
----> 4         dcor = -1*distance_correlation(y_true,encoded_layer)
      5         return dcor
      6 

<ipython-input-6-f282528532cc> in distance_correlation(y_true, y_pred)
      2     pred_r = tf.reduce_sum(y_pred*y_pred,1)
      3     pred_r = tf.reshape(pred_r,[-1,1])
----> 4     pred_d = pred_r - 2*tf.matmul(y_pred,tf.transpose(y_pred))+tf.transpose(pred_r)
      5     true_r = tf.reduce_sum(y_true*y_true,1)
      6     true_r = tf.reshape(true_r,[-1,1])

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py in matmul(a, b, transpose_a, transpose_b, adjoint_a, adjoint_b, a_is_sparse, b_is_sparse, name)
   2415         adjoint_b = True
   2416       return gen_math_ops.batch_mat_mul(
-> 2417           a, b, adj_x=adjoint_a, adj_y=adjoint_b, name=name)
   2418 
   2419     # Neither matmul nor sparse_matmul support adjoint, so we conjugate

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py in batch_mat_mul(x, y, adj_x, adj_y, name)
   1421   adj_y = _execute.make_bool(adj_y, "adj_y")
   1422   _, _, _op = _op_def_lib._apply_op_helper(
-> 1423         "BatchMatMul", x=x, y=y, adj_x=adj_x, adj_y=adj_y, name=name)
   1424   _result = _op.outputs[:]
   1425   _inputs_flat = _op.inputs

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    786         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    787                          input_types=input_types, attrs=attr_protos,
--> 788                          op_def=op_def)
    789       return output_structure, op_def.is_stateful, op
    790 

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    505                 'in a future version' if date is None else ('after %s' % date),
    506                 instructions)
--> 507       return func(*args, **kwargs)
    508 
    509     doc = _add_deprecated_arg_notice_to_docstring(

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in create_op(***failed resolving arguments***)
   3298           input_types=input_types,
   3299           original_op=self._default_original_op,
-> 3300           op_def=op_def)
   3301       self._create_op_helper(ret, compute_device=compute_device)
   3302     return ret

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   1821           op_def, inputs, node_def.attr)
   1822       self._c_op = _create_c_op(self._graph, node_def, grouped_inputs,
-> 1823                                 control_input_ops)
   1824 
   1825     # Initialize self._outputs.

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1660   except errors.InvalidArgumentError as e:
   1661     # Convert to ValueError for backwards compatibility.
-> 1662     raise ValueError(str(e))
   1663 
   1664   return c_op

ValueError: Dimensions must be equal, but are 1 and 64 for 'loss_1/zero_padding2d_5_loss/MatMul' (op: 'BatchMatMul') with input shapes: [?,64,64,1], [1,64,64,?].
asked May 25 '19 by hearse

2 Answers

You have two loss functions, so you must pass two ground truths, one for each model output, when evaluating the losses against the predictions.

Your first prediction is the output of the encoded_layer tensor; you can read its shape from model.summary() (in the code posted above it is the ZeroPadding2D output, zero_padding2d_5, with shape (None, 64, 64, 1)).

What you pass to fit as y is [x_train, y_train], so loss_1 compares x_train against that encoder output, and the two must have the same shape for the comparison to make sense.

If you want loss_1 to measure the correlation between the input image and the encoded image, stack the convolutions so that their final output has the same shape as your x_train images. Use model.summary() to check the output shape of each convolution.

Now use the padding, strides, and kernel size of each convolution layer to reach the desired output size: the formulas W2 = (W1 − F + 2P)/S + 1 and H2 = (H1 − F + 2P)/S + 1 give the output width and height of a convolution. Check this reference.
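
As a quick illustration, here is a small helper that applies these formulas (a minimal sketch; the function name is made up for illustration):

def conv2d_output_size(w1, h1, f, p, s):
    # W2 = (W1 - F + 2P) / S + 1 and H2 = (H1 - F + 2P) / S + 1
    return (w1 - f + 2 * p) // s + 1, (h1 - f + 2 * p) // s + 1

# a 64x64 input through a 5x5 kernel with 'same' padding (P = 2) and stride 1:
print(conv2d_output_size(64, 64, 5, 2, 1))  # (64, 64)
# each following 2x2 max-pool then halves the spatial size: 64 -> 32 -> 16 -> 8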


There are two major issues with your approach.

  1. Your loss function checks the correlation between the encoded image and the actual image. The correct way is to decode the image back from the encoded representation and then check the correlation between the decoded image and the actual image (along the lines of an autoencoder).
  2. Your loss 1 uses numpy arrays. For a loss function to be part of the computation graph it must use tensor operations, not numpy operations.

Below is working code. However, for loss 1 I am using the L2 norm of the difference between the two images. If you want to use correlation, you have to express it in tensor operations (which is a separate issue from this question).

import numpy as np
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, Flatten, Dense
from keras.losses import categorical_crossentropy

def image_loss(y_true, y_pred):
    # L2 norm of the difference between the true and reconstructed images
    return tf.norm(y_true - y_pred)

def label_loss(y_true, y_pred):
    return categorical_crossentropy(y_true, y_pred)

input_img = Input(shape=(64, 64, 1))

# encoder: 64x64x1 -> 8x8x8
encoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)
encoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded_imag)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)
encoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(encoded_imag)
encoded_imag = MaxPooling2D((2, 2), padding='same')(encoded_imag)

# decoder: 8x8x8 -> 64x64x1, so the reconstruction matches the input shape
decoded_imag = Conv2D(8, (2, 2), activation='relu', padding='same')(encoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(8, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(16, (3, 3), activation='relu', padding='same')(decoded_imag)
decoded_imag = UpSampling2D((2, 2))(decoded_imag)
decoded_imag = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(decoded_imag)

# classifier head on the encoded representation
flat_layer = Flatten()(encoded_imag)
dense_layer = Dense(32, activation="relu")(flat_layer)
output_layer = Dense(9, activation="softmax")(dense_layer)

model = Model(input_img, [decoded_imag, output_layer])

model.compile(optimizer='adadelta', loss=[image_loss, label_loss])
images = np.random.randn(10, 64, 64, 1)   # random dummy data just to exercise fit
model.fit(images, [images, np.random.randn(10, 9)])

The distance_correlation loss function you have coded assumes that each row in y_true and y_pred represents an image. This works with Dense layers, because a Dense layer outputs a batch of row vectors, one vector per image. 2D convolutions, however, operate on a batch of 2-D tensors with multiple channels (you have only one channel). So to use the distance_correlation loss function you have to reshape your tensors so that each row corresponds to an image. Add the two lines below at the top of the function.

def distance_correlation(y_true, y_pred):
    # flatten each (64, 64, 1) image into one row vector of length 64*64
    y_true = tf.reshape(tf.squeeze(y_true), [-1, 64 * 64])
    y_pred = tf.reshape(tf.squeeze(y_pred), [-1, 64 * 64])
    # ... rest of the function unchanged ...
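
This reshape is what removes the BatchMatMul error: without it y_pred is 4-D, tf.transpose reverses all four axes, and tf.matmul falls back to a batched matrix multiply of (64, 1) by (64, ?) matrices, whose inner dimensions (1 and 64) do not match. Here is a numpy analogue of the corrected shape logic (a minimal sketch with a made-up batch size of 4):

import numpy as np

batch = np.random.randn(4, 64, 64, 1)        # a batch of 4 single-channel images
rows = batch.reshape(-1, 64 * 64)            # (4, 4096): one row per image
r = (rows ** 2).sum(axis=1, keepdims=True)   # (4, 1): squared norm of each row
d = r - 2 * rows @ rows.T + r.T              # (4, 4): pairwise squared distances
print(d.shape)                               # (4, 4)
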
answered by mujjiga


The intention is to use the original image in custom_loss1 and the label values in custom_loss2. I think the working code by @mujjiga in his answer is almost correct. I suggest one slight modification.

In model.compile(), pass the input tensor into the loss that needs it and keep the other loss unchanged. model.fit() then just passes the labels.

model.compile(optimizer='adadelta', loss=[custom_loss1(input_layer), custom_loss2])
model.fit(x_train, y_train)

Inside the custom loss functions:

def custom_loss1(input):
    # closure: captures the input tensor so the inner loss can compare against it
    def loss1(y_true, y_pred):
        return tf.norm(input - y_pred)  # use your custom loss 1 here
    return loss1

def custom_loss2(y_true, y_pred):
    return categorical_crossentropy(y_true, y_pred)  # use your custom loss 2 here

Try this with simple built-in Keras loss functions first. If that works well, look into your custom loss functions.
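
For example (a minimal sketch, reusing the two-output fit call from the question):

model.compile(optimizer='adadelta',
              loss=['mse', 'categorical_crossentropy'])
model.fit(x_train, [x_train, y_train], batch_size=32, epochs=3,
          validation_data=(x_val, [x_val, y_val]))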

answered by Anakin