My model is defined as follows:
from keras.models import Sequential
from keras.layers import Cropping2D, Lambda, Conv2D, ELU, Flatten, Dense

def build(data):
    model = Sequential()
    model.add(Cropping2D(cropping=((79, 145), (50, 250)), input_shape=(160, 320, 3)))
    model.add(Lambda(lambda x: x / 127.5 - 1.0))
    model.add(Conv2D(24, (2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(36, (2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(48, (2, 2), padding='same'))
    model.add(ELU())
    # Add a flatten layer
    model.add(Flatten())
    model.summary()
    model.add(Dense(100))
    model.add(ELU())
    model.add(Dense(50))
    model.add(ELU())
    model.add(Dense(10))
    model.add(ELU())
    model.add(Dense(1))
    return model
Getting this error:
ValueError: The last dimension of the inputs to Dense should be defined. Found None.
I ran model.summary() and got the following output:
Layer (type) Output Shape Param #
=================================================================
cropping2d_15 (Cropping2D) (None, 0, 20, 3) 0
_________________________________________________________________
lambda_23 (Lambda) (None, 0, 20, 3) 0
_________________________________________________________________
conv2d_47 (Conv2D) (None, 0, 20, 24) 312
_________________________________________________________________
elu_43 (ELU) (None, 0, 20, 24) 0
_________________________________________________________________
conv2d_48 (Conv2D) (None, 0, 20, 36) 3492
_________________________________________________________________
elu_44 (ELU) (None, 0, 20, 36) 0
_________________________________________________________________
conv2d_49 (Conv2D) (None, 0, 20, 48) 6960
_________________________________________________________________
elu_45 (ELU) (None, 0, 20, 48) 0
_________________________________________________________________
flatten_12 (Flatten) (None, None) 0
=================================================================
Total params: 10,764
Trainable params: 10,764
Non-trainable params: 0
I am fairly new to Python; any input will be appreciated.
You are cropping your input image too much. The cropping argument is interpreted as follows:
If tuple of 2 tuples of 2 ints: interpreted as ((top_crop, bottom_crop), (left_crop, right_crop))
Consider the following example from the Keras docs:
# Crop the input 2D images or feature maps
model = Sequential()
model.add(Cropping2D(cropping=((2, 2), (4, 4)),
                     input_shape=(28, 28, 3)))
# now model.output_shape == (None, 24, 20, 3)
In your code, you crop 79 pixels from the top and 145 pixels from the bottom, but your images are only 160 pixels tall: 79 + 145 = 224 rows removed leaves a cropped height of 0. That is why the summary reports an output shape of (None, 0, 20, 3), and why Flatten ends up with an undefined (None) last dimension, which Dense then rejects. With less cropping, your code runs fine, e.g.:
model.add(Cropping2D(cropping=((10, 10), (10, 10)), input_shape=(160,320,3)))
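For completeness, here is a minimal sketch of the whole model with that lighter cropping. It is an illustration rather than your exact pipeline: it assumes standalone Keras imports (swap in tensorflow.keras if that is what you use) and keeps your layer sizes unchanged.

from keras.models import Sequential
from keras.layers import Cropping2D, Lambda, Conv2D, ELU, Flatten, Dense

def build(data):
    model = Sequential()
    # 160 - 10 - 10 = 140 rows and 320 - 10 - 10 = 300 columns survive the crop
    model.add(Cropping2D(cropping=((10, 10), (10, 10)), input_shape=(160, 320, 3)))
    model.add(Lambda(lambda x: x / 127.5 - 1.0))
    model.add(Conv2D(24, (2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(36, (2, 2), padding='same'))
    model.add(ELU())
    model.add(Conv2D(48, (2, 2), padding='same'))
    model.add(ELU())
    # 'same' padding with stride 1 keeps the 140x300 spatial size, so
    # Flatten now yields a defined length of 140 * 300 * 48 = 2,016,000
    model.add(Flatten())
    model.add(Dense(100))
    model.add(ELU())
    model.add(Dense(50))
    model.add(ELU())
    model.add(Dense(10))
    model.add(ELU())
    model.add(Dense(1))
    return model

With a positive cropped height, Flatten reports a concrete shape instead of (None, None), and the Dense layer can build its weight matrix.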