Shouldn't a 1D CNN with stride = 1 and 1 filter have output length equal to input length without the need for padding?
I thought this was the case, but when I created a Keras model with these specifications, it reports an output shape of (17902, 1) for an input shape of (17910, 1). I'm wondering why the length has been reduced, since the stride is 1 and it's a 1D convolution.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv1D(filters=1, kernel_size=9, strides=1, activation=tf.nn.relu, input_shape=X_train[0].shape)
])
I expected the output shape of this model to be (17910, 1), but clearly I'm missing the source of the reduction in this convolutional layer.
The length of the output depends on the length of the input and on your kernel size. Without padding, a kernel of size 9 can only be placed at input_length - kernel_size + 1 = 17910 - 9 + 1 = 17902 positions along the input, so you get 17902 convolution outputs and an output shape of (17902, 1).
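If it helps to check this numerically, here is a minimal sketch (assuming TensorFlow 2.x; the input length 17910 is taken from your example) showing that padding="valid" (the default) shortens the output, while padding="same" keeps the input length:

import tensorflow as tf
from tensorflow.keras import layers

input_length = 17910  # length from the question

# Default padding="valid": output length = input_length - kernel_size + 1
valid_model = tf.keras.Sequential([
    layers.Conv1D(filters=1, kernel_size=9, strides=1, padding="valid",
                  activation="relu", input_shape=(input_length, 1))
])
print(valid_model.output_shape)  # (None, 17902, 1) = 17910 - 9 + 1

# padding="same": zero-padding at the borders keeps the input length
same_model = tf.keras.Sequential([
    layers.Conv1D(filters=1, kernel_size=9, strides=1, padding="same",
                  activation="relu", input_shape=(input_length, 1))
])
print(same_model.output_shape)  # (None, 17910, 1)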
For better understanding (illustrations in the source linked below):

Without padding: the kernel only slides over positions where it fits entirely inside the input, so the output is shorter than the input.

With padding: zeros are added at the borders so the kernel can be centered on every input position, and the output keeps the input length.
Whether you should use padding is more a question of accuracy. As Ian Goodfellow, Yoshua Bengio, and Aaron Courville found in their Deep Learning book, the optimal amount of padding (at least for 2D images) lies somewhere between "none" and "same".
So my suggestion would be to train two CNNs that have the same architecture except for the padding and keep the one that gives the better accuracy; a rough sketch of this comparison follows the source link below.
(Source: https://www.slideshare.net/xavigiro/recurrent-neural-networks-2-d2l3-deep-learning-for-speech-and-language-upc-2017)
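As a sketch of that comparison (X_train, y_train, X_val, y_val stand in for your own data, and the pooling/dense head is only an assumption so the models can be trained end to end):

import tensorflow as tf
from tensorflow.keras import layers

def build_model(padding):
    # Same architecture for both candidates; only the padding differs.
    return tf.keras.Sequential([
        layers.Conv1D(filters=1, kernel_size=9, strides=1, padding=padding,
                      activation="relu", input_shape=X_train[0].shape),
        layers.GlobalAveragePooling1D(),
        layers.Dense(1),
    ])

results = {}
for padding in ("valid", "same"):
    model = build_model(padding)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.fit(X_train, y_train, epochs=10, verbose=0)
    results[padding] = model.evaluate(X_val, y_val, verbose=0)

print(results)  # keep the padding variant with the better validation metric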