 

Causal padding in Keras

Can someone explain the intuition behind 'causal' padding in Keras? Is there any particular application where it can be used?

The Keras manual mentions this type of padding in connection with dilated convolutions. What exactly does 'dilated' convolution mean?

AKSHAYAA VAIDYANATHAN asked Sep 30 '18 14:09

People also ask

What is causal padding?

Causal padding is a special type of padding that works with one-dimensional convolutional layers. It is used mainly in time-series analysis: since a time series is sequential data, causal padding adds zeros at the start of the sequence, so that the output at each time step depends only on earlier time steps.

What does padding do in keras?

Padding is a parameter that controls the size of the output with respect to the input features.

What kind of padding methods are available in keras?

Keras supports these types of padding: Valid padding, a.k.a. no padding; Same padding, a.k.a. zero padding; Causal padding.

What is padding same in keras?

"same" results in padding with zeros evenly to the left/right or up/down of the input. When padding="same" and strides=1 , the output has the same size as the input.
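To make the difference between the padding modes concrete, here is a minimal sketch (plain Python, not Keras itself) of the output length a stride-1 1-D convolution produces under each mode, assuming an input of length 10 and a kernel of size 3:

```python
def conv1d_length(n, k, padding):
    """Output length of a stride-1 1-D convolution under each padding mode."""
    if padding == "valid":
        return n - k + 1          # no padding: output shrinks by k - 1
    if padding in ("same", "causal"):
        return n                  # k - 1 zeros added: split evenly ("same")
                                  # or all at the front ("causal")
    raise ValueError(padding)

n, k = 10, 3
print(conv1d_length(n, k, "valid"))   # 8
print(conv1d_length(n, k, "same"))    # 10
print(conv1d_length(n, k, "causal"))  # 10
```

Note that "same" and "causal" give the same output length; they differ only in *where* the zeros go, which is what makes causal padding safe for sequential data.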


2 Answers

This is a great, concise explanation of what "causal" padding is:

One thing that Conv1D does allow us to specify is padding="causal". This simply pads the layer's input with zeros in the front so that we can also predict the values of early time steps in the frame:

[figure: causal padding adds zeros at the front of the input sequence]

Dilation just means skipping nodes. Unlike a stride, which tells you where to apply the kernel next, dilation tells you how to spread the kernel out. In a sense, it is equivalent to a stride in the previous layer.

[figure: a dilated convolution applying its kernel to every other input node]

In the image above, if the lower layer had a stride of 2, we would skip (2,3,4,5) and this would have given us the same results.
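The two ideas combine naturally: a causal *dilated* convolution pads `(k - 1) * dilation` zeros at the front and spreads the kernel taps `dilation` steps apart. Here is a minimal NumPy sketch (a hand-rolled loop for clarity, not how Keras implements it), using a kernel of all ones so the output is just a sum of the selected input positions:

```python
import numpy as np

def causal_dilated_conv1d(x, kernel, dilation=1):
    """Causal dilated 1-D convolution: pad (k-1)*dilation zeros at the FRONT
    only, then apply the kernel with gaps of `dilation` between its taps."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    out = np.zeros(len(x))
    for t in range(len(x)):
        # out[t] = sum_i kernel[i] * x[t - (k-1-i)*dilation], i.e. the taps
        # look at t, t - dilation, ..., t - (k-1)*dilation -- never the future
        for i in range(k):
            out[t] += kernel[i] * xp[pad + t - (k - 1 - i) * dilation]
    return out

x = np.arange(1.0, 7.0)                             # [1, 2, 3, 4, 5, 6]
out = causal_dilated_conv1d(x, np.ones(3), dilation=2)
print(out)                                          # [ 1.  2.  4.  6.  9. 12.]
```

With dilation 2, `out[4] = x[0] + x[2] + x[4] = 1 + 3 + 5 = 9`: the kernel reaches back 4 steps while still touching only 3 inputs, which is exactly the "skipping nodes" described above.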

Credit: Kilian Batzner, Convolutions in Autoregressive Neural Networks

Maverick Meerkat answered Oct 22 '22 08:10


It is a convolution type in which the output at time t depends only on the previous time steps (less than or equal to t). Future time steps are not considered when computing the convolution output. See the animation in the WaveNet paper for an illustration.
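This causality property is easy to verify numerically: perturb a "future" input value and check that outputs at earlier time steps do not change. A minimal NumPy sketch (front-padding by hand, not Keras itself):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Output at time t uses only x[:t+1]: pad len(kernel)-1 zeros in front."""
    k = len(kernel)
    xp = np.concatenate([np.zeros(k - 1), x])
    return np.array([xp[t:t + k] @ kernel for t in range(len(x))])

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w = np.array([0.5, 0.3, 0.2])

y1 = causal_conv1d(x, w)
x2 = x.copy()
x2[5] += 100.0            # perturb a "future" time step
y2 = causal_conv1d(x2, w)

# outputs before t = 5 are unaffected by the change at t = 5
print(np.allclose(y1[:5], y2[:5]))  # True
print(np.allclose(y1[5:], y2[5:]))  # False
```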

Uday answered Oct 22 '22 09:10