
Basic 1d convolution in tensorflow


OK, I'd like to do a 1-dimensional convolution of time-series data in TensorFlow. This is apparently supported using tf.nn.conv2d, according to these tickets and the manual; the only requirement is to set strides=[1,1,1,1]. Sounds simple!

However, I cannot work out how to do this in even a very minimal test case. What am I doing wrong?

Let's set this up.

import tensorflow as tf
import numpy as np
print(tf.__version__)
>>> 0.9.0

OK, now generate a basic convolution test on two small arrays. I will make it easy by using a batch size of 1, and since time series are 1-dimensional, I will have an "image height" of 1. And since it's a univariate time series, clearly the number of "channels" is also 1, so this will be simple, right?

g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1,1,-1,1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1,-1,1,1), name="phi")
    conv = tf.nn.conv2d(
        phi,
        x,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")

BOOM. Error.

ValueError: Dimensions 1 and 5 are not compatible 

OK. For a start, I don't understand how this can happen for any dimension, since I've specified padding in the convolution op.

But fine, maybe there are limits to that. I must have misread the documentation and set up this convolution on the wrong axes of the tensor. I'll try all possible permutations:

for i in range(4):
    for j in range(4):
        shape1 = [1,1,1,1]
        shape1[i] = -1
        shape2 = [1,1,1,1]
        shape2[j] = -1
        x_array = np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(*shape1)
        phi_array = np.array([0.0, 0.5, 1.0]).reshape(*shape2)
        try:
            g = tf.Graph()
            with g.as_default():
                x = tf.Variable(x_array, name="x")
                phi = tf.Variable(phi_array, name="phi")
                conv = tf.nn.conv2d(
                    x,
                    phi,
                    strides=[1, 1, 1, 1],
                    padding="SAME",
                    name="conv")
                init_op = tf.initialize_all_variables()
            sess = tf.Session(graph=g)
            sess.run(init_op)
            print("SUCCEEDED!", x_array.shape, phi_array.shape, conv.eval(session=sess))
            sess.close()
        except Exception as e:
            print("FAILED!", x_array.shape, phi_array.shape, type(e), e.args or e._message)

Result:

FAILED! (5, 1, 1, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (5, 1, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (3, 1, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (5, 1)',)
FAILED! (1, 5, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 5, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 5)',)
FAILED! (1, 1, 5, 1) (1, 3, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 1, 5, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
     [[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 1, 5) (3, 1, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 3, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 5 and 3 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 1, 3) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)

Hmm. OK, it looks like there are two kinds of problem now. First, the ValueError is presumably about applying the filter along the wrong axis, although it comes in two forms.

But then the axes along which I can apply the filter are confusing too - notice that it actually constructs the graph with input shape (5, 1, 1, 1) and filter shape (1, 1, 1, 3). AFAICT from the documentation, this should be a filter that looks at one example from the batch, one "pixel" and one "channel", and outputs 3 "channels". Why does that one work, then, when the others do not?

Anyway, sometimes graph construction does not fail at all: the graph is built, and then we get the tensorflow.python.framework.errors.InvalidArgumentError at run time. From some confusing GitHub tickets I gather this is probably because I'm running on CPU instead of GPU, or alternatively because the convolution op is only defined for 32-bit floats, not 64-bit floats. If anyone could throw some light on which axes I should be aligning what on, in order to convolve a time series with a kernel, I'd be very grateful.
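For what it's worth, if the float32 theory is right, then building the variables from 32-bit arrays should at least sidestep the "No OpKernel" error. A minimal sketch of just that cast (not of the axis question), assuming the CPU kernel really is registered only for float32:

import numpy as np

# Hypothetical workaround: build the arrays as float32 so the Conv2D op
# never sees DT_DOUBLE inputs.
x_array = np.array([0.0, 0.0, 0.0, 0.0, 1.0], dtype=np.float32).reshape(1, 1, -1, 1)
phi_array = np.array([0.0, 0.5, 1.0], dtype=np.float32).reshape(1, -1, 1, 1)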

asked Jun 30 '16 by dan mackinlay


People also ask

What does a 1D convolutional layer do?

A 1D CNN can perform activity recognition from accelerometer data, such as whether the person is standing, walking, or jumping. This data has 2 dimensions: the first is the time steps and the other is the acceleration values in 3 axes.

What is a convolution TensorFlow?

In this post I attempt to summarize the course on Convolutional Neural Networks in TensorFlow by Deeplearning.ai. A Convolutional Neural Network, or ConvNet, is a special type of neural network used to analyze and process images. It derives its name from the 'convolutional' layers it employs as filters.

What is the difference between Conv1D and Conv2D?

With Conv1D, only one dimension is used, so the convolution operates along the first axis (size 68). With Conv2D, two dimensions are used, so the convolution operates along the two axes defining the data (size (68, 2)).


2 Answers

I'm sorry to say it, but your first code was almost right. You just swapped x and phi in tf.nn.conv2d:

g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name="phi")
    conv = tf.nn.conv2d(
        x,
        phi,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")
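To actually run it and look at the result, something along these lines should work. This is just a sketch: I cast the arrays to float32, since (as noted in the question) the Conv2D kernel may only be registered for 32-bit floats on CPU, and conv2d computes a cross-correlation (the kernel is not flipped), so the output should come out as roughly [0., 0., 0., 1., 0.5]:

import numpy as np
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]"
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0], dtype=np.float32).reshape(1, 1, 5, 1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0], dtype=np.float32).reshape(1, 3, 1, 1), name="phi")
    conv = tf.nn.conv2d(x, phi, strides=[1, 1, 1, 1], padding="SAME", name="conv")
    init_op = tf.initialize_all_variables()

with tf.Session(graph=g) as sess:
    sess.run(init_op)
    print(sess.run(conv).flatten())  # expected: something like [0. 0. 0. 1. 0.5]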

Update: TensorFlow has supported 1D convolution since version r0.11, via tf.nn.conv1d. I previously wrote a guide to using it in the Stack Overflow Documentation project (now defunct), which I'm pasting here:


Guide to 1D convolution

Consider a basic example with an input of length 10, and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16].

batch_size = 32
x = tf.placeholder(tf.float32, [batch_size, 10, 16])

We then create a filter of width 3 that takes 16 channels as input and also outputs 16 channels.

filter = tf.zeros([3, 16, 16])  # these should be real values, not 0 

Finally we apply tf.nn.conv1d with a stride and a padding:

- stride: an integer s
- padding: this works as in 2D; you can choose between SAME and VALID. SAME will output the same length as the input, while VALID will not add zero padding.

For our example we take a stride of 2 and VALID padding.

output = tf.nn.conv1d(x, filter, stride=2, padding="VALID") 

The output shape should be [batch_size, 4, 16].
With padding="SAME", we would have had an output shape of [batch_size, 5, 16].
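As a quick sanity check with dummy values (a sketch; the lengths follow from ceil((10 - 3 + 1) / 2) = 4 for VALID and ceil(10 / 2) = 5 for SAME):

import tensorflow as tf

batch_size = 32
x = tf.placeholder(tf.float32, [batch_size, 10, 16])
filters = tf.zeros([3, 16, 16])  # same zero filter as above, just to check shapes

valid = tf.nn.conv1d(x, filters, stride=2, padding="VALID")
same = tf.nn.conv1d(x, filters, stride=2, padding="SAME")
print(valid.get_shape())  # expected (32, 4, 16)
print(same.get_shape())   # expected (32, 5, 16)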

answered Sep 19 '22 by Olivier Moindrot


In newer versions of TF (starting from 0.11) you have conv1d, so there is no need to use a 2D convolution to do a 1D convolution. Here is a simple example of how to use conv1d:

import tensorflow as tf

i = tf.constant([1, 0, 2, 3, 0, 1, 1], dtype=tf.float32, name='i')
k = tf.constant([2, 1, 3], dtype=tf.float32, name='k')

data   = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')

res = tf.squeeze(tf.nn.conv1d(data, kernel, stride=1, padding='VALID'))
with tf.Session() as sess:
    print(sess.run(res))

To understand how conv1d is calculated, take a look at various examples.
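For instance, here is a rough by-hand check of the example above (a sketch assuming conv1d, like conv2d, computes a cross-correlation, i.e. the kernel is not flipped):

import numpy as np

i = np.array([1, 0, 2, 3, 0, 1, 1], dtype=np.float32)
k = np.array([2, 1, 3], dtype=np.float32)

# VALID padding, stride 1: slide the (unflipped) kernel along the signal.
out = [np.dot(i[j:j + 3], k) for j in range(len(i) - 3 + 1)]
print(out)  # expected: something like [8.0, 11.0, 7.0, 9.0, 4.0]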

answered Sep 18 '22 by Salvador Dali