I'm using Keras with TensorFlow as the backend; here is my code:
import numpy as np
np.random.seed(1373)
import tensorflow as tf
tf.python.control_flow_ops = tf
import os
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
batch_size = 128
nb_classes = 10
nb_epoch = 12
img_rows, img_cols = 28, 28
nb_filters = 32
nb_pool = 2
nb_conv = 3
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape[0])
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
border_mode='valid',
input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
and the traceback error:
Using TensorFlow backend.
60000
('X_train shape:', (60000, 1, 28, 28))
(60000, 'train samples')
(10000, 'test samples')
Traceback (most recent call last):
File "mnist.py", line 154, in <module>
input_shape=(1, img_rows, img_cols)))
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 276, in add
layer.create_input_layer(batch_input_shape, input_dtype)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 370, in create_input_layer
self(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 514, in __call__
self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in add_inbound_node
Node.create_node(self, inbound_layers, node_indices, tensor_indices)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 149, in create_node
output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 466, in call
filter_shape=self.W_shape)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 1579, in conv2d
x = tf.nn.conv2d(x, kernel, strides, padding=padding)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 396, in conv2d
data_format=data_format, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2242, in create_op
set_shapes_for_outputs(ret)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs
shapes = shape_func(op)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring
return call_cpp_shape_fn(op, require_shape_fn=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
debug_python_shape_fn, require_shape_fn)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 3 from 1 for 'Conv2D' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].
First I saw some answers saying the problem is the TensorFlow version, so I upgraded TensorFlow to 0.12.0, but the error still occurs. Is the problem with the network, or am I missing something? What should input_shape look like?
Update
Here is ./keras/keras.json:
{
"image_dim_ordering": "tf",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "tensorflow"
}
Your issue comes from the image_dim_ordering setting in keras.json.
From Keras Image Processing doc:
dim_ordering: One of {"th", "tf"}. "tf" mode means that the images should have shape (samples, height, width, channels), "th" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
Keras maps the convolution operation onto the chosen backend (Theano or TensorFlow). However, the two backends made different choices for the ordering of the dimensions. If your image batch is N images of size H×W with C channels, Theano uses the NCHW ordering while TensorFlow uses the NHWC ordering.
Keras allows you to choose which ordering you prefer and does the conversion to the chosen backend behind the scenes. But if you set image_dim_ordering="th" it expects Theano-style ordering (NCHW, the one you have in your code), and if you set image_dim_ordering="tf" it expects TensorFlow-style ordering (NHWC).
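To make the two orderings concrete, here is a tiny NumPy sketch (with a dummy batch, not your MNIST data) showing that converting between them is just a matter of moving the channel axis:

import numpy as np

# Dummy batch: 4 single-channel 28x28 images in Theano-style NCHW order
batch_nchw = np.zeros((4, 1, 28, 28), dtype='float32')

# Move the channel axis to the end to get TensorFlow-style NHWC order
batch_nhwc = np.transpose(batch_nchw, (0, 2, 3, 1))

print(batch_nchw.shape)  # (4, 1, 28, 28)
print(batch_nhwc.shape)  # (4, 28, 28, 1)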
Since your image_dim_ordering is set to "tf", reshaping your data to the TensorFlow style should work:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
and
input_shape=(img_rows, img_cols, 1)
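Put together, a sketch of the data preparation and the first layer from your code with channels-last ordering (same variable names, Keras 1 API) would look like this:

# Channels-last (NHWC) preparation of the MNIST arrays
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1).astype('float32') / 255
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1).astype('float32') / 255

model = Sequential()
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(img_rows, img_cols, 1)))  # channels last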
FWIW, I got this error repeatedly with some values of strides or kernel_size but not all, with the backend and image ordering already set to TensorFlow's, and they all disappeared when I added padding="same".
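For reference, in the Keras 1 API used in the question that option is spelled border_mode='same' (padding='same' in Keras 2); 'same' padding keeps the spatial dimensions instead of shrinking them, so the kernel can never subtract past a small axis:

# 'same' padding preserves the 28x28 spatial size, so a 3x3 kernel
# cannot produce a negative output dimension
model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='same',   # Keras 2: Conv2D(..., padding='same')
                        input_shape=(img_rows, img_cols, 1)))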
Alternatively, just add this:
from keras import backend as K
K.set_image_dim_ordering('th')
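Note that this call has to run before any layers are constructed; placed at the top of the script it makes Keras interpret the original (1, img_rows, img_cols) input shape as channels-first:

from keras import backend as K

# Switch Keras to Theano-style NCHW ordering before building the model,
# so input_shape=(1, img_rows, img_cols) is interpreted as channels-first
K.set_image_dim_ordering('th')
print(K.image_dim_ordering())  # 'th'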