I have been trying to get a simple CNN to train for the past 3 days.
First, I set up an input pipeline/queue configuration that reads images from a directory tree and prepares batches.
I got the code for this at this link. So I now have train_image_batch and train_label_batch that I need to feed to my CNN (a simplified sketch of the upstream pipeline follows the batching call below).
train_image_batch, train_label_batch = tf.train.batch(
    [train_image, train_label],
    batch_size=BATCH_SIZE
    # ,num_threads=1
)
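For reference, the upstream part of the pipeline that produces train_image and train_label looks roughly like this. It is only a simplified sketch of the linked code, so names such as image_path_list and label_list are placeholders rather than the actual variables:
import tensorflow as tf
# image_path_list and label_list are Python lists built by walking the directory tree.
image_paths = tf.convert_to_tensor(image_path_list, dtype=tf.string)
labels = tf.convert_to_tensor(label_list, dtype=tf.int32)
input_queue = tf.train.slice_input_producer([image_paths, labels], shuffle=True)
file_content = tf.read_file(input_queue[0])
train_image = tf.image.decode_jpeg(file_content, channels=NUM_CHANNELS)
# Crop/pad keeps the original dtype, so train_image is still uint8 at this point.
train_image = tf.image.resize_image_with_crop_or_pad(train_image, IMAGE_HEIGHT, IMAGE_WIDTH)
train_label = input_queue[1]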
I am unable to figure out how to feed these batches to the CNN. I am using the code for the CNN given at this link.
# Input Layer
input_layer = tf.reshape(train_image_batch, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
# Convolutional Layer #1
conv1 = new_conv_layer(input_layer, NUM_CHANNELS, 5, 32, 2)
# Pooling Layer #1
pool1 = new_pooling_layer(conv1, 2, 2)
Printing input_layer shows this:
Tensor("Reshape:0", shape=(5, 120, 120, 3), dtype=uint8)
The next line, conv1 = new_conv_layer(...), crashes with a TypeError. The body of the new_conv_layer function is given below:
def new_conv_layer(input,              # The previous layer.
                   num_input_channels, # Num. channels in prev. layer.
                   filter_size,        # Width and height of each filter.
                   num_filters,        # Number of filters.
                   stride):
    # Shape of the filter-weights for the convolution.
    # This format is determined by the TensorFlow API.
    shape = [filter_size, filter_size, num_input_channels, num_filters]
    # Create new weights, aka. filters, with the given shape.
    weights = tf.Variable(tf.truncated_normal(shape, stddev=0.05))
    # Create new biases, one for each filter.
    biases = tf.Variable(tf.constant(0.05, shape=[num_filters]))
    # Create the TensorFlow operation for convolution.
    # The first and last stride must always be 1,
    # because the first is for the image-number and
    # the last is for the input-channel.
    # E.g. strides=[1, 2, 2, 1] means that the filter
    # is moved 2 pixels across the x- and y-axis of the image.
    # The padding is set to 'SAME', which means the input image
    # is padded with zeroes so the size of the output is the same.
    layer = tf.nn.conv2d(input=input,
                         filter=weights,
                         strides=[1, stride, stride, 1],
                         padding='SAME')
    # Add the biases to the results of the convolution.
    # A bias-value is added to each filter-channel.
    layer += biases
    # Rectified Linear Unit (ReLU).
    # It calculates max(x, 0) for each input pixel x.
    # This adds some non-linearity to the formula and allows us
    # to learn more complicated functions.
    layer = tf.nn.relu(layer)
    # Note that ReLU is normally executed before the pooling,
    # but since relu(max_pool(x)) == max_pool(relu(x)) we can
    # save 75% of the relu-operations by max-pooling first.
    # We return both the resulting layer and the filter-weights
    # because we will plot the weights later.
    return layer, weights
More precisely, it crashes at the tf.nn.conv2d call with this error:
TypeError: Value passed to parameter 'input' has DataType uint8 not in list of allowed values: float16, float32
The image from your input pipeline is of type uint8; you need to cast it to float32. You can do this right after the JPEG decoding step:
image = tf.image.decode_jpeg(...)
image = tf.cast(image, tf.float32)
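If you would rather not touch the decoding code, the cast can also go on the batch itself, just before it is reshaped and fed into the network. This is only a sketch against the variable names already shown in the question (train_image_batch, IMAGE_HEIGHT, and so on), with the rest of the pipeline left as posted:
# Cast the uint8 batch to float32 so that tf.nn.conv2d receives an allowed dtype.
input_layer = tf.cast(train_image_batch, tf.float32)
input_layer = tf.reshape(input_layer, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, NUM_CHANNELS])
# new_conv_layer returns both the layer and its weights, so unpack the tuple here.
conv1, conv1_weights = new_conv_layer(input_layer, NUM_CHANNELS, 5, 32, 2)
Casting right after decoding is usually the cleaner option, since every consumer of the image tensor then sees float32, but either placement resolves the TypeError above.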
You need to cast your image from int to float. You can simply do so for your input images:
image = image.astype('float')
It works fine for me.
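Note that .astype works on NumPy arrays, not on tensors inside the TensorFlow graph, so this variant applies when you load the images yourself and feed them through a placeholder. The names below (images_np, x) are illustrative, and the shape just mirrors the (5, 120, 120, 3) batch printed in the question:
import numpy as np
import tensorflow as tf
# Hypothetical: a batch of images loaded outside the graph as a uint8 NumPy array.
images_np = np.zeros((5, 120, 120, 3), dtype=np.uint8)
# Convert to float32 before feeding it into the network.
images_np = images_np.astype('float32')
x = tf.placeholder(tf.float32, [None, 120, 120, 3])
# Build the CNN on top of x and feed the array via feed_dict={x: images_np}.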