I was hoping someone could explain the difference (if any) between the Input layer in Keras and a placeholder in TensorFlow.
The more I investigate, the more similar the two appear, but I am not 100% convinced either way so far.
Here is what I have observed in favor of the claim that Input Layers and tf Placeholders are the same:
1) The tensor returned from keras.Input() can be used like a placeholder in the feed_dict of tf.Session's run method. Here is part of a simple example using Keras, which adds two tensors (a and b) and concatenates the result with a third tensor (c):
import numpy as np
import keras as k

model = create_graph()      # create_graph() and rand_array() are helpers defined elsewhere
con_cat = model.output[0]   # concatenation of (a + b) with c
ab_add = model.output[1]    # element-wise sum a + b

# These tensors are used exactly like tf.placeholder() tensors below
mdl_in_a = model.input[0]
mdl_in_b = model.input[1]
mdl_in_c = model.input[2]

sess = k.backend.get_session()

a_in = rand_array()         # 2x2 numpy arrays
b_in = rand_array()
c_in = rand_array()
a_in = np.reshape(a_in, (1, 2, 2))   # add a batch dimension
b_in = np.reshape(b_in, (1, 2, 2))
c_in = np.reshape(c_in, (1, 2, 2))

val_cat, val_add = sess.run([con_cat, ab_add],
                            feed_dict={mdl_in_a: a_in, mdl_in_b: b_in, mdl_in_c: c_in})
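For reference, create_graph() builds a small functional model roughly along these lines (the exact layer choices do not matter for the question):

from keras.layers import Input, Add, Concatenate
from keras.models import Model

def create_graph():
    # Three 2x2 inputs; Input() returns symbolic tensors
    a = Input(shape=(2, 2))
    b = Input(shape=(2, 2))
    c = Input(shape=(2, 2))

    ab_add = Add()([a, b])                        # a + b
    con_cat = Concatenate(axis=-1)([ab_add, c])   # concat (a + b) with c

    # Output order matches model.output[0] / model.output[1] above
    return Model(inputs=[a, b, c], outputs=[con_cat, ab_add])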
2) The TensorFlow contrib docs for the Keras Input layer mention a placeholder in the argument descriptions:
"sparse: A boolean specifying whether the placeholder to be created is sparse"
Here is what I have observed in favor of the claim that Input Layers and tf Placeholders are NOT the same:
1) I have seen people use tf.placeholder() tensors instead of the tensors returned by the Input layer. Something like:
import numpy as np
import tensorflow as tf
import keras as k

# Plain TF placeholders used in place of the tensors returned by Input()
a_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))
b_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))
c_holder = tf.placeholder(tf.float32, shape=(None, 2, 2))

model = create_graph()
# Calling the model on the placeholders rebuilds its graph on top of them
con_cat, ab_add = model([a_holder, b_holder, c_holder])

sess = k.backend.get_session()

a_in = rand_array()         # 2x2 numpy arrays
b_in = rand_array()
c_in = rand_array()
a_in = np.reshape(a_in, (1, 2, 2))
b_in = np.reshape(b_in, (1, 2, 2))
c_in = np.reshape(c_in, (1, 2, 2))

val_cat, val_add = sess.run([con_cat, ab_add],
                            feed_dict={a_holder: a_in, b_holder: b_in, c_holder: c_in})
Input() returns a handle to the placeholder it creates and does not add any other TF operators to the graph; a Tensor can stand for either the output of an operation or a placeholder, so there is no contradiction.
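Assuming Keras 2.x on the TF 1.x backend, you can confirm this from Python before even opening TensorBoard:

import tensorflow as tf
from keras.layers import Input

x = Input(shape=(2, 2))

print(type(x))    # tensorflow.python.framework.ops.Tensor -- the same class tf.placeholder() returns
print(x.op.type)  # 'Placeholder' -- the op backing the tensor

# In a fresh graph this should list nothing but that single Placeholder op
print([op.type for op in tf.get_default_graph().get_operations()])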
To analyse what exactly is created by Input(), let's run the following code:
with tf.name_scope("INPUT_LAYER"):
    input_l = Input(shape=[n_features])
Then:
writer = tf.summary.FileWriter('./my_graph', tf.get_default_graph())
writer.close()
Then launch TensorBoard from your console:
tensorboard --logdir="./my_graph"
Look at the results: in the graph view, the INPUT_LAYER name scope should contain a single Placeholder node and nothing else.
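So at the graph level they really are the same thing: Input() hands you an ordinary tf.Tensor backed by a Placeholder op. What Input() adds on top is Keras-side bookkeeping (e.g. the _keras_history attribute) that lets Keras layers and Model track the tensor, which a raw tf.placeholder does not carry. Assuming Keras 2.x on the TF 1.x backend, keras.backend.is_keras_tensor shows that distinction:

import tensorflow as tf
from keras import backend as K
from keras.layers import Input

raw_ph = tf.placeholder(tf.float32, shape=(None, 2, 2))  # plain TF placeholder
keras_in = Input(shape=(2, 2))                           # placeholder created via Input()

print(K.is_keras_tensor(raw_ph))    # False -- no Keras layer metadata attached
print(K.is_keras_tensor(keras_in))  # True  -- Input() attached _keras_history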