I've trained a DCGAN model and would now like to load it into a library that visualizes the drivers of neuron activation through image space optimization.
The following code works, but it forces me to work with (1, width, height, channels) images in all subsequent image analysis, which is a pain (the library makes assumptions about the shape of the network input).
# creating TensorFlow session and loading the model
import numpy as np
import tensorflow as tf

graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
new_saver = tf.train.import_meta_graph(model_fn)
new_saver.restore(sess, './')
I'd like to change the input_map. After reading the source, I expected this code to work:
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
t_input = tf.placeholder(np.float32, name='images') # define the input tensor
t_preprocessed = tf.expand_dims(t_input, 0)
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input})
new_saver.restore(sess, './')
But got an error:
ValueError: tf.import_graph_def() requires a non-empty name if input_map is used.
When the stack gets down to tf.import_graph_def(), the name field is set to import_scope, so I tried the following:
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
t_input = tf.placeholder(np.float32, name='images') # define the input tensor
t_preprocessed = tf.expand_dims(t_input, 0)
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input}, import_scope='import')
new_saver.restore(sess, './')
Which netted me the following KeyError:
KeyError: "The name 'gradients/discriminator/minibatch/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayReadV3/RefEnter:0' refers to a Tensor which does not exist. The operation, 'gradients/discriminator/minibatch/map/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayReadV3/RefEnter', does not exist in the graph."
If I set 'import_scope', I get the same error whether or not I set 'input_map'.
I'm not sure where to go from here.
So, the main issue is that you're not using the syntax correctly. Check the documentation for tf.import_graph_def for how input_map is used (link).
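For reference, here is a minimal sketch of the pattern those docs describe: the keys in input_map name tensors inside the imported graph, and the values are tensors in the current graph that replace them. The file name 'frozen_model.pb' and the tensor name 'original_input:0' below are hypothetical, not taken from your model.

import tensorflow as tf

# Load a serialized GraphDef from disk (hypothetical path).
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default():
    # New input tensor in the current graph.
    t_new_input = tf.placeholder(tf.float32, name='new_input')
    # Key = tensor name in the *imported* graph; value = tensor in *this* graph.
    tf.import_graph_def(graph_def,
                        input_map={'original_input:0': t_new_input},
                        name='import')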
Let's break down this line:
new_saver = tf.train.import_meta_graph(model_fn, input_map={'images': t_input}, import_scope='import')
You didn't outline what model_fn is, but it needs to be a path to the file.
For the next part, in input_map, you're saying: replace the input in the original graph (the DCGAN) whose name is images with my variable (in the current graph) called t_input. The problem is that t_input and images refer to the same object in different ways, as per this line:
t_input = tf.placeholder(np.float32, name='images')
In other words, images in input_map should actually be the name of whatever variable you're trying to replace in the DCGAN graph. You'll have to import the graph in its base form (i.e., without the input_map argument) and figure out the name of the variable you want to link to. It'll be in the list returned by tf.get_collection('variables') after you have imported the graph. Look for the dimensions (1, width, height, channels), but with concrete values in place of width, height, and channels. If it's a placeholder, it'll look something like scope/Placeholder:0, where scope is replaced with whatever the variable's scope is.
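To illustrate that inspection step, here is a rough sketch (the names and shapes printed will obviously differ for your model): import the meta graph with no input_map, then list the variables and placeholder ops to find the tensor you actually need to map.

import tensorflow as tf

# model_fn is the path to the .meta file, as in the question.
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
new_saver = tf.train.import_meta_graph(model_fn)
new_saver.restore(sess, './')

# Variables in the imported graph, as suggested above ...
for v in tf.get_collection('variables'):
    print(v.name, v.get_shape())

# ... and Placeholder ops, since the input is often a placeholder.
for op in graph.get_operations():
    if op.type == 'Placeholder':
        print(op.name, op.outputs[0].get_shape())

Once you've found the real name (something like generator/images:0, purely hypothetical), that string, not 'images', is what goes on the key side of input_map.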
Word of caution:
TensorFlow is very finicky about what it expects graphs to look like. So, if the width, height, and channels are explicitly specified in the original graph, then TensorFlow will complain (throw an error) when you try to connect a placeholder with a different set of dimensions. And this makes sense: if the system was trained with some set of dimensions, it only knows how to generate images with those dimensions.
In theory, you can still stick all kinds of weird stuff on the front of that network, but you will need to scale it down to those dimensions first (and the TensorFlow documentation says it's better to do that on the CPU, outside of the graph, i.e. before feeding it in with feed_dict).
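As a hedged example of that pre-scaling step (the file name and the 64x64 target size are made up; use whatever dimensions your DCGAN was trained with):

import numpy as np
from PIL import Image

# Resize on the CPU, outside the graph, before feeding via feed_dict.
TARGET_W, TARGET_H = 64, 64  # illustrative only
img = Image.open('some_image.png').convert('RGB').resize((TARGET_W, TARGET_H))
img = np.asarray(img, dtype=np.float32) / 255.0  # shape (TARGET_H, TARGET_W, 3)

# Then feed it to the placeholder you mapped in, e.g.:
# sess.run(output_tensor, feed_dict={t_input: img})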
Hope that helps!