I have created a variable scope in one part of my graph, and later in another part of the graph I want to add ops to the existing scope. Distilled, that equates to this example:
import tensorflow as tf

with tf.variable_scope('myscope'):
    tf.Variable(1.0, name='var1')

with tf.variable_scope('myscope', reuse=True):
    tf.Variable(2.0, name='var2')

print([n.name for n in tf.get_default_graph().as_graph_def().node])
Which yields:
['myscope/var1/initial_value',
'myscope/var1',
'myscope/var1/Assign',
'myscope/var1/read',
'myscope_1/var2/initial_value',
'myscope_1/var2',
'myscope_1/var2/Assign',
'myscope_1/var2/read']
My desired result is:
['myscope/var1/initial_value',
'myscope/var1',
'myscope/var1/Assign',
'myscope/var1/read',
'myscope/var2/initial_value',
'myscope/var2',
'myscope/var2/Assign',
'myscope/var2/read']
I saw this question which didn't seem to have an answer that addressed the question directly: TensorFlow, how to reuse a variable scope name
The `reuse` argument of `tf.variable_scope(scope_name, reuse=...)` accepts three values. `None` (the default): the scope inherits the reuse mode of its parent variable scope, so if the parent can reuse variables, this scope can too. `True`: the scope and its sub-scopes are put into reuse mode (sub-scopes that leave `reuse=None` inherit it).
`tf.variable_scope()` creates a context manager for TensorFlow variables; you can use it to share variables or to create variables with the same short name in different scopes.
`reuse=tf.AUTO_REUSE` is often used in TensorFlow applications: `tf.get_variable` then creates a new variable if none exists and returns the existing one otherwise. For example, two `tf.get_variable('w', ...)` calls in the same `AUTO_REUSE` scope return the same variable. In general, `tf.AUTO_REUSE` is the safest choice, since it avoids many errors about already-existing or missing variables.
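A minimal sketch of the `tf.AUTO_REUSE` behavior described above. It is written against the TF 1.x API; the `tensorflow.compat.v1` import lets the same calls run on TF 2 as well (the scope name `shared` is just for illustration):

```python
import tensorflow.compat.v1 as tf  # TF 1.x API; works on TF 2 via the compat layer

tf.disable_eager_execution()  # graph mode, as in TF 1.x

with tf.variable_scope('shared', reuse=tf.AUTO_REUSE):
    w1 = tf.get_variable('w', shape=[2])  # 'w' does not exist yet -> created

with tf.variable_scope('shared', reuse=tf.AUTO_REUSE):
    w2 = tf.get_variable('w', shape=[2])  # 'w' exists -> the same variable is returned

print(w1 is w2)  # True
print(w1.name)   # shared/w:0
```

With plain `reuse=True` instead of `tf.AUTO_REUSE`, the first `tf.get_variable` call would raise a `ValueError`, because there would be no existing variable to reuse yet.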
Here is one straightforward way to do this: capture the scope object with `as somename` in the context manager. Through the `somename.original_name_scope` property you can then re-enter exactly that scope and add more variables to it. Below is an illustration:
In [6]: with tf.variable_scope('myscope') as ms1:
   ...:     tf.Variable(1.0, name='var1')
   ...:
   ...: with tf.variable_scope(ms1.original_name_scope) as ms2:
   ...:     tf.Variable(2.0, name='var2')
   ...:
   ...: print([n.name for n in tf.get_default_graph().as_graph_def().node])
   ...:
['myscope/var1/initial_value',
'myscope/var1',
'myscope/var1/Assign',
'myscope/var1/read',
'myscope/var2/initial_value',
'myscope/var2',
'myscope/var2/Assign',
'myscope/var2/read']
Remark:
Note that setting reuse=True here is optional; you get the same node names whether or not you pass it.
Another way (thanks to the OP himself!) is to append a trailing `/` to the variable scope name when reusing it, as in the following example:
In [13]: with tf.variable_scope('myscope'):
    ...:     tf.Variable(1.0, name='var1')
    ...:
    ...: # reuse variable scope by appending `/` to the target variable scope
    ...: with tf.variable_scope('myscope/', reuse=True):
    ...:     tf.Variable(2.0, name='var2')
    ...:
    ...: print([n.name for n in tf.get_default_graph().as_graph_def().node])
    ...:
['myscope/var1/initial_value',
'myscope/var1',
'myscope/var1/Assign',
'myscope/var1/read',
'myscope/var2/initial_value',
'myscope/var2',
'myscope/var2/Assign',
'myscope/var2/read']
Remark:
Again, setting reuse=True is optional; you get the same node names whether or not you pass it.
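The reason `reuse=True` is a no-op in these examples is that `tf.Variable` always creates a fresh variable; the `reuse` flag only affects `tf.get_variable`. A small sketch (TF 1.x API via `tensorflow.compat.v1`; the scope name `demo` is just for illustration):

```python
import tensorflow.compat.v1 as tf  # TF 1.x API; works on TF 2 via the compat layer

tf.disable_eager_execution()

with tf.variable_scope('demo', reuse=True):
    a = tf.Variable(1.0, name='v')
    b = tf.Variable(2.0, name='v')  # not reused: a second, uniquified variable

print(a is b)          # False
print(a.name, b.name)  # demo/v:0 demo/v_1:0
```

So when a scope contains only `tf.Variable` calls, `reuse` changes nothing; it matters only once `tf.get_variable` enters the picture, which is exactly the tricky case discussed below.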
The answer by kmario23 is correct, but there is a tricky case with variables created by tf.get_variable:
with tf.variable_scope('myscope'):
    print(tf.get_variable('var1', shape=[3]))
with tf.variable_scope('myscope/'):
    print(tf.get_variable('var2', shape=[3]))
This snippet will output:
<tf.Variable 'myscope/var1:0' shape=(3,) dtype=float32_ref>
<tf.Variable 'myscope//var2:0' shape=(3,) dtype=float32_ref>
It seems that TensorFlow has not provided a formal way to handle this case yet. The only workaround I found is to manually patch the scope name (warning: correctness is not guaranteed, since this touches the private `_name` attribute):
with tf.variable_scope('myscope'):
    print(tf.get_variable('var1', shape=[3]))
with tf.variable_scope('myscope/') as scope:
    scope._name = 'myscope'
    print(tf.get_variable('var2', shape=[3]))
And then we can get the correct names:
<tf.Variable 'myscope/var1:0' shape=(3,) dtype=float32_ref>
<tf.Variable 'myscope/var2:0' shape=(3,) dtype=float32_ref>
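An alternative sketch that avoids touching the private `_name` attribute, assuming TF >= 1.5 where `tf.variable_scope` accepts `auxiliary_name_scope`: capture the scope object, re-enter it with `auxiliary_name_scope=False` (so no uniquified `myscope_1` name scope is opened), and reopen its original name scope explicitly for any ops created inside:

```python
import tensorflow.compat.v1 as tf  # TF 1.x API; works on TF 2 via the compat layer

tf.disable_eager_execution()

with tf.variable_scope('myscope') as scope:
    v1 = tf.get_variable('var1', shape=[3])

# Re-enter the same variable scope without opening a new (uniquified) name scope,
# then reopen the original name scope so ops created here also land under 'myscope/'.
with tf.variable_scope(scope, auxiliary_name_scope=False) as reopened:
    with tf.name_scope(reopened.original_name_scope):
        v2 = tf.get_variable('var2', shape=[3])

print(v1.name)  # myscope/var1:0
print(v2.name)  # myscope/var2:0
```

Because `tf.get_variable` derives names from the variable scope (not the name scope), `var2` gets the clean `myscope/var2` name without any double slash.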