I've been reading the TensorFlow tutorials, where they write

```python
with tf.name_scope('read_inputs') as scope:
    # something
```

The example

```python
a = tf.constant(5)
```

and

```python
with tf.name_scope('s1') as scope:
    a = tf.constant(5)
```

seem to have the same effect. So why do we use name_scope?
This context manager pushes a name scope, which makes the names of all operations added within it carry a prefix.

Variable scope allows you to create new variables and to share already created ones, while providing checks so that you don't create or share by accident. For details, see the Variable Scope How To; here we present only a few basic examples. Note that variable scope works as expected only when eager execution is disabled.
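For instance, here is a minimal sketch of the "checks against accidental sharing" mentioned above (TF 1.x graph mode; under TF 2.x these APIs live in tf.compat.v1, and the scope and variable names are just illustrations):

```python
import tensorflow as tf

with tf.variable_scope("layer"):
    w = tf.get_variable("w", shape=[2, 2])  # creates variable "layer/w"

# Reopening the scope with reuse=True hands back the existing variable
with tf.variable_scope("layer", reuse=True):
    w_again = tf.get_variable("w", shape=[2, 2])

assert w is w_again  # same underlying variable object

# Without reuse=True, requesting "w" inside "layer" a second time would
# raise a ValueError instead of silently creating a duplicate.
```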
They are not the same thing.
```python
import tensorflow as tf

c1 = tf.constant(42)
with tf.name_scope('s1'):
    c2 = tf.constant(42)

print(c1.name)
print(c2.name)
```
prints
```
Const:0
s1/Const:0
```
So, as the name suggests, the scope functions create a scope for the names of the ops you create inside. This affects how you refer to tensors, how variables are reused, how the graph shows up in TensorBoard, and so on.
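To make the "how you refer to tensors" point concrete, here is a small sketch (TF 1.x; the name 'answer' is just an illustration). The fully scoped name is exactly what you use to look the tensor up again from the graph:

```python
import tensorflow as tf

with tf.name_scope('s1'):
    c = tf.constant(42, name='answer')

print(c.name)  # -> s1/answer:0

# The scoped name is the handle for graph lookups:
g = tf.get_default_graph()
same_c = g.get_tensor_by_name('s1/answer:0')
assert same_c is c
```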
I don't see the use case for reusing constants, but here is some relevant information on scopes and variable sharing.
Scopes

- `name_scope` adds the scope as a prefix to the names of all operations
- `variable_scope` adds the scope as a prefix to the names of all variables and operations

Instantiating Variables

- The `tf.Variable()` constructor prefixes the variable name with the current `name_scope` and `variable_scope`
- The `tf.get_variable()` constructor ignores `name_scope` and only prefixes the name with the current `variable_scope`
For example:
with tf.variable_scope("variable_scope"): with tf.name_scope("name_scope"): var1 = tf.get_variable("var1", [1]) with tf.variable_scope("variable_scope"): with tf.name_scope("name_scope"): var2 = tf.Variable([1], name="var2")
Produces

```
var1 = <tf.Variable 'variable_scope/var1:0' shape=(1,) dtype=float32_ref>
var2 = <tf.Variable 'variable_scope/name_scope/var2:0' shape=(1,) dtype=int32_ref>
```

(`var2` is `int32` because `tf.Variable([1])` infers its dtype from the initial value.)
Reusing Variables
Always use `tf.variable_scope` to define the scope of a shared variable.

The easiest way to reuse variables is to call `reuse_variables()`, as shown below:
with tf.variable_scope("scope"): var1 = tf.get_variable("variable1",[1]) tf.get_variable_scope().reuse_variables() var2=tf.get_variable("variable1",[1]) assert var1 == var2
`tf.Variable()`, on the other hand, always creates a new variable: when a variable is constructed with an already-used name, it just appends `_1`, `_2`, etc. to it, which can cause conflicts :(
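A quick sketch of that silent renaming (again TF 1.x graph mode):

```python
import tensorflow as tf

a = tf.Variable([1], name="var")
b = tf.Variable([1], name="var")  # same requested name

print(a.name)  # var:0
print(b.name)  # var_1:0 -- uniquified behind your back, not shared
```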