 

How can I limit the range of a variable in TensorFlow?

I want to train a model using TensorFlow.

I have the following variable, which I want the model to learn:

Mj = tf.get_variable('Mj_',
                     dtype=tf.float32,
                     shape=[500, 4],
                     initializer=tf.random_uniform_initializer(maxval=1, minval=0))

I want the resulting values of Mj to stay between 0 and 1. How can I add this constraint?

asked Oct 29 '17 by Abrar

People also ask

How do I change the value of a tf variable?

TensorFlow variables represent tensors whose values can be changed by running operations on them. assign() is a method of the Variable class that assigns a new tf.Tensor to the variable. The new value must have the same shape and dtype as the old variable value.
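A minimal sketch, using the TF 2.x eager API (the question itself uses the older `tf.get_variable` style):

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0])
v.assign([3.0, 4.0])        # new value: same shape and dtype required
v.assign_add([0.5, 0.5])    # in-place increment variant
# v is now [3.5, 4.5]
```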

What is retracing TensorFlow?

Retracing, which happens when your Function creates more than one trace, helps ensure that TensorFlow generates correct graphs for each set of inputs. However, tracing is an expensive operation! If your Function retraces a new graph for every call, your code will execute more slowly than if you didn't use tf.function at all.
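A small sketch of when retracing occurs (TF 2.x): a Python side effect inside a `tf.function` runs only while a new graph is being traced, so it can be used to observe each trace.

```python
import tensorflow as tf

traces = []

@tf.function
def add_one(x):
    traces.append(1)          # Python side effect: runs only during tracing
    return x + 1

add_one(tf.constant(1))       # first call: traces a graph for int32 tensors
add_one(tf.constant(2))       # same input signature: reuses the trace
add_one(1)                    # Python scalar: triggers a retrace
# len(traces) == 2
```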


1 Answer

The proper way to do this would be to pass the clipping function tf.clip_by_value as the constraint argument to the tf.Variable constructor:

Mj=tf.get_variable('Mj_',
                   dtype=tf.float32,
                   shape=[500,4],
                   initializer=tf.random_uniform_initializer(maxval=1, minval=0),
                   constraint=lambda t: tf.clip_by_value(t, 0, 1))

From the docs of tf.Variable:

constraint: An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
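As a minimal sketch of the projection described above (written against the TF 2.x `tf.Variable` API rather than `tf.get_variable`): in training, the optimizer machinery applies the stored `constraint` after each update; here it is applied by hand to show the effect.

```python
import tensorflow as tf

# Variable with a clipping constraint attached.
v = tf.Variable([1.7, -0.3, 0.5],
                constraint=lambda t: tf.clip_by_value(t, 0.0, 1.0))

# After an update, the projection clamps every entry into [0, 1];
# applied manually here for illustration:
v.assign(v.constraint(v))
# v is now [1.0, 0.0, 0.5]
```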

Or you might want to consider simply applying a nonlinearity such as tf.sigmoid on top of your variable:

Mj = tf.get_variable('Mj_', dtype=tf.float32, shape=[500, 4])
Mj_out = tf.sigmoid(Mj)

This transforms your variable's output so that every entry lies strictly between 0 and 1.
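The same idea in the TF 2.x API (an assumption on my part, since the answer is written in TF1 style): the variable itself stays unconstrained, and the sigmoid squashes its output into (0, 1).

```python
import tensorflow as tf

# Unconstrained variable; the model learns Mj freely.
Mj = tf.Variable(tf.random.uniform([500, 4], minval=-3.0, maxval=3.0))

# Sigmoid maps every real number into the open interval (0, 1).
Mj_out = tf.sigmoid(Mj)
```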

answered Sep 22 '22 by dsalaj