This is a pretty simple question that I just can't seem to figure out. I am working with an output tensor of shape [100, 250]. I want to be able to access each 250-dimensional vector along the dimension of 100 and modify them separately. The TensorFlow mathematical tools that I've found either do element-wise modification or scalar modification on the entire tensor. However, I am trying to do scalar modification on subsets of the tensor.
EDIT:
Here is the numpy code that I would like to recreate with tensorflow methods:
update = sess.run(y, feed_dict={x: batch_xs})
for i in range(len(update)):
    update[i] = update[i] / np.sqrt(np.sum(np.square(update[i])))
    update[i] = update[i] * magnitude
This for loop follows the unit vector formula in 250-D instead of 3-D. I then multiply each unit vector by magnitude to rescale it to my desired length.
So update here is the numpy [100, 250] output. I want to transform each 250-dimensional vector into its unit vector, so that I can then rescale it to a magnitude of my choosing. Using this numpy code, if I run my train_step and pass update into one of my placeholders
sess.run(train_step, feed_dict={x: batch_xs, prediction: update})
it returns the error:
No gradients provided for any variable
This is because the math is done in numpy, outside the TensorFlow graph, so there is no path for gradients to flow back to the variables. Here is a related Stack Overflow question that did not get answered.
The tf.nn.l2_normalize op is very close to what I am looking for, but its docs describe dividing by the square root of the maximum sum of squares, whereas I am trying to divide each vector by its own L2 norm (the square root of its own sum of squares).
Thanks!
There is no real trick here; you can do it just as in numpy. The only thing to make sure of is that norm has shape [100, 1] so that it broadcasts correctly in the division x / norm.
import tensorflow as tf

x = tf.ones([100, 250])
# Per-row L2 norm; keepdims=True keeps the shape [100, 1] for broadcasting
norm = tf.sqrt(tf.reduce_sum(tf.square(x), axis=1, keepdims=True))
assert norm.shape == [100, 1]
res = x / norm  # each row of res is now a unit vector