I am trying to implement a loss function in Keras, as in the following pseudocode:
for i in range(N):
    for j in range(N):
        sum += some_calculations
but I read that TensorFlow doesn't support such for loops, so I came across the while_loop(cond, body, loop_vars) function from here.
I understood the basic working of the while loop from here, so I implemented the following code:
def body1(i):
    global data
    N = len(data) * positive_samples  # some length
    j = tf.constant(0)  # inner iterator
    condition2 = lambda j, i: tf.less(j, N)  # only condition: j should be less than N
    tf.add(i, 1)  # increment previous index i
    result = 0

    def body2(j, i):
        global similarity_matrix, U, V
        # U and V are 2-D tensor Variables; here only a column is extracted
        # from each and their final product is a single value
        result = (tf.transpose(U[:, i]) * V[:, j])
        return result

    tf.while_loop(condition2, body2, loop_vars=[j, i])
    return result

def loss_function(x):
    global data
    N = len(data) * positive_samples
    i = tf.constant(0)
    condition1 = lambda i: tf.less(i, N)
    return tf.while_loop(condition1, body1, [i])
But when I run this code, I get the following error:
ValueError: The two structures don't have the same number of elements. First structure: [<tf.Tensor 'lambda_1/while/while/Identity:0' shape=() dtype=int32>, <tf.Tensor 'lambda_1/while/while/Identity_1:0' shape=() dtype=int32>], second structure: [0]
tf.while_loop can be tricky to use, so make sure to read the documentation carefully. The return value of the body must have the same structure as the loop variables, and the return value of the tf.while_loop operation is the final value of those variables. In order to accumulate a computation, you should pass an additional loop variable to store the partial results. You could do something like this:
def body1(i, result):
    global data
    N = len(data) * positive_samples
    j = tf.constant(0)
    condition2 = lambda j, i, result: tf.less(j, N)

    def body2(j, i, result):
        global similarity_matrix, U, V
        # Dot product of column i of U and column j of V, as a scalar.
        # (tf.transpose on a 1-D slice is a no-op, so it is dropped, and
        # tf.reduce_sum turns the element-wise product into the single
        # value described in the question, keeping the accumulator scalar.)
        result_j = tf.reduce_sum(U[:, i] * V[:, j])
        return j + 1, i, result + result_j

    # The inner loop keeps adding to the same running total passed in from
    # the outer loop, so the accumulator must not be reset here.
    j, i, result = tf.while_loop(condition2, body2, loop_vars=[j, i, result])
    return i + 1, result

def loss_function(x):
    global data
    N = len(data) * positive_samples
    i = tf.constant(0)
    result = tf.constant(0, dtype=tf.float32)
    condition1 = lambda i, result: tf.less(i, N)
    i, result = tf.while_loop(condition1, body1, [i, result])
    return result
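For reference, here is a minimal, self-contained sketch of the structure rule described above, using the same TF 1.x graph-mode API as the question (the names N, i0 and total0 are just illustrative): the body takes and returns one value per loop variable, and tf.while_loop hands back their final values.

import tensorflow as tf

N = tf.constant(5)
i0 = tf.constant(0)      # loop counter
total0 = tf.constant(0)  # accumulator for the partial sums

cond = lambda i, total: tf.less(i, N)
body = lambda i, total: (i + 1, total + i)  # same structure as loop_vars

i_final, total_final = tf.while_loop(cond, body, loop_vars=[i0, total0])

with tf.Session() as sess:
    print(sess.run(total_final))  # 0 + 1 + 2 + 3 + 4 = 10

Running it prints 10, the sum of 0 through 4; the same counter-plus-accumulator pattern is what the corrected loss_function above relies on.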
It is not clear from your code where x is to be used. In this case, though, the result of the operation should simply be equal to:
result = tf.reduce_sum(tf.linalg.matmul(U, V, transpose_a=True))
This will also be much faster.
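As a quick sanity check of that identity (using small matrices with made-up shapes, not the actual U and V from the question), the explicit double sum of column dot products matches the reduced matrix product:

import numpy as np
import tensorflow as tf

np.random.seed(0)
U_np = np.random.rand(4, 3).astype(np.float32)  # columns of length 4
V_np = np.random.rand(4, 5).astype(np.float32)

U = tf.constant(U_np)
V = tf.constant(V_np)

# Explicit double loop over all column pairs: sum_{i,j} dot(U[:, i], V[:, j])
loop_sum = sum(U_np[:, i].dot(V_np[:, j])
               for i in range(U_np.shape[1])
               for j in range(V_np.shape[1]))

# Vectorized equivalent: sum of all entries of U^T V
vectorized = tf.reduce_sum(tf.linalg.matmul(U, V, transpose_a=True))

with tf.Session() as sess:
    print(loop_sum, sess.run(vectorized))  # the two values agree

Both give the same number because (U^T V)[i, j] is exactly the dot product of column i of U with column j of V.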