Python: Extend for loop operation to each row in matrix without using iteration

I have a piece of code like this:

import numpy as np

a = Y[0]
b = Z[0]
print(a, b)
loss = 0
for i in range(len(a)):
    k = len(a) - i  # bit position, counted from the most significant bit
    loss += (2**(k - 1)) * np.abs(a[i] - b[i])
print(loss)

Here Y and Z have dimensions 250 x 10, and each row is a 10-bit binary value. For example, print(a, b) prints this: [1 0 0 0 0 0 0 0 1 0] [0 0 0 1 1 1 1 1 0 0]

Now I want to apply the two-line computation inside the for loop to each pair of corresponding rows of Y and Z. But I don't want to do something like this:

for j in range(Y.shape[0]):
    a = Y[j]; b = Z[j]
    loss = 0
    for i in range(len(a)):
        k = len(a)-i
        loss += (2**(k-1))*np.abs(a[i]-b[i])
    print(loss)

I am essentially trying to write a custom loss function in Keras/TensorFlow, and the nested for loop doesn't scale to large tensor operations. How do I do this with some sort of batched matrix operation instead of for loops?

asked Mar 16 '26 by Jonathan
1 Answer

You could do this:

# weights 2**(n-1), ..., 2, 1 for the n bit positions, most significant first
factor = 2**np.arange(Y.shape[1])[::-1]
# weighted absolute difference, summed along each row
loss = np.sum(factor * np.abs(Y - Z), axis=-1)
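As a quick sanity check, here is a small sketch (using made-up 4-bit rows rather than the asker's 250 x 10 data) showing that the vectorized expression matches the original loop:

```python
import numpy as np

# Hypothetical small example: two 4-bit rows.
Y = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0]])
Z = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]])

# Vectorized loss: bit i (from the left) gets weight 2**(n-1-i).
factor = 2**np.arange(Y.shape[1])[::-1]  # [8, 4, 2, 1]
vec_loss = np.sum(factor * np.abs(Y - Z), axis=-1)

# Original per-row loop, for comparison.
def loop_loss(a, b):
    loss = 0
    for i in range(len(a)):
        k = len(a) - i
        loss += (2**(k - 1)) * np.abs(a[i] - b[i])
    return loss

print(vec_loss)  # weighted differences per row: 10 and 3
print([loop_loss(Y[j], Z[j]) for j in range(Y.shape[0])])  # same values
```

The same broadcasting carries over to TensorFlow, e.g. `tf.reduce_sum(factor * tf.abs(Y - Z), axis=-1)`, so the expression can be dropped into a Keras custom loss.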
answered Mar 18 '26 by fountainhead

