Understanding JAX argnums parameter in its gradient function

I'm trying to understand the behaviour of argnums in JAX's gradient function. Suppose I have the following function:

import jax.numpy as jnp

def make_mse(x, t):
  def mse(w, b):
    # Half the sum of squared residuals
    return jnp.sum(jnp.power(x.dot(w) + b - t, 2)) / 2
  return mse

And I'm taking the gradient in the following way:

w_gradient, b_gradient = grad(make_mse(train_data, y), (0, 1))(w, b)

argnums=(0, 1) in this case, but what does it mean? With respect to which variables is the gradient calculated? What would be the difference if I used argnums=0 instead? Also, can I use the same function to get the Hessian matrix?

I looked at the JAX documentation about it, but couldn't figure it out.

ValientProcess asked Apr 28 '26

1 Answer

When you pass multiple argnums to grad, the result is a function that returns a tuple of gradients, as if you had computed each gradient separately:

from jax import grad

def f(x, y):
  return x ** 2 + x * y + y ** 2

df_dxy = grad(f, argnums=(0, 1))
df_dx = grad(f, argnums=0)
df_dy = grad(f, argnums=1)

x = 3.0
y = 4.25
assert df_dxy(x, y) == (df_dx(x, y), df_dy(x, y))
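Applied to the make_mse function from the question, the same pattern gives both gradients in one call. A minimal runnable sketch, with toy data shapes assumed purely for illustration:

```python
import jax.numpy as jnp
from jax import grad

def make_mse(x, t):
  def mse(w, b):
    # Half the sum of squared residuals
    return jnp.sum(jnp.power(x.dot(w) + b - t, 2)) / 2
  return mse

# Toy data (shapes assumed for illustration)
x = jnp.array([[1.0, 2.0], [3.0, 4.0]])
t = jnp.array([1.0, 2.0])
w = jnp.array([0.5, -0.5])
b = 0.1

# argnums=(0, 1): differentiate mse with respect to both w (argument 0)
# and b (argument 1); the result is a (w_gradient, b_gradient) tuple.
w_gradient, b_gradient = grad(make_mse(x, t), argnums=(0, 1))(w, b)
print(w_gradient.shape)  # (2,) -- same shape as w
```

With argnums=0 you would get only w_gradient back (not a tuple), and with argnums=1 only b_gradient.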

If you want to compute a mixed second derivative, you can do so by applying grad repeatedly:

d2f_dxdy = grad(grad(f, argnums=0), argnums=1)
assert d2f_dxdy(x, y) == 1
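For the full Hessian rather than a single mixed entry, JAX also provides jax.hessian, which composes forward- over reverse-mode differentiation; with a tuple argnums it returns a nested tuple of all second derivatives. A minimal sketch:

```python
import jax

def f(x, y):
  return x ** 2 + x * y + y ** 2

# With argnums=(0, 1), the result is nested as:
# ((d2f/dx2, d2f/dxdy), (d2f/dydx, d2f/dy2))
hess = jax.hessian(f, argnums=(0, 1))
h = hess(3.0, 4.25)
```

For this f, every second derivative is constant: the diagonal entries are 2 and the mixed entries are 1.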
jakevdp answered Apr 30 '26
