import torch
import torch.nn as nn
from torch.optim import Adam

class NN_Network(nn.Module):
    def __init__(self, in_dim, hid, out_dim):
        super(NN_Network, self).__init__()
        self.linear1 = nn.Linear(in_dim, hid)
        self.linear2 = nn.Linear(hid, out_dim)

    def forward(self, input_array):
        h = self.linear1(input_array)
        y_pred = self.linear2(h)
        return y_pred

in_d = 5
hidn = 2
out_d = 3
net = NN_Network(in_d, hidn, out_d)
list(net.parameters())
The result was:
[Parameter containing:
tensor([[-0.2948, -0.1261, 0.2525, -0.4162, 0.3067],
[-0.2483, -0.3600, -0.4090, 0.0844, -0.2772]], requires_grad=True),
Parameter containing:
tensor([-0.2570, -0.3754], requires_grad=True),
Parameter containing:
tensor([[ 0.4550, -0.4577],
[ 0.1782, 0.2454],
[ 0.6931, -0.6003]], requires_grad=True),
Parameter containing:
tensor([ 0.4181, -0.2229, -0.5921], requires_grad=True)]
Even though I never used nn.Parameter, list(net.parameters()) still returns a list of parameters.
What I am curious about is:
I never called nn.Parameter, so why does this return parameters? And is .parameters() the only way to inspect a network's layer parameters?
My guess is that the tensors are self.linear1's weight and bias followed by self.linear2's, respectively.
But is there any way to check which tensor is which?
nn.Linear creates its weight and bias as nn.Parameter objects internally, and nn.Module automatically registers every attribute that is an nn.Parameter, including those inside submodules. .parameters() then collects them recursively, which is why they appear even though you never called nn.Parameter yourself. To see which tensor belongs to which layer, use .named_parameters() instead of .parameters(); it yields (name, parameter) pairs:
for name, param in net.named_parameters():
    if param.requires_grad:
        print(name, param.data)  # .data is the underlying tensor, without autograd tracking
Result:
linear1.weight tensor([[ 0.3727, 0.2522, 0.2381, 0.3115, 0.0656],
[-0.3322, 0.2024, 0.1089, -0.3370, 0.3917]])
linear1.bias tensor([-0.2089, 0.1105])
linear2.weight tensor([[-0.1090, 0.2564],
[-0.3957, 0.6632],
[-0.4036, 0.7066]])
linear2.bias tensor([ 0.1398, -0.0585, 0.4297])
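To illustrate the registration rule, here is a minimal sketch (the Demo module below is hypothetical, not from the question): only attributes assigned as nn.Parameter are tracked, while a plain tensor attribute never shows up in .parameters():

import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super(Demo, self).__init__()
        # assigned as nn.Parameter, so nn.Module registers it automatically
        self.registered = nn.Parameter(torch.randn(2, 3))
        # plain tensor: stored on the object, but invisible to .parameters()
        self.unregistered = torch.randn(2, 3)

demo = Demo()
print([name for name, _ in demo.named_parameters()])
# prints: ['registered']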
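.named_parameters() is also not the only way to inspect a model. net.state_dict() returns an ordered mapping from names to tensors (it additionally includes non-trainable buffers, e.g. BatchNorm running statistics), and each layer's parameters are reachable directly as attributes:

# state_dict(): name -> tensor for every parameter (and buffer) in the module tree
for name, tensor in net.state_dict().items():
    print(name, tensor.shape)

# or address a single layer directly; nn.Linear stores weight as (out_features, in_features)
print(net.linear1.weight.shape)  # torch.Size([2, 5])
print(net.linear1.bias.shape)    # torch.Size([2])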