I got this error after executing my code, and it seems the portion of code below is throwing it. I tried different approaches, but nothing solved it. The error is raised by the loss function.
for i, data in enumerate(train_loader, 0):
    # import pdb; pdb.set_trace()
    inputs, labels = data
    print(type(inputs))
    for input in inputs:
        inputs = torch.Tensor(input)
    inputs, labels = Variable(inputs), Variable(labels)
    inputs = inputs.unsqueeze(1)
    optimizer.zero_grad()
    outputs = net(inputs)
    # import pdb; pdb.set_trace()
    loss_size = loss(outputs, labels)
    loss_size.backward()
    optimizer.step()
    running_loss += loss_size.data[0]
    total_train_loss += loss_size.data[0]
    if (i + 1) % (print_every + 1) == 0:
        print("Epoch {}, {:d}% \t train_loss: {:.2f} took: {:.2f}s".format(
            epoch + 1, int(100 * (i + 1) / n_batches), running_loss / print_every, time.time() - start_time))
        running_loss = 0.0
        start_time = time.time()
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-10-7d1b8710defa> in <module>
      1 CNN = Net()
----> 2 trainNet(CNN, learning_rate=0.001)
      3 #test()

<ipython-input-7-3208c0794681> in trainNet(net, learning_rate)
     23         outputs = net(inputs)
     24         #import pdb;pdb.set_trace()
---> 25         loss_size = loss(outputs, labels)
     26         loss_size.backward()
     27         optimizer.step()

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

IndexError: Target 2 is out of bounds.
Your last (fully-connected) layer has an output size of 2, so your model outputs a tensor of shape [batchSize, 2]. This tensor is passed to CrossEntropyLoss, so your targets must be a tensor of shape [batchSize] consisting of integer class labels that range from 0 to 1.
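To see the shape contract concretely, here is a minimal sketch with dummy tensors (the sizes are made up, not taken from your model):

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()

# An output of shape [batchSize, 2] only supports class labels 0 and 1.
outputs = torch.randn(4, 2)

ok_targets = torch.tensor([0, 1, 1, 0])
print(loss(outputs, ok_targets))          # works

bad_targets = torch.tensor([0, 1, 2, 0])  # label 2 >= number of classes (2)
loss(outputs, bad_targets)                # raises: IndexError: Target 2 is out of bounds.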
I faced the same problem. It was solved by changing the number of classes: I set num_classes to the actual number of classes (10 in my case, instead of 1). In your case, you should set the number of classes to 3.
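If you are unsure how many classes your data actually contains, a quick sanity check like this can help before sizing the output layer (the labels tensor here is a made-up example, not your data):

import torch

# hypothetical tensor holding all integer class labels in the dataset
labels = torch.tensor([1, 2, 1, 1, 2])

# CrossEntropyLoss expects labels in 0..num_classes-1, so the output
# layer needs at least max(label) + 1 units
num_classes = int(labels.max().item()) + 1
print(num_classes)  # 3 here, since the largest label is 2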
You probably have 1 and 2 as class labels, so you are trying to set the number of outputs in your model's Net class to 2, but it should be 3, because that is how PyTorch works. Two classes means the labels are 0 and 1. Since your labels are 1 and 2, you should treat this as a 3-class (0, 1, 2) classification problem.
Let's say this is your Net class:

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer_1 = nn.Linear(100, 10)
        self.layer_2 = nn.Linear(10, 2)

    def forward(self, x):
        x = F.relu(self.layer_1(x))  # nn.relu does not exist; F.relu is the functional API
        x = self.layer_2(x)
        return x  # return raw logits: CrossEntropyLoss applies log-softmax itself
So you just modify layer_2 as follows: self.layer_2 = nn.Linear(10, 3)
This should work.
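As a quick check (a sketch with a made-up batch of random inputs), the 3-output model now accepts labels 1 and 2:

import torch
import torch.nn as nn

net = Net()  # the Net class above, with self.layer_2 = nn.Linear(10, 3)
outputs = net(torch.randn(4, 100))   # shape [4, 3]
labels = torch.tensor([1, 2, 1, 2])  # 1 and 2 are now valid class labels
print(nn.CrossEntropyLoss()(outputs, labels))  # no IndexError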