While following the PyTorch tutorial at https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
I received the following error:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ python pytorch-1.py 
Traceback (most recent call last):
  File "pytorch-1.py", line 39, in <module>
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
AttributeError: module 'torch' has no attribute 'device'
In my code below, I added this statement:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
But this doesn't seem to be right, or at least not enough. This is my first time running PyTorch with a GPU on a Linux machine. What else should I do to get it running correctly?
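For reference, this is the device-handling pattern I understand the tutorial's GPU section to use: the model and every batch of tensors have to end up on the same device (a minimal, self-contained sketch with placeholder names, not my actual code):

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2)            # placeholder model
model.to(device)                    # move the model's parameters to the chosen device

x = torch.randn(4, 10).to(device)   # inputs must be moved to the same device
out = model(x)                      # the forward pass then runs on that device
print(out.device)

My full code is below.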
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
import torchvision
import torchvision.transforms as transforms
import numpy
import matplotlib.pyplot as plt

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)
    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
net = Net()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
net.to(device)
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
print(transform)
trainSet = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainLoader = torch.utils.data.DataLoader(trainSet, batch_size=4, shuffle=True, num_workers=2)
testSet = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testLoader = torch.utils.data.DataLoader(testSet, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2):
    running_loss = 0.0
    for i, data in enumerate(trainLoader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished training!')
def imshow(img):
    img = img / 2 + 0.5  # unnormalize from [-1, 1] back to [0, 1]
    npimg = img.numpy()
    plt.imshow(numpy.transpose(npimg, (1, 2, 0)))
    plt.show()
dataIter = iter(trainLoader)
images, labels = next(dataIter)
# imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
outputs = net(images.to(device))  # move the batch to the model's device before the forward pass
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
dataIter = iter(testLoader)
images, labels = next(dataIter)
# imshow(torchvision.utils.make_grid(images))
correct = 0
total = 0
with torch.no_grad():
    for data in testLoader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)  # keep the data on the same device as the model
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print("accuracy: %d %%", 100 * correct / total)
EDIT:
My conda version is the latest:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ conda -V
conda 4.6.2
Then I installed pytorch-gpu with:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ conda install -c anaconda pytorch-gpu
As you can see from the output below, version 0.1.12 is what gets installed:
Collecting package metadata: done
Solving environment: done
## Package Plan ##
  environment location: /home/martin/anaconda3/envs/pt_gpu
  added / updated specs:
    - pytorch-gpu
The following packages will be downloaded:
    package                    |            build
    ---------------------------|-----------------
    ca-certificates-2018.12.5  |                0         123 KB  anaconda
    certifi-2018.11.29         |           py36_0         146 KB  anaconda
    pytorch-gpu-0.1.12         |           py36_0        16.8 MB  anaconda
    ------------------------------------------------------------
                                           Total:        17.0 MB
The following packages will be UPDATED:
  openssl              pkgs/main::openssl-1.1.1a-h7b6447c_0 --> anaconda::openssl-1.1.1-h7b6447c_0
The following packages will be SUPERSEDED by a higher-priority channel:
  ca-certificates                                 pkgs/main --> anaconda
  certifi                                         pkgs/main --> anaconda
  mkl                    pkgs/main::mkl-2017.0.4-h4c4d0af_0 --> anaconda::mkl-2017.0.1-0
  pytorch-gpu                                     pkgs/free --> anaconda
Proceed ([y]/n)? y
Downloading and Extracting Packages
certifi-2018.11.29   | 146 KB    | ########################################################################################################################## | 100% 
ca-certificates-2018 | 123 KB    | ########################################################################################################################## | 100% 
pytorch-gpu-0.1.12   | 16.8 MB   | ########################################################################################################################## | 100% 
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
To verify the version, I do:
(pt_gpu) [martin@A08-R32-I196-3-FZ2LTP2 mlm]$ python -c "import torch; print(torch.__version__)"
0.1.12
Why does it install such an old version?
Although this question is very old, I would recommend anyone facing this problem to visit pytorch.org and check the command for installing PyTorch there; the page has a dedicated selector that generates the install command for your OS, package manager, Python version, and CUDA version. The pytorch-gpu package installed from the anaconda channel is an outdated 0.1.12 build, and the torch.device API was only introduced in PyTorch 0.4.0, which is why you get the AttributeError.
As you can see, the command you used to install PyTorch is different from the one the site generates. I have not tested it on Linux, but I used the generated command on Windows and it worked great for me with Anaconda. (Initially, I also got the same error; that was before following this.)
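For example, at the time of writing, selecting Linux, Conda, and a CUDA version on pytorch.org produces a command along these lines (the exact cudatoolkit version depends on your driver, so treat this as an illustration rather than the literal command to copy):

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

After installing, you can confirm that a recent version is picked up and that the GPU is visible:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"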