
Pytorch AssertionError: Torch not compiled with CUDA enabled

I am trying to run the code from this repo. I have disabled CUDA by changing lines 39/40 in main.py from

parser.add_argument('--type', default='torch.cuda.FloatTensor', help='type of tensor - e.g torch.cuda.HalfTensor')

to

parser.add_argument('--type', default='torch.FloatTensor', help='type of tensor - e.g torch.HalfTensor')

Despite this, running the code gives me the following exception:

Traceback (most recent call last):
  File "main.py", line 190, in <module>
    main()
  File "main.py", line 178, in main
    model, train_data, training=True, optimizer=optimizer)
  File "main.py", line 135, in forward
    for i, (imgs, (captions, lengths)) in enumerate(data):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
    return self._process_next_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
AssertionError: Traceback (most recent call last):
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 62, in _pin_memory_loop
    batch = pin_memory_batch(batch)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in pin_memory_batch
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 123, in <listcomp>
    return [pin_memory_batch(sample) for sample in batch]
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 117, in pin_memory_batch
    return batch.pin_memory()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/tensor.py", line 82, in pin_memory
    return type(self)().set_(storage.pin_memory()).view_as(self)
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/storage.py", line 83, in pin_memory
    allocator = torch.cuda._host_allocator()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
    _lazy_init()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 84, in _lazy_init
    _check_driver()
  File "/Users/lakshay/anaconda/lib/python3.6/site-packages/torch/cuda/__init__.py", line 51, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

I've spent some time looking through the issues on the PyTorch GitHub, to no avail. Help, please?

asked Nov 15 '17 by Lakshay Sharma



1 Answer

If you look into the data.py file, you can see the function:

def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)

which is called twice in the main.py file to get iterators for the train and dev data. If you look at the DataLoader class in PyTorch, there is a parameter called:

pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.

which defaults to True in the get_iterator function. Pinning memory requires a CUDA-enabled build of PyTorch (the traceback ends in torch.cuda._lazy_init), which is why you are getting this error even though you switched the tensor type to torch.FloatTensor. Simply pass pin_memory=False when you call get_iterator, as follows:

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
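Alternatively, you can make the choice automatic by keying pin_memory off torch.cuda.is_available(), so the same code runs on both CPU-only and GPU machines. A minimal, self-contained sketch (the toy TensorDataset here just stands in for the repo's COCO caption data):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the repo's COCO captions.
ds = TensorDataset(torch.arange(8).float().unsqueeze(1))

# Pinned (page-locked) memory only speeds up CPU-to-GPU copies and needs a
# CUDA-enabled build, so enable it only when CUDA is actually usable.
loader = DataLoader(ds, batch_size=4,
                    pin_memory=torch.cuda.is_available())

for (batch,) in loader:
    print(batch.shape)  # torch.Size([4, 1]) for each of the two batches
```

On a CPU-only machine pin_memory evaluates to False and the loader never touches the CUDA host allocator, avoiding the assertion entirely.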
answered Nov 15 '22 by Wasi Ahmad