 

CUDA out of memory error, cannot reduce batch size

Tags: python, pytorch

I want to run some experiments on my GPU device, but I get this error:

RuntimeError: CUDA out of memory. Tried to allocate 3.63 GiB (GPU 0; 15.90 GiB total capacity; 13.65 GiB already allocated; 1.57 GiB free; 13.68 GiB reserved in total by PyTorch)

I read about possible solutions here, and the common solution is this:

This happens because the mini-batch of data does not fit into GPU memory. Just decrease the batch size. When I set batch size = 256 for the CIFAR-10 dataset I got the same error; when I set batch size = 128, it was solved.

But in my case, it is a research project, I need to keep specific hyper-parameters, and I cannot reduce anything such as the batch size.

Does anyone have a solution for this?

asked Jul 22 '21 by b.j



2 Answers

As long as a single sample fits into GPU memory, you do not have to reduce the effective batch size: you can use gradient accumulation. Instead of updating the weights after every iteration (based on gradients computed from a too-small mini-batch), you accumulate the gradients over several mini-batches and update the weights only after enough examples have been seen.
This is nicely explained in this video.

Effectively, your training code would look something like this. Suppose your desired batch size is large_batch, but only small_batch fits into GPU memory, such that large_batch = small_batch * k. Then you update the weights every k iterations:

train_data = DataLoader(train_set, batch_size=small_batch, ...)

opt.zero_grad()  # this signifies the start of a large_batch
for i, (x, y) in enumerate(train_data):
  pred = model(x)
  loss = criterion(pred, y)
  loss.backward()  # gradients for this small_batch accumulate in .grad
  if (i+1) % k == 0 or (i+1) == len(train_data):
    opt.step()  # update the weights only after accumulating k small batches
    opt.zero_grad()  # reset gradients for accumulation for the next large_batch
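One detail worth noting (this is an addition to the snippet above, not part of the original answer): summing the gradients of k small batches gives a gradient roughly k times larger than the average over one large_batch. A common refinement is to scale the loss before backpropagating:

loss = criterion(pred, y) / k  # average the accumulated gradient over the k small batches
loss.backward()

so that the magnitude of each update matches training with the full large_batch.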
answered Oct 20 '22 by Shai


Shai's answer is suitable, but I want to offer another solution. Recently, I've been seeing great results with Nvidia AMP (Automatic Mixed Precision), which combines the memory savings of fp16 with the numerical stability of fp32. A nice side effect is that it also significantly speeds up training.

In TensorFlow it's only a single line of code: opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)

More details here

You can also stack AMP with Shai's solution.
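Since the question is about PyTorch, here is a minimal sketch (my addition, assuming model, criterion, opt, train_data and k are defined as in Shai's snippet) of the same idea with torch.cuda.amp, stacked on top of gradient accumulation:

import torch

scaler = torch.cuda.amp.GradScaler()

opt.zero_grad()
for i, (x, y) in enumerate(train_data):
  with torch.cuda.amp.autocast():  # forward pass runs in mixed precision
    pred = model(x)
    loss = criterion(pred, y)
  scaler.scale(loss).backward()  # scale the loss so fp16 gradients do not underflow
  if (i+1) % k == 0 or (i+1) == len(train_data):
    scaler.step(opt)   # unscales the gradients, then updates the weights
    scaler.update()    # adjusts the loss scale for later iterations
    opt.zero_grad()

The memory saving comes from storing most activations in half precision during the forward pass, which can be enough to fit the original batch size again.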

answered Oct 20 '22 by Stanley Zheng