
PyTorch running out of memory: DefaultCPUAllocator can't allocate memory

I'm trying to optimize some weights in PyTorch, but I keep getting this error:

RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 8000000000000 bytes. Error code 12 (Cannot allocate memory).

Namely, things blow up when I run (weights * col).sum() / weights.sum(). weights is a tensor of size (1000000, 1) and col is also a tensor of size (1000000, 1). Both tensors are decently sized, but it seems odd that these operations use up all the memory in my computer (8 GB).
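For reference, a minimal sketch of the intended computation at a smaller size (1000 instead of 1000000, so it runs in a few kilobytes); the tensor contents here are placeholder random values, not the asker's actual data:

```python
import torch

# Weighted average, as described in the question, at a reduced size.
n = 1000
weights = torch.rand(n, 1)
col = torch.rand(n, 1)

# Elementwise product of two (n, 1) tensors stays (n, 1),
# so memory use is O(n) and the result is a scalar tensor.
weighted_avg = (weights * col).sum() / weights.sum()
print(weighted_avg.shape)  # torch.Size([]) -- a 0-dim scalar
```

With matching (n, 1) shapes this never allocates anything larger than the inputs.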

MUAS asked Jul 08 '20

1 Answer

It could be that your weights and col tensors are not aligned, i.e. one of them is transposed so that it is (1, 1000000) instead of (1000000, 1). Then when you do (weights * col), the shapes are broadcast together, producing a (1000000, 1000000) tensor. That is almost certainly the source of the extreme memory usage: the result has 1000000 times as many elements as either input, and at 8 bytes per float64 element that comes to 1000000 × 1000000 × 8 = 8,000,000,000,000 bytes, exactly the allocation in your error message.
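The broadcasting blow-up and the fix can be sketched at a smaller size (1000 instead of 1000000, so it actually runs); the variable names mirror the question, but the data is made up:

```python
import torch

n = 1000
weights = torch.ones(n, 1)
col = torch.ones(1, n)  # accidentally transposed: (1, n) instead of (n, 1)

# Broadcasting expands (n, 1) * (1, n) into an (n, n) outer product --
# at n = 1000000 and 8 bytes per element, that is the 8 TB allocation.
print((weights * col).shape)  # torch.Size([1000, 1000])

# Fix: reshape so both tensors are (n, 1) before multiplying.
col_fixed = col.reshape(n, 1)
print((weights * col_fixed).shape)  # torch.Size([1000, 1])

result = (weights * col_fixed).sum() / weights.sum()
```

Checking tensor shapes right before the failing line (e.g. print(weights.shape, col.shape)) is usually the fastest way to confirm this.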

Dallin Clayton answered Oct 28 '22