This is likely due to your system's overcommit handling mode.
In the default mode, 0 (heuristic overcommit handling), obvious overcommits of address space are refused. This mode is used for a typical system: it ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. Root is allowed to allocate slightly more memory in this mode. This is the default.
The exact heuristic used is not well explained here, but it is discussed in more detail in Linux overcommit heuristic and on this page.
You can check your current overcommit mode by running
$ cat /proc/sys/vm/overcommit_memory
0
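If it is more convenient, you can read the same value from within Python (Linux only; this simply reads the same proc file as the shell command above):
>>> # Read the current overcommit mode (Linux only)
>>> with open('/proc/sys/vm/overcommit_memory') as f:
...     print(f.read().strip())
...
0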
In this case, you're allocating
>>> 156816 * 36 * 53806 / 1024.0**3
282.8939827680588
~282 GB, and the kernel is saying, "well, obviously there's no way I'm going to be able to commit that many physical pages to this," so it refuses the allocation.
If (as root) you run:
$ echo 1 > /proc/sys/vm/overcommit_memory
This will enable the "always overcommit" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least).
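Note that writing to /proc only lasts until the next reboot. If you want the setting to persist, the usual approach (assuming a standard sysctl setup; adjust for your distribution) is something like:
$ sudo sysctl -w vm.overcommit_memory=1                            # apply immediately
$ echo 'vm.overcommit_memory = 1' | sudo tee -a /etc/sysctl.conf   # persist across reboots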
I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing it to 1 it works:
>>> import numpy as np
>>> a = np.zeros((156816, 36, 53806), dtype='uint8')
>>> a.nbytes
303755101056
You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays.
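For example, continuing with the array allocated above, touching a few elements only makes the corresponding pages resident rather than the whole ~282 GB (a rough illustration; actual resident memory depends on the page size):
>>> a[0, 0, 0] = 1               # only the page containing this element becomes resident
>>> a[100000, 20, 40000] = 7     # one more page, somewhere in the middle of the array
>>> a[100000, 20, 40000]
7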
I had this same problem on Windows and came across this solution. So if someone encounters this problem on Windows, the solution for me was to increase the pagefile size, as it was a memory overcommitment problem for me too. See the pagefile instructions for:
Windows 8
Windows 10
Note: I did not have enough memory on my system for the ~282 GB in this example, but for my particular case this worked.
EDIT
From here, the suggested recommendation for pagefile size is:
There is a formula for calculating the correct pagefile size. Initial size is one and a half (1.5) x the amount of total system memory. Maximum size is three (3) x the initial size. So let's say you have 4 GB (1 GB = 1,024 MB x 4 = 4,096 MB) of memory. The initial size would be 1.5 x 4,096 = 6,144 MB and the maximum size would be 3 x 6,144 = 18,432 MB.
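The same arithmetic in Python, for whatever amount of memory you have (just a sketch of the quoted formula; sizes in MB):
>>> ram_mb = 4 * 1024                # e.g. 4 GB of total system memory
>>> initial_mb = 1.5 * ram_mb        # initial pagefile size
>>> maximum_mb = 3 * initial_mb      # maximum pagefile size
>>> initial_mb, maximum_mb
(6144.0, 18432.0)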
Some things to keep in mind from here:
However, this does not take into consideration other important factors and system settings that may be unique to your computer. Again, let Windows choose what to use instead of relying on some arbitrary formula that worked on a different computer.
Also:
Increasing page file size may help prevent instabilities and crashing in Windows. However, hard drive read/write times are much slower than they would be if the data were in your computer's memory. Having a larger page file is going to add extra work for your hard drive, causing everything else to run slower. Page file size should only be increased when encountering out-of-memory errors, and only as a temporary fix. A better solution is to add more memory to the computer.
I came across this problem on Windows too. The solution for me was to switch from a 32-bit to a 64-bit version of Python. Indeed, 32-bit software, like a 32-bit CPU, can address a maximum of 4 GB of RAM (2^32 bytes). So if you have more than 4 GB of RAM, a 32-bit version cannot take advantage of it.
With a 64-bit version of Python (the one labeled x86-64 in the download page), the issue disappears.
You can check which version you have by entering the interpreter. I, with a 64-bit version, now have:
Python 3.7.5rc1 (tags/v3.7.5rc1:4082f600a5, Oct 1 2019, 20:28:14) [MSC v.1916 64 bit (AMD64)]
where [MSC v.1916 64 bit (AMD64)] means "64-bit Python".
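If the banner is not obvious, you can also check programmatically (the exact platform.architecture() output will vary per system):
>>> import sys, platform
>>> platform.architecture()[0]    # '64bit' on a 64-bit build, '32bit' otherwise
'64bit'
>>> sys.maxsize > 2**32           # True only on a 64-bit build of Python
True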
Sources:
Quora: memory error generated by large numpy array
Stack Overflow: 32 or 64-bit version of Python
Sometimes, this error pops up because the kernel has reached its limit. Try restarting the kernel and redoing the necessary steps.
In my case, adding a dtype argument changed the dtype of the array to a smaller type (from float64 to uint8), decreasing the array size enough to avoid throwing a MemoryError on Windows (64-bit).
from
mask = np.zeros(edges.shape)
to
mask = np.zeros(edges.shape, dtype='uint8')
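As a rough illustration of the saving (edges here is just a hypothetical stand-in for the array in the original code; np.zeros defaults to float64, i.e. 8 bytes per element versus 1 byte for uint8):
>>> import numpy as np
>>> edges = np.zeros((1000, 1000))                   # hypothetical placeholder array
>>> np.zeros(edges.shape).nbytes                     # float64: 8 bytes per element
8000000
>>> np.zeros(edges.shape, dtype='uint8').nbytes      # uint8: 1 byte per element
1000000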
Changing the data type to another one that uses less memory works. For me, I changed the data type to numpy.uint8:
data['label'] = data['label'].astype(np.uint8)
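For instance (data here is a hypothetical DataFrame; the point is only the dtype change):
>>> import numpy as np, pandas as pd
>>> data = pd.DataFrame({'label': np.zeros(1_000_000)})   # hypothetical frame, float64 by default
>>> data['label'].dtype
dtype('float64')
>>> data['label'] = data['label'].astype(np.uint8)
>>> data['label'].dtype
dtype('uint8')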