This is a question for a Linux kernel or sysadmin guru.
I got this error from QEMU when trying to start a VM with 3 GB of memory:
ioctl(KVM_CREATE_VM) failed: 12 Cannot allocate memory
failed to initialize KVM: Cannot allocate memory
To my knowledge this can happen because there isn't enough memory or because the commit limit is too low, but apparently neither applies here... There are 5.9 GB available (counting caches that could be dropped) and no commit limit:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7696        1287         135         139        6274        5973
Swap:          2892         525        2367
$ cat /proc/sys/vm/overcommit_memory
1
So then I wrote a C++ program to allocate successively bigger chunks. I found that it failed to allocate more than 2.1 GB (N.B. it was compiled as 64-bit). That's consistent with QEMU not starting up, but WHY???
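For reference, here is a minimal sketch of that kind of probe (not my exact program; the 100 MiB step size is arbitrary, and the commented-out memset is the "write to the memory" variant described below):

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    const size_t step = 100UL * 1024 * 1024;   // grow in 100 MiB increments
    for (size_t size = step; ; size += step) {
        void *p = malloc(size);
        if (p == nullptr) {
            printf("allocation failed at %zu MiB\n", size / (1024 * 1024));
            return 1;
        }
        // memset(p, 1, size);                 // touch the pages ("write to the memory" variant)
        printf("allocated %zu MiB\n", size / (1024 * 1024));
        free(p);
    }
}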
So then I modified it to write to the memory as well. This resulted in some caches being dropped, with free reporting this at about 2 GB allocated:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7696        2988         288         143        4420        4268
Swap:          2892         525        2367
...and this when the program terminated:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7696        1258        2253         147        4185        5994
Swap:          2892         525        2367
Now I tried to start QEMU and magically it worked! free reports:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7696        2438        4451         147         806        4834
Swap:          2892         530        2362
So it looks like the kernel wasn't willing to free some caches when asked for a big chunk, but was willing to let them go in several steps. What's going on?
I am running Debian testing: Linux - 4.11.0-1-amd64 #1 SMP Debian 4.11.6-1 (2017-06-19) x86_64 GNU/Linux
You can free your buffer/cache.
The safe way is to run this command as root:
# sync; echo 3 > /proc/sys/vm/drop_caches
I found a workaround, which suggests that the problem is memory fragmentation. On a recent recurrence of the problem I ran the following commands, which fixed it:
sudo su -
echo 1 > /proc/sys/vm/compact_memory
Related post: https://unix.stackexchange.com/questions/44481/defragging-ram-oom-failure/147860#147860
Here is a dmesg dump from before the compaction, which might show more info about the problem, but I can't make sense of it:
[618172.910238] qemu-system-x86: page allocation failure: order:4, mode:0x16040c0(GFP_KERNEL|__GFP_COMP|__GFP_NOTRACK), nodemask=(null)
[618172.910244] qemu-system-x86 cpuset=/ mems_allowed=0
[618172.910248] CPU: 1 PID: 19454 Comm: qemu-system-x86 Not tainted 4.13.0-1-amd64 #1 Debian 4.13.13-1
[618172.910249] Hardware name: System manufacturer System Product Name/P8Z68-V LX, BIOS 4105 07/01/2013
[618172.910250] Call Trace:
[618172.910256]  ? dump_stack+0x5c/0x85
[618172.910257]  ? warn_alloc+0x114/0x1b0
[618172.910259]  ? __alloc_pages_direct_compact+0x4a/0xf0
[618172.910261]  ? __alloc_pages_slowpath+0xd57/0xd60
[618172.910261]  ? __alloc_pages_slowpath+0xd57/0xd60
[618172.910263]  ? __alloc_pages_nodemask+0x228/0x250
[618172.910266]  ? cache_grow_begin+0x80/0x530
[618172.910267]  ? cache_grow_begin+0x80/0x530
[618172.910269]  ? fallback_alloc+0x161/0x200
[618172.910271]  ? kmem_cache_alloc_trace+0x1c3/0x5a0
[618172.910292]  ? kvm_dev_ioctl+0xb6/0x6b0 [kvm]
[618172.910305]  ? do_vfs_ioctl+0x9f/0x600
[618172.910306]  ? SyS_ioctl+0x74/0x80
[618172.910308]  ? system_call_fast_compare_end+0xc/0x97
[618172.910309] Mem-Info:
[618172.910313] active_anon:337090 inactive_anon:152155 isolated_anon:0
                 active_file:140849 inactive_file:198001 isolated_file:0
                 unevictable:28 dirty:8 writeback:0 unstable:0
                 slab_reclaimable:773968 slab_unreclaimable:14206
                 mapped:112839 shmem:69965 pagetables:12329 bounce:0
                 free:320850 free_pcp:0 free_cma:0
[618172.910315] Node 0 active_anon:1348360kB inactive_anon:608620kB active_file:563396kB inactive_file:792004kB unevictable:112kB isolated(anon):0kB isolated(file):0kB mapped:451356kB dirty:32kB writeback:0kB shmem:279860kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 215040kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[618172.910315] Node 0 DMA free:15900kB min:136kB low:168kB high:200kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15984kB managed:15900kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[618172.910317] lowmem_reserve[]: 0 3200 7657 7657 7657
[618172.910319] Node 0 DMA32 free:1205068kB min:28184kB low:35228kB high:42272kB active_anon:462188kB inactive_anon:167656kB active_file:159796kB inactive_file:164356kB unevictable:0kB writepending:0kB present:3362060kB managed:3296492kB mlocked:0kB kernel_stack:976kB pagetables:4164kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[618172.910321] lowmem_reserve[]: 0 0 4457 4457 4457
[618172.910323] Node 0 Normal free:62432kB min:39260kB low:49072kB high:58884kB active_anon:886172kB inactive_anon:440964kB active_file:403600kB inactive_file:627648kB unevictable:112kB writepending:32kB present:4708352kB managed:4568752kB mlocked:112kB kernel_stack:12672kB pagetables:45152kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[618172.910324] lowmem_reserve[]: 0 0 0 0 0
[618172.910326] Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15900kB
[618172.910332] Node 0 DMA32: 59795*4kB (UME) 67824*8kB (UME) 21933*16kB (UME) 2275*32kB (UE) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1205500kB
[618172.910337] Node 0 Normal: 1460*4kB (UME) 1303*8kB (UME) 2698*16kB (UME) 104*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 62760kB
[618172.910343] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[618172.910343] 384298 total pagecache pages
[618172.910344] 9802 pages in swap cache
[618172.910345] Swap cache stats: add 1402693, delete 1392949, find 861777/1070392
[618172.910345] Free swap = 1537612kB
[618172.910345] Total swap = 2962428kB
[618172.910346] 2021599 pages RAM
[618172.910346] 0 pages HighMem/MovableOnly
[618172.910346] 51313 pages reserved
[618172.910347] 0 pages hwpoisoned