Why does mmap() fail with ENOMEM on a 1TB sparse file?

Tags: c, linux, mmap

I've been working with large sparse files on openSUSE 11.2 x86_64. When I try to mmap() a 1TB sparse file, it fails with ENOMEM. I would have thought that the 64-bit address space would be adequate to map in a terabyte, but it seems not. Experimenting further, a 1GB file works fine, but a 2GB file (and anything bigger) fails. I'm guessing there might be a setting somewhere to tweak, but an extensive search turns up nothing.

Here's some sample code that shows the problem - any clues?

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <filename>\n", argv[0]);
        exit(1);
    }
    char *filename = argv[1];
    off_t size = 1UL << 40; // 30 == 1GB, 40 == 1TB

    int fd = open(filename, O_RDWR | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) {
        perror("open");
        exit(1);
    }
    if (ftruncate(fd, size) < 0) {
        perror("ftruncate");
        exit(1);
    }
    printf("Created %ld byte sparse file\n", (long)size);

    char *buffer = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (buffer == MAP_FAILED) {
        perror("mmap");
        exit(1);
    }
    printf("Done mmap - returned %p\n", (void *)buffer);

    strcpy(buffer, "cafebabe");
    printf("Wrote to start\n");

    strcpy(buffer + (size - 9), "deadbeef"); // 8 chars + NUL fit exactly
    printf("Wrote to end\n");

    if (munmap(buffer, (size_t)size) < 0) {
        perror("munmap");
        exit(1);
    }
    close(fd);

    return 0;
}
asked May 26 '10 by metadaddy

3 Answers

The problem was that the per-process virtual memory limit was set to only 1.7GB. ulimit -v 1610612736 set it to 1.5TB (ulimit -v takes KiB, so 1610612736 KiB = 1.5TB) and my mmap() call succeeded. Thanks, bmargulies, for the hint to try ulimit -a!
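
The same fix can be applied from inside the program. Here's a minimal sketch (my own illustration, not part of the original answer) that raises RLIMIT_AS, the limit that ulimit -v controls, with setrlimit() before any mmap() call. Note that ulimit -v takes KiB while RLIMIT_AS is in bytes, and an unprivileged process can only raise the soft limit up to the hard limit.

/* Sketch: raise the per-process virtual memory limit (RLIMIT_AS,
 * what ulimit -v controls) before mapping. RLIMIT_AS is in bytes;
 * ulimit -v is in KiB. An unprivileged process can only raise the
 * soft limit as far as the hard limit. */
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    rlim_t want = (rlim_t)1536 * 1024 * 1024 * 1024;  /* 1.5TB */

    if (getrlimit(RLIMIT_AS, &rl) < 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < want) {
        /* clamp to the hard limit so setrlimit() can't fail on it */
        rl.rlim_cur = (rl.rlim_max == RLIM_INFINITY || rl.rlim_max > want)
                          ? want : rl.rlim_max;
        if (setrlimit(RLIMIT_AS, &rl) < 0) {
            perror("setrlimit");
            return 1;
        }
    }
    printf("RLIMIT_AS soft limit is now %llu bytes\n",
           (unsigned long long)rl.rlim_cur);
    return 0;
}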

answered Oct 04 '22 by metadaddy


Is there some sort of per-user quota, limiting the amount of memory available to a user process?
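
There is: per-process resource limits. A quick way to inspect the relevant ones from code (a sketch of mine using POSIX getrlimit(); RLIMIT_AS turned out to be the one that mattered here):

/* Sketch: print the per-process limits that most often cap mmap(). */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource) {
    struct rlimit rl;
    if (getrlimit(resource, &rl) < 0) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%-15s soft=unlimited\n", name);
    else
        printf("%-15s soft=%llu bytes\n", name,
               (unsigned long long)rl.rlim_cur);
}

int main(void) {
    show("RLIMIT_AS", RLIMIT_AS);           /* address space, ulimit -v */
    show("RLIMIT_DATA", RLIMIT_DATA);       /* data segment, ulimit -d */
    show("RLIMIT_MEMLOCK", RLIMIT_MEMLOCK); /* locked memory, ulimit -l */
    return 0;
}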

answered Oct 04 '22 by Martin Beckett


My guess is that the kernel is having difficulty allocating the memory it needs to track this mapping. I don't know how swapped-out pages are tracked in the Linux kernel (and I assume most of the file would be in the swapped-out state most of the time), but the kernel may end up needing a table entry for each page of memory the file occupies. Since this file might be mmapped by more than one process, the kernel has to track the mapping from each process's point of view, which maps to another view, which in turn maps to secondary storage (including fields for the device and location).

This would fit into your addressable space, but might not fit (at least contiguously) within physical memory.

If anyone knows more about how Linux does this I'd be interested to hear about it.
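
The address-space-versus-physical-memory distinction is easy to observe directly. Below is a small Linux-specific experiment (my own sketch; it reads /proc/self/status, and the sparse.dat filename is just an example) showing that a large MAP_SHARED mapping grows VmSize immediately while VmRSS stays tiny until pages are actually faulted in:

/* Sketch (Linux-specific): show that mapping a large sparse file
 * consumes virtual address space (VmSize) but almost no physical
 * memory (VmRSS) until pages are touched. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static void print_vm(const char *label) {
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f) { perror("fopen"); return; }
    printf("%s:\n", label);
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            printf("  %s", line);
    fclose(f);
}

int main(void) {
    off_t size = 1UL << 33;  /* 8GB: big enough to see the effect */
    int fd = open("sparse.dat", O_RDWR | O_CREAT | O_TRUNC, 0666);
    if (fd < 0 || ftruncate(fd, size) < 0) { perror("setup"); return 1; }

    print_vm("before mmap");
    char *p = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    print_vm("after mmap (nothing touched yet)");

    p[0] = 1;                /* fault in a single page */
    print_vm("after touching one page");

    munmap(p, (size_t)size);
    close(fd);
    return 0;
}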

answered Oct 03 '22 by nategoose