I was curious what kind of buffer sizes write() and read() could handle on Linux/OSX/FreeBSD, so I started playing around with dumb programs like the following:
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
int main( void ) {
    size_t s = 8*1024*1024 - 16*1024;
    while( 1 ) {
        s += 1024;
        int f = open( "test.txt", O_CREAT | O_WRONLY | O_TRUNC, S_IRUSR | S_IWUSR | S_IXUSR );
        char mem[s];                               /* variable-length array, allocated on the stack */
        ssize_t written = write( f, &mem[0], s );  /* write() returns ssize_t, not size_t */
        close( f );
        printf( "(%zu) %zd\n", sizeof(size_t), written );
    }
    return 0;
}
This allowed me to test how close to an apparent "8MB barrier" I could get before segfaulting. Somewhere around the 8MB mark the program dies; here's an example of the output:
(8) 8373248
(8) 8374272
(8) 8375296
(8) 8376320
(8) 8377344
(8) 8378368
(8) 8379392
(8) 8380416
(8) 8381440
(8) 8382464
Segmentation fault: 11
The behavior is the same on OS X and Linux; my FreeBSD VM, however, is not only much faster at running this test, it can also go much further. I've successfully tested it up to 511MB, which is a ridiculous amount of data to write in one call.
What is it that makes the write() call segfault, and how can I figure out the maximum amount that I can possibly write() in a single call, without doing something ridiculous like I'm doing right now?
(All three operating systems are 64-bit: OS X 10.7.3, Ubuntu 11.10, FreeBSD 9.0.)
The fault isn't within write(); it's a stack overflow. Try this:
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
int main( void )
{
    void *mem;
    size_t s = 512*1024*1024 - 16*1024;
    while( 1 )
    {
        s += 1024;
        int f = open( "test.txt", O_CREAT | O_WRONLY | O_TRUNC, S_IRUSR | S_IWUSR | S_IXUSR );
        mem = malloc( s );    /* heap allocation instead of a stack VLA */
        if( mem == NULL )     /* unlike a VLA, malloc() reports failure cleanly */
        {
            perror( "malloc" );
            close( f );
            return 1;
        }
        ssize_t written = write( f, mem, s );
        free( mem );
        close( f );
        printf( "(%zu) %zd\n", sizeof(size_t), written );
    }
    return 0;
}
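For the record: the ~8MB ceiling matches the default stack size limit on Linux and OS X, and FreeBSD's much larger default (often 512MB on 64-bit) is presumably why your VM got so much further. A minimal sketch of how to inspect that limit with getrlimit(), along with the largest count POSIX guarantees for a single write():

#include <limits.h>
#include <stdio.h>
#include <sys/resource.h>

int main( void )
{
    struct rlimit rl;

    /* The soft stack limit caps how large a stack VLA can grow;
       it typically defaults to 8MB on Linux and OS X, which is
       why the VLA version dies just shy of that mark.
       (RLIM_INFINITY prints as a very large number.) */
    if( getrlimit( RLIMIT_STACK, &rl ) == 0 )
        printf( "stack soft limit: %llu bytes\n", (unsigned long long)rl.rlim_cur );

    /* POSIX only defines write() for counts up to SSIZE_MAX;
       anything larger is implementation-defined. */
    printf( "SSIZE_MAX: %lld bytes\n", (long long)SSIZE_MAX );
    return 0;
}

Even below those limits, write() may return having written fewer bytes than requested, so the return value always needs checking.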