Performance of fwrite and write size

I'm writing a large two-dimensional numerical array out to a binary file (final size ~75 MB).

I'm doing this on a Linux system. First, is there a better method or syscall than fwrite to write the file as fast as possible?

Second, if I should use fwrite, should I just write the whole file as one contiguous block

fwrite( buf, sizeof(float), 6700*6700, fp );

or write it as a series of chunks

fwrite( buf, sizeof(float), 8192, fp );
fwrite( buf + 8192, sizeof(float), 8192, fp );
...

If I should chunk the writing, how big should each chunk be?

Asked Dec 03 '10 by Peter Smith


2 Answers

I agree with miked and Jerome for the most part, but... only for a modern OS. If you are working on an embedded flash file system, there are some major exceptions. In that environment, if you suspect fwrite(), invest in a quick test using write() with large blocks.

Today, I found a 4x speed improvement by moving to write(). This was due to a POSIX layer in the embedded OS that transcribed fwrite()s into fputc()s... an underlying flash file opened for synchronous writes just thrashes in that case. write() was implemented by routines much closer to the OS (Nucleus), in which block writes were not broken up into bytes.

Just saying... if you're unsure between the two variants, it's probably best to just try them out.
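For reference, a minimal sketch of what such a write()-based test might look like on a POSIX system. The function name, path handling, and block loop here are only illustrative, not taken from any particular platform:

#include <fcntl.h>      /* open */
#include <stddef.h>     /* size_t */
#include <unistd.h>     /* write, close, ssize_t */

/* Write n bytes from buf with the raw write() syscall, one large block per call. */
static int raw_write_file(const char *path, const void *buf, size_t n)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    const char *p = buf;
    while (n > 0) {
        ssize_t w = write(fd, p, n);   /* write() may return a short count */
        if (w < 0) {
            close(fd);
            return -1;
        }
        p += w;
        n -= (size_t)w;
    }
    return close(fd);
}

Timing this against the equivalent fwrite() version on your target quickly shows whether the platform's stdio layer has the byte-at-a-time problem described above.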

Answered by charo


Just use fwrite (no need to drop down to lower-level syscalls) and do it as one chunk. The lower levels (stdio's buffering and the kernel) will figure out how to buffer and split up that write better than you can by hand. I've never been able to beat fwrite's performance on things like this: large sequential writes.
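As a sketch of the single-call approach with the error checking that is easy to forget (the 6700x6700 dimensions come from the question; the output file name is just an assumption for illustration):

#include <stdio.h>
#include <stdlib.h>

#define DIM 6700

int main(void)
{
    size_t count = (size_t)DIM * DIM;
    float *buf = malloc(count * sizeof(float));
    if (!buf)
        return 1;
    /* ... fill buf with your data ... */

    FILE *fp = fopen("array.bin", "wb");
    if (!fp) {
        free(buf);
        return 1;
    }

    /* One fwrite call; stdio and the kernel handle the buffering and splitting. */
    size_t written = fwrite(buf, sizeof(float), count, fp);
    if (written != count)
        fprintf(stderr, "short write: %zu of %zu items\n", written, count);

    fclose(fp);
    free(buf);
    return 0;
}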

Answered by miked