
Performance drop with fputs after writing more than 2.5 GB. Why?

Currently I'm working on a small program that reads large files and sorts them. After some benchmarking I stumbled upon a weird performance issue. When the input file got too large, writing the output file took longer than the actual sorting. So I went deeper into the code and finally realized that the fputs function might be the problem. So I wrote this little benchmark program.

#include <stdio.h>
#include <time.h>

int main()
{
    int i;
    const int linecount = 50000000;
    //Test line with 184 bytes
    const char* dummyline = "THIS IS A LONG TEST LINE JUST TO SHOW THAT THE WRITER IS GUILTY OF GETTING SLOW AFTER A CERTAIN AMOUNT OF DATA THAT HAS BEEN WRITTEN. hkgjhkdsfjhgk jhksjdhfkjh skdjfhk jshdkfjhksjdhf\r\n";
    clock_t start = clock();
    clock_t last = start;

    FILE* fp1 = fopen("D:\\largeTestFile.txt", "w");
    for(i=0; i<linecount; i++){
        fputs(dummyline, fp1);
        if(i%100000==0){
            printf("%i Lines written.\r", i);
            if(i%1000000 == 0){
                clock_t ms = clock()-last;
                printf("Writing %i Lines took %ld ms\n", i, (long)ms);
                last = clock();
            }
        }
    }
    printf("%i Lines written.\n", i);
    fclose(fp1);
    clock_t ms = clock()-start;
    printf("Writing %i Lines took %ld ms\n", i, (long)ms);

    return 0;
}

When you execute the program, you can see a clear drop in performance after about 14 to 15 million lines, which is about 2.5 GB of data. The writing takes about 3 times as long as before. The threshold of 2 GB suggests a 64-bit issue, but I haven't found anything about that on the web. I also tested whether there is a difference between binary and character mode (i.e. "wb" and "w"), but there is none. I also tried to preallocate the file size (to avoid file fragmentation) by seeking to the expected end and writing a zero byte, but that also had little to no effect.
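For reference, such a preallocation can be sketched roughly like this (the helper below is only illustrative, not my actual code; it uses the MSVC-specific _fseeki64, since the standard fseek takes a long offset and cannot address more than 2 GB):

#include <stdio.h>

// Sketch: preallocate a file by seeking to the expected end and writing
// a single zero byte. _fseeki64 is MSVC-specific and is used here because
// fseek()'s long offset is limited to 2 GB.
int preallocate(const char* path, long long expectedSize)
{
    FILE* fp = fopen(path, "wb");
    if(fp == NULL)
        return -1;
    // jump to one byte before the intended end of the file...
    if(_fseeki64(fp, expectedSize - 1, SEEK_SET) != 0){
        fclose(fp);
        return -1;
    }
    // ...and write a zero byte so the file is extended to its full size
    fputc(0, fp);
    fclose(fp);
    return 0;
}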

I'm running a Windows 7 64-bit machine, but I've tested it on a Windows Server 2008 64-bit R1 machine as well. Currently I'm testing on an NTFS file system with more than 200 GB of free space. My system has 16 GB of RAM, so that shouldn't be a problem either. The test program only uses about 700 KB. The page faults, which I suspected earlier, are also very low (~400 page faults during the whole runtime).

I know that for such large amounts of data the fwrite() function would suit the task better, but at the moment I'm interested in whether there is another workaround and why this is happening. Any help would be highly appreciated.
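For completeness, swapping fputs for fwrite in the benchmark loop would look roughly like this sketch (it reuses dummyline, fp1, i and linecount from the program above and needs <string.h> for strlen):

// compute the line length once instead of letting fputs scan the string
size_t linelen = strlen(dummyline);
for(i=0; i<linecount; i++){
    // write the raw bytes in a single call
    fwrite(dummyline, 1, linelen, fp1);
}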

asked Nov 10 '11 by Aceonline

1 Answer

The main reason for all this is the Windows disk cache. Your program eats all the RAM for it, then swapping begins, and thus the slowdowns. To fight this you need to:

1) Open the file in commit mode using the "c" flag:

FILE* fp1 = fopen("D:\\largeTestFile.txt", "wc");

2) Periodically write the buffer to disk using the fflush function:

if(i%1000000 == 0)
{
    // write content to disk
    fflush(fp1);

    clock_t ms = clock()-last;
    printf("Writing %i Lines took %ld ms\n", i, (long)ms);
    last = clock();
}

This way you will use a reasonable amount of the disk cache, and the speed will basically be limited by the speed of your hard drive.
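Putting both points together, the write loop of the benchmark would then look roughly like this sketch (the "c" flag and the commit-on-fflush behaviour are Microsoft CRT extensions):

// open in commit mode: with the Microsoft-specific "c" flag, fflush()
// writes the buffered contents through to the physical disk
FILE* fp1 = fopen("D:\\largeTestFile.txt", "wc");

for(i=0; i<linecount; i++){
    fputs(dummyline, fp1);
    if(i%1000000 == 0){
        // commit to disk so the Windows file cache cannot grow unbounded
        fflush(fp1);
    }
}
fclose(fp1);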

answered by Petr Abdulin