Java file I/O throughput decline

I have a program in which each thread reads lines from a file in batches, processes the lines, and writes them out to a different file. Four threads split the list of files to process among themselves. I'm seeing strange performance differences across two cases:

  • Four files with 50,000 lines each
    • Throughput starts at 700 lines/sec processed, declines to ~100 lines/sec
  • 30,000 files with 12 lines each
    • Throughput starts around 800 lines/sec and remains steady

This is internal software I'm working on so unfortunately I can't share any source code, but the main steps of the program are:

  1. Split the list of files among four worker threads.
  2. Start all threads.
  3. Each thread reads up to 100 lines at once and stores them in a String[] array.
  4. The thread applies the transformation to all lines in the array.
  5. The thread writes the lines to an output file (not the same as the input file).
  6. Steps 3-5 repeat for each thread until all files are completely processed.
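The per-thread loop in steps 3-5 can be sketched as follows. This is an assumption about the shape of the code, not the actual internal source: the `transform` body is a placeholder (here just `toUpperCase`), and the `.out` output-file naming is invented for illustration.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class Worker implements Runnable {
    static final int BATCH = 100;
    private final List<Path> files;

    Worker(List<Path> files) { this.files = files; }

    // Placeholder for the real (internal) line transformation.
    static String transform(String line) {
        return line.toUpperCase();
    }

    @Override
    public void run() {
        for (Path in : files) {
            Path out = Paths.get(in.toString() + ".out"); // assumed naming scheme
            try (BufferedReader r = Files.newBufferedReader(in);
                 BufferedWriter w = Files.newBufferedWriter(out)) {
                String[] batch = new String[BATCH];
                int n;
                do {
                    // Step 3: read up to 100 lines into the array.
                    n = 0;
                    String line;
                    while (n < BATCH && (line = r.readLine()) != null) {
                        batch[n++] = line;
                    }
                    // Steps 4-5: transform and write each line.
                    for (int i = 0; i < n; i++) {
                        w.write(transform(batch[i]));
                        w.newLine();
                    }
                } while (n == BATCH); // step 6: repeat until the file is exhausted
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }
}
```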

What I don't understand is why 30k files with 12 lines each give me greater throughput than a few files with many lines each. I would have expected the overhead of opening and closing many files to be greater than that of reading through a single large file. In addition, the performance decline in the former case is exponential in nature.

I've set the maximum heap size to 1024 MB and it appears to use 100 MB at most, so an overtaxed GC isn't the problem. Do you have any other ideas?

Asked May 04 '26 by A B
2 Answers

From your numbers, I guess that GC is probably not the issue. I suspect this is normal behavior for a disk being accessed by many concurrent threads. When the files are big, the disk has to switch between the threads' read positions many times (incurring significant seek time), and the overhead becomes apparent. With small files, each file may be read as a single chunk with no extra seeks, so the threads do not interfere with each other too much.

When working with a single, standard disk, serial IO is usually better than parallel IO.
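One way to apply this while keeping the transform work parallel is to serialize just the disk reads behind a shared lock, so only one thread issues IO at a time. A minimal sketch, assuming a `readBatch` helper that does not exist in the original program:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Sketch: serialize disk access across worker threads with one shared lock,
// so reads happen one thread at a time while transforms stay parallel.
public class SerialIo {
    private static final Object DISK_LOCK = new Object();

    // Reads up to max lines while holding the disk lock.
    static String[] readBatch(BufferedReader reader, int max) {
        synchronized (DISK_LOCK) {
            List<String> lines = new ArrayList<>();
            try {
                String line;
                while (lines.size() < max && (line = reader.readLine()) != null) {
                    lines.add(line);
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return lines.toArray(new String[0]);
        }
    }
}
```

Each worker then alternates between a short, exclusive read and lock-free processing, which avoids interleaving seeks from four readers at once.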

Answered May 05 '26 by Eyal Schneider


I am assuming that the files are located on the same disk, in which case you are probably thrashing the disk (or invalidating the disk/OS cache) with multiple threads attempting to read and write concurrently. A better pattern may be to have a dedicated reader/writer thread handle the IO, and to alter your design so that the transform (which sounds expensive) is handled by multiple threads. The IO thread can prefetch input and overlap writing with the transform operations as results become available. This should stop the disk thrashing and balance the IO and CPU sides of your workload.
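The pattern described above is a producer/consumer pipeline. A minimal sketch, assuming a hypothetical `PipelineSketch` with an uppercase transform standing in for the real one; in the actual program the feeding loop would be the single IO thread reading batches from disk:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// One producer feeds line batches into a bounded queue;
// a pool of workers consumes batches and runs the transform.
public class PipelineSketch {
    static final String[] POISON = new String[0]; // sentinel: no more batches

    public static List<String> run(List<String[]> batches, int workers)
            throws InterruptedException {
        BlockingQueue<String[]> queue = new ArrayBlockingQueue<>(16);
        Queue<String> results = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    String[] batch;
                    while ((batch = queue.take()) != POISON) {
                        for (String line : batch) {
                            results.add(line.toUpperCase()); // placeholder transform
                        }
                    }
                    queue.put(POISON); // hand the sentinel to the next worker
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // In the real program, this loop is the dedicated IO thread reading from disk.
        for (String[] b : batches) queue.put(b);
        queue.put(POISON);
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        List<String> out = new ArrayList<>(results);
        Collections.sort(out); // order across workers is nondeterministic
        return out;
    }
}
```

The bounded queue provides backpressure: the IO thread stops reading ahead when the workers fall behind, so memory stays flat and the disk sees one sequential reader instead of four competing ones.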

Answered May 05 '26 by Tim Lloyd

