 

Why is BufferedReader read() much slower than readLine()?

I need to read a file one character at a time and I'm using the read() method from BufferedReader. *

I found that read() is about 10x slower than readLine(). Is this expected? Or am I doing something wrong?

Here's a benchmark with Java 7. The input test file has about 5 million lines and 254 million characters (~242 MB) **:

The read() method takes about 7000 ms to read all the characters:

@Test
public void testRead() throws IOException, UnindexableFastaFileException{

    BufferedReader fa= new BufferedReader(new FileReader(new File("chr1.fa")));

    long t0= System.currentTimeMillis();
    int c;
    while( (c = fa.read()) != -1 ){
        //
    }
    long t1= System.currentTimeMillis();
    System.err.println(t1-t0); // ~ 7000 ms

}

The readLine() method takes only ~700 ms:

@Test
public void testReadLine() throws IOException{

    BufferedReader fa= new BufferedReader(new FileReader(new File("chr1.fa")));

    String line;
    long t0= System.currentTimeMillis();
    while( (line = fa.readLine()) != null ){
        //
    }
    long t1= System.currentTimeMillis();
    System.err.println(t1-t0); // ~ 700 ms
}

* Practical purpose: I need to know the length of each line, including the newline characters (\n or \r\n) AND the line length after stripping them. I also need to know whether a line starts with the > character. For a given file this is done only once at the start of the program. Since the EOL characters are not returned by BufferedReader.readLine(), I'm resorting to the read() method. If there are better ways of doing this, please say.

** The gzipped file is here http://hgdownload.cse.ucsc.edu/goldenpath/hg19/chromosomes/chr1.fa.gz. For those who may be wondering, I'm writing a class to index fasta files.
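To make this concrete, here's a simplified sketch of the kind of bookkeeping I mean, using read() (the class name is just for illustration and error handling is omitted):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class LineLengthSketch {
    public static void main(String[] args) throws IOException {
        try (BufferedReader fa = new BufferedReader(new FileReader("chr1.fa"))) {
            long withEol = 0;      // line length including \n or \r\n
            long withoutEol = 0;   // line length after stripping line terminators
            boolean lineStart = true;
            boolean isHeader = false; // does the line start with '>'?
            int c;
            while ((c = fa.read()) != -1) {
                withEol++;
                if (c == '\n') {
                    // One complete line seen: record (withEol, withoutEol, isHeader) here.
                    withEol = 0;
                    withoutEol = 0;
                    lineStart = true;
                    isHeader = false;
                } else if (c != '\r') {
                    if (lineStart) {
                        isHeader = (c == '>');
                        lineStart = false;
                    }
                    withoutEol++;
                }
            }
        }
    }
}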

asked Dec 25 '16 by dariober




1 Answer

The important thing when analyzing performance is to have a valid benchmark before you start. So let's start with a simple JMH benchmark that shows what our expected performance after warmup would be.

One thing we have to consider is that, since modern operating systems like to cache file data that is accessed regularly, we need some way to clear the caches between tests. On Windows there's a small utility (EmptyStandbyList.exe) that does just this; on Linux you should be able to do it by writing to the /proc/sys/vm/drop_caches pseudo file.
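For what it's worth, a rough Linux counterpart of the clearFileCaches() helper used in the benchmark below could look like this sketch (my own assumption, not part of the benchmark runs; it needs root and drops the whole page cache):

import java.io.IOException;

public class LinuxCacheDropper {
    // Flush dirty pages and drop the page cache by writing to the
    // /proc/sys/vm/drop_caches pseudo file; requires running as root.
    static void clearFileCaches() throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("sh", "-c", "sync; echo 3 > /proc/sys/vm/drop_caches");
        pb.inheritIO();
        pb.start().waitFor();
    }
}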

The code then looks as follows:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

@BenchmarkMode(Mode.AverageTime)
@Fork(1)
public class IoPerformanceBenchmark {
    private static final String FILE_PATH = "test.fa";

    @Benchmark
    public int readTest() throws IOException, InterruptedException {
        clearFileCaches();
        int result = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(FILE_PATH))) {
            int value;
            while ((value = reader.read()) != -1) {
                result += value;
            }
        }
        return result;
    }

    @Benchmark
    public int readLineTest() throws IOException, InterruptedException {
        clearFileCaches();
        int result = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(FILE_PATH))) {
            String line;
            while ((line = reader.readLine()) != null) {
                result += line.chars().sum();
            }
        }
        return result;
    }

    private void clearFileCaches() throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("EmptyStandbyList.exe", "standbylist");
        pb.inheritIO();
        pb.start().waitFor();
    }
}

and if we run it with

chcp 65001 # set codepage to utf-8
mvn clean install; java "-Dfile.encoding=UTF-8" -server -jar .\target\benchmarks.jar

we get the following results (about 2 seconds are needed to clear the caches for me, and I'm running this on an HDD, so that's why it's a good deal slower than for you):

Benchmark                            Mode  Cnt  Score   Error  Units
IoPerformanceBenchmark.readLineTest  avgt   20  3.749 ± 0.039   s/op
IoPerformanceBenchmark.readTest      avgt   20  3.745 ± 0.023   s/op

Surprise! As expected, there's no performance difference here at all once the JVM has settled into a stable state. But there is one outlier in the readTest method:

# Warmup Iteration   1: 6.186 s/op
# Warmup Iteration   2: 3.744 s/op

which is exactly the problem you're seeing. The most likely reason I can think of is that OSR isn't doing a good job here, or that the JIT simply kicks in too late to make a difference on the first iteration.

Depending on your use case this might be a big problem or negligible (if you're reading a thousand files it won't matter, if you're only reading one this is a problem).

Solving such a problem is not easy and there are no general solutions, although there are ways to handle it. One easy test to see whether we're on the right track is to run the code with the -Xcomp option, which forces HotSpot to compile every method on its first invocation. And indeed, doing so causes the large delay at the first invocation to disappear:

# Warmup Iteration   1: 3.965 s/op
# Warmup Iteration   2: 3.753 s/op
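For reference, one way to pass that flag to the forked benchmark JVM is jvmArgsAppend on JMH's @Fork annotation; the class below is only a minimal illustration of the mechanism, not part of the benchmark above:

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Mode;

@BenchmarkMode(Mode.AverageTime)
@Fork(value = 1, jvmArgsAppend = {"-Xcomp"}) // force HotSpot to compile every method up front
public class XcompForkExample {
    @Benchmark
    public int noop() {
        return 0; // stand-in body; the real benchmark methods would go here
    }
}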

Possible solution

Now that we have a good idea what the actual problem is (my guess is still that all those locks are neither being coalesced nor benefiting from the efficient biased-locking implementation), the solution is rather straightforward and simple: reduce the number of function calls (so yes, we could've arrived at this solution without everything above, but it's always nice to have a good grip on the problem, and there might have been a solution that didn't involve changing much code).

The following code runs consistently faster than either of the other two - you can play with the array size, but it's surprisingly unimportant (presumably because, contrary to the other methods, read(char[]) does not have to acquire the lock for every single character, so the per-character cost is much lower to begin with).

private static final int BUFFER_SIZE = 256;
private char[] arr = new char[BUFFER_SIZE];

@Benchmark
public int readArrayTest() throws IOException, InterruptedException {
    clearFileCaches();
    int result = 0;
    try (BufferedReader reader = new BufferedReader(new FileReader(FILE_PATH))) {
        int charsRead;
        while ((charsRead = reader.read(arr)) != -1) {
            for (int i = 0; i < charsRead; i++) {
                result += arr[i];
            }
        }
    }
    return result;
}

This is most likely good enough performance-wise, but if you wanted to improve performance even further, using a file mapping might help (I wouldn't count on too large an improvement in a case such as this, but if you know that your text is always ASCII, you could make some further optimizations).
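As a rough idea of what the file-mapping approach could look like (a sketch of my own, not benchmarked here; note that a single MappedByteBuffer is limited to 2 GB, so truly large files need several mappings):

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class MappedAsciiSum {
    // Map the whole file read-only and walk it byte by byte; for pure ASCII
    // input every byte is exactly one character.
    public static long sumBytes(String path) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            long result = 0;
            while (buffer.hasRemaining()) {
                result += buffer.get();
            }
            return result;
        }
    }
}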

answered by Voo