I have a pretty big file that I need to read binary data from, and in the process I realized that the default BinaryReader implementation in .NET is pretty slow. Looking at it with .NET Reflector, I came across this:
public virtual int ReadInt32()
{
    if (this.m_isMemoryStream)
    {
        MemoryStream stream = this.m_stream as MemoryStream;
        return stream.InternalReadInt32();
    }
    this.FillBuffer(4);
    return (((this.m_buffer[0] | (this.m_buffer[1] << 8)) | (this.m_buffer[2] << 0x10)) | (this.m_buffer[3] << 0x18));
}
Which strikes me as extremely inefficient, considering that CPUs have been designed to work with 32-bit values ever since 32-bit processors were introduced.
So I made my own (unsafe) FastBinaryReader class with code such as this instead:
public unsafe class FastBinaryReader : IDisposable
{
    private static byte[] buffer = new byte[50];
    //private Stream baseStream;
    public Stream BaseStream { get; private set; }

    public FastBinaryReader(Stream input)
    {
        BaseStream = input;
    }

    public int ReadInt32()
    {
        BaseStream.Read(buffer, 0, 4);
        fixed (byte* numRef = &(buffer[0]))
        {
            return *(((int*)numRef));
        }
    }
    ...
}
This is much faster - I managed to shave 5-7 seconds off the time it took to read a 500 MB file, but it's still pretty slow overall (29 seconds initially, ~22 seconds with my FastBinaryReader).
It still baffles me why it takes so long to read such a relatively small file. If I copy the file from one disk to another it takes only a couple of seconds, so disk throughput is not an issue.
I further inlined the ReadInt32, etc. calls, and I ended up with this code:
using (var br = new FastBinaryReader(new FileStream(cacheFilePath, FileMode.Open, FileAccess.Read, FileShare.Read, 0x10000, FileOptions.SequentialScan)))
{
    while (br.BaseStream.Position < br.BaseStream.Length)
    {
        var doc = DocumentData.Deserialize(br);
        docData[doc.InternalId] = doc;
    }
}
public static unsafe DocumentData Deserialize(FastBinaryReader reader)
{
    byte[] buffer = new byte[4 + 4 + 8 + 4 + 4 + 1 + 4];
    reader.BaseStream.Read(buffer, 0, buffer.Length);

    DocumentData data = new DocumentData();
    fixed (byte* numRef = &(buffer[0]))
    {
        data.InternalId = *((int*)&(numRef[0]));
        data.b = *((int*)&(numRef[4]));
        data.c = *((long*)&(numRef[8]));
        data.d = *((float*)&(numRef[16]));
        data.e = *((float*)&(numRef[20]));
        data.f = numRef[24];
        data.g = *((int*)&(numRef[25]));
    }
    return data;
}
Any further ideas on how to make this even faster? I was thinking maybe I could use marshalling to map the entire file straight into memory on top of some custom structure, since the data is linear, fixed size and sequential.
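For what it's worth, a minimal sketch of that memory-mapping idea could look like the following. The DocumentRecord struct, its field names, and LoadViaMemoryMap are hypothetical stand-ins for the real record format; it assumes the file really is a packed sequence of these fixed-size records and that .NET 4's MemoryMappedFile is available:

using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct DocumentRecord    // hypothetical layout matching the 29-byte record above
{
    public int InternalId;
    public int B;
    public long C;
    public float D;
    public float E;
    public byte F;
    public int G;
}

static void LoadViaMemoryMap(string cacheFilePath)
{
    long fileLength = new FileInfo(cacheFilePath).Length;
    int recordSize = Marshal.SizeOf(typeof(DocumentRecord));    // 29 bytes with Pack = 1

    using (var mmf = MemoryMappedFile.CreateFromFile(cacheFilePath, FileMode.Open, null, 0, MemoryMappedFileAccess.Read))
    using (var view = mmf.CreateViewAccessor(0, fileLength, MemoryMappedFileAccess.Read))
    {
        for (long offset = 0; offset + recordSize <= fileLength; offset += recordSize)
        {
            DocumentRecord rec;
            view.Read(offset, out rec);    // copies one packed record out of the mapping
            // ... store rec.InternalId / rec.B / ... wherever they need to go
        }
    }
}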
SOLVED: I came to the conclusion that FileStream's buffering/BufferedStream are flawed. Please see the accepted answer and my own answer (with the solution) below.
I ran into a similar performance issue with BinaryReader/FileStream, and after profiling, I discovered that the problem isn't with FileStream buffering, but instead with this line:
while (br.BaseStream.Position < br.BaseStream.Length) {
Specifically, the property br.BaseStream.Length on a FileStream makes a (relatively) slow system call to get the file size on each loop. After changing the code to this:
long length = br.BaseStream.Length;
while (br.BaseStream.Position < length) {
and using an appropriate buffer size for the FileStream, I achieved similar performance to the MemoryStream example.
Interestingly, reading the whole file into a buffer and going through it in memory made a huge difference. This comes at the cost of memory, but we have plenty.
This makes me think that FileStream's (or BufferedStream's, for that matter) buffer implementation is flawed, because no matter what buffer size I tried, performance still sucked.
using (var br = new FileStream(cacheFilePath, FileMode.Open, FileAccess.Read, FileShare.Read, 0x10000, FileOptions.SequentialScan))
{
    byte[] buffer = new byte[br.Length];
    br.Read(buffer, 0, buffer.Length);

    using (var memoryStream = new MemoryStream(buffer))
    {
        while (memoryStream.Position < memoryStream.Length)
        {
            var doc = DocumentData.Deserialize(memoryStream);
            docData[doc.InternalId] = doc;
        }
    }
}
Down to 2-5 seconds now (depending on the disk cache, I'm guessing) from 22. That's good enough for now.
When you do a file copy, large chunks of data are read and written to disk.
You are reading the entire file four bytes at a time, which is bound to be slower. Even if the stream implementation is smart enough to buffer, you still have at least 500 MB / 4 = 131,072,000 API calls.
Isn't it wiser to just read a large chunk of data, then go through it sequentially, and repeat until the file has been processed?
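For illustration only, a chunked read loop along those lines might look roughly like this; RecordSize, RecordsPerChunk and the field decoding are placeholders rather than the asker's actual types:

const int RecordSize = 29;            // size of one serialized DocumentData record
const int RecordsPerChunk = 4096;     // arbitrary; tune to taste

byte[] chunk = new byte[RecordSize * RecordsPerChunk];
int leftover = 0;

using (var fs = new FileStream(cacheFilePath, FileMode.Open, FileAccess.Read,
                               FileShare.Read, 0x10000, FileOptions.SequentialScan))
{
    int bytesRead;
    while ((bytesRead = fs.Read(chunk, leftover, chunk.Length - leftover)) > 0)
    {
        int available = leftover + bytesRead;
        int offset = 0;

        // walk the complete records contained in this chunk
        for (; offset + RecordSize <= available; offset += RecordSize)
        {
            int internalId = BitConverter.ToInt32(chunk, offset);
            // ... decode the remaining fields at offset + 4, offset + 8, ...
        }

        // keep any partial record for the next iteration
        leftover = available - offset;
        Buffer.BlockCopy(chunk, offset, chunk, 0, leftover);
    }
}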
One caveat: you might want to double-check your CPU's endianness... assuming little-endian is not quite safe (think: Itanium etc.).
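If that matters in practice, one cheap guard is to check BitConverter.IsLittleEndian once and fall back to assembling the value by hand on a big-endian host; a rough sketch:

static int ReadInt32LittleEndian(byte[] buffer, int offset)
{
    if (BitConverter.IsLittleEndian)
        return BitConverter.ToInt32(buffer, offset);    // fast path on little-endian hosts

    // big-endian host: assemble the little-endian value explicitly
    return buffer[offset]
         | (buffer[offset + 1] << 8)
         | (buffer[offset + 2] << 16)
         | (buffer[offset + 3] << 24);
}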
You might also want to see if BufferedStream makes any difference (I'm not sure it will).
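Trying that is a one-line change, something like the following (cacheFilePath and FastBinaryReader as in the question; whether the extra layer helps depends on the access pattern):

using (var fs = new FileStream(cacheFilePath, FileMode.Open, FileAccess.Read,
                               FileShare.Read, 0x10000, FileOptions.SequentialScan))
using (var bs = new BufferedStream(fs, 0x10000))    // extra buffering layer on top of the FileStream
using (var br = new FastBinaryReader(bs))
{
    // read as before
}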