Consider a really huge file (maybe more than 4 GB) on disk. I want to scan through this file and count how many times a specific binary pattern occurs.
My thought is:
Use a memory-mapped file (CreateFileMapping or Boost's mapped_file) to map the file into virtual memory.
For each 100 MB of mapped memory, create one thread to scan it and tally the occurrences.
Is this feasible? Is there a better way to do it?
Update:
A memory-mapped file turned out to be a good choice: scanning through a 1.6 GB file completed within 11 s.
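For reference, here is roughly what that scan looks like with Boost.Iostreams' mapped_file_source (one of the two options mentioned above). This is only a minimal sketch: it assumes a 64-bit build so the whole file fits in the address space, and the file name and pattern bytes are placeholders.

```cpp
#include <boost/iostreams/device/mapped_file.hpp>
#include <algorithm>
#include <cstddef>
#include <iostream>

int main() {
    // Map the whole file read-only; the OS pages it in on demand.
    boost::iostreams::mapped_file_source file("huge.bin"); // placeholder name

    // Placeholder pattern; substitute the bytes you are counting.
    const char pattern[] = {'\xDE', '\xAD', '\xBE', '\xEF'};

    const char* it  = file.data();
    const char* end = file.data() + file.size();
    std::size_t count = 0;
    while ((it = std::search(it, end, pattern, pattern + sizeof pattern)) != end) {
        ++count;
        ++it; // step one byte so overlapping occurrences are counted too
    }
    std::cout << "occurrences: " << count << '\n';
}
```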
Thanks.
Creating 20 threads, each supposed to handle some 100 MB of the file, is likely to only worsen performance, since the HD will have to read from several unrelated places at the same time.
HD performance is at its peak when it reads sequential data. So assuming your huge file is not fragmented, the best thing to do would probably be to use just one thread and read from start to end in chunks of a few (say 4) MB.
But what do I know. File systems and caches are complex. Do some testing and see what works best.
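For what it's worth, a minimal sketch of that single-threaded version (file name and pattern bytes are placeholders again). The one subtlety is carrying over the last pattern-length-minus-one bytes of each chunk, so a match that straddles two chunks is still found; since a full match can never fit inside the carried tail alone, nothing is counted twice.

```cpp
#include <algorithm>
#include <fstream>
#include <iostream>
#include <vector>

int main() {
    const std::vector<char> pattern = {'\xCA', '\xFE'};   // placeholder pattern
    const std::size_t chunkSize = 4 * 1024 * 1024;        // 4 MB, as suggested
    const std::size_t overlap = pattern.size() - 1;

    std::ifstream in("huge.bin", std::ios::binary);       // placeholder name
    std::vector<char> buf;
    std::size_t count = 0;

    while (in) {
        // Keep the tail of the previous chunk so boundary matches survive.
        std::vector<char> tail(buf.end() - std::min(overlap, buf.size()), buf.end());
        buf.assign(tail.begin(), tail.end());
        buf.resize(tail.size() + chunkSize);
        in.read(buf.data() + tail.size(), chunkSize);
        buf.resize(tail.size() + static_cast<std::size_t>(in.gcount()));

        // Count every (possibly overlapping) occurrence in this chunk.
        for (auto it = buf.cbegin();
             (it = std::search(it, buf.cend(), pattern.begin(), pattern.end()))
                 != buf.cend();
             ++it) {
            ++count;
        }
    }
    std::cout << "occurrences: " << count << '\n';
}
```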
Although you can use memory mapping, you don't have to. If you read the file sequentially in small chunks, say 1 MB each, the file will never be present in memory all at once.
If your search code is actually slower than your hard disk, you can still hand chunks off to worker threads if you like.
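A sketch of that variant: a single loop reads sequentially (so the disk access stays linear) and hands each chunk to a worker via std::async, with the same pattern-length-minus-one overlap trick to keep boundary matches correct. The file name and pattern are placeholders, and std::async is just the shortest way to show the handoff; a production version would cap the number of in-flight chunks, e.g. with a bounded queue feeding a fixed pool of workers.

```cpp
#include <algorithm>
#include <fstream>
#include <future>
#include <iostream>
#include <vector>

// Count occurrences of pattern in buf (overlapping matches included).
static std::size_t countIn(std::vector<char> buf, std::vector<char> pattern) {
    std::size_t count = 0;
    for (auto it = buf.cbegin();
         (it = std::search(it, buf.cend(), pattern.begin(), pattern.end()))
             != buf.cend();
         ++it)
        ++count;
    return count;
}

int main() {
    const std::vector<char> pattern = {'\xCA', '\xFE'};   // placeholder pattern
    const std::size_t chunkSize = 1024 * 1024;            // 1 MB chunks
    const std::size_t overlap = pattern.size() - 1;

    std::ifstream in("huge.bin", std::ios::binary);       // placeholder name
    std::vector<char> carry;                              // tail of previous chunk
    std::vector<std::future<std::size_t>> results;

    while (in) {
        std::vector<char> chunk(carry);                   // prepend the overlap
        chunk.resize(carry.size() + chunkSize);
        in.read(chunk.data() + carry.size(), chunkSize);
        chunk.resize(carry.size() + static_cast<std::size_t>(in.gcount()));
        if (chunk.size() < pattern.size()) break;

        carry.assign(chunk.end() - overlap, chunk.end());

        // The reader never searches; it only queues work for the workers.
        results.push_back(std::async(std::launch::async, countIn,
                                     std::move(chunk), pattern));
    }

    std::size_t total = 0;
    for (auto& f : results) total += f.get();
    std::cout << "occurrences: " << total << '\n';
}
```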