
How to scan through really huge files on disk?

Considering a really huge file (maybe more than 4 GB) on disk, I want to scan through it and count how many times a specific binary pattern occurs.

My thought is:

  1. Use a memory-mapped file (CreateFileMapping or boost mapped_file) to map the file into virtual memory.

  2. For each 100 MB of mapped memory, create one thread to scan it and accumulate the count.

Is this feasible? Is there a better way to do it?
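For reference, a minimal sketch of the memory-mapping approach using boost::iostreams::mapped_file_source (the file name and pattern are placeholders, and mapping a file larger than 4 GB assumes a 64-bit process):

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <boost/iostreams/device/mapped_file.hpp>

// Count occurrences of `pattern` in the file at `path` using a memory mapping.
// Overlapping matches are counted.
std::uint64_t count_mapped(const std::string& path, const std::string& pattern)
{
    boost::iostreams::mapped_file_source file(path);  // maps the whole file read-only
    const char* begin = file.data();
    const char* end   = begin + file.size();

    std::uint64_t count = 0;
    for (const char* it = std::search(begin, end, pattern.begin(), pattern.end());
         it != end;
         it = std::search(it + 1, end, pattern.begin(), pattern.end()))
    {
        ++count;
    }
    return count;
}

int main()
{
    // "huge.bin" and the two-byte pattern are placeholders.
    std::cout << count_mapped("huge.bin", std::string("\x12\x34", 2)) << '\n';
}
```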

Update:
A memory-mapped file turned out to be a good choice; scanning through a 1.6 GB file took about 11 seconds.

Thanks.

— asked by Jichao, Jan 31 '10

2 Answers

Creating 20 threads, each handling about 100 MB of the file, is likely to worsen performance, since the HD will have to read from several unrelated places at the same time.

HD performance is at its peak when it reads sequential data. So assuming your huge file is not fragmented, the best thing to do would probably be to use just one thread and read from start to end in chunks of a few (say 4) MB.

But what do I know. File systems and caches are complex. Do some testing and see what works best.
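A minimal sketch of that single-threaded, sequential approach; the 4 MB chunk size, file name and pattern are placeholders, and a non-empty pattern is assumed. Carrying over the last pattern-length-minus-one bytes between chunks catches matches that span a chunk boundary without double-counting:

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Count occurrences of `pattern` by streaming the file in sequential chunks.
std::uint64_t count_sequential(const std::string& path, const std::string& pattern)
{
    const std::size_t chunk_size = 4 * 1024 * 1024;   // 4 MB per read
    std::ifstream in(path, std::ios::binary);

    std::uint64_t count = 0;
    std::string carry;                                 // tail of the previous buffer
    std::vector<char> chunk(chunk_size);

    while (in) {
        in.read(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        std::streamsize got = in.gcount();
        if (got <= 0) break;

        // Prepend the carried-over bytes so matches across chunk boundaries are found.
        std::string buffer = carry;
        buffer.append(chunk.data(), static_cast<std::size_t>(got));

        for (std::size_t pos = buffer.find(pattern); pos != std::string::npos;
             pos = buffer.find(pattern, pos + 1))
        {
            ++count;
        }

        // Keep the last pattern.size()-1 bytes; a match can never lie entirely
        // inside the carry, so nothing is double-counted.
        std::size_t keep = std::min(buffer.size(), pattern.size() - 1);
        carry = buffer.substr(buffer.size() - keep);
    }
    return count;
}

int main()
{
    std::cout << count_sequential("huge.bin", std::string("\x12\x34", 2)) << '\n';
}
```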

— shoosh


Although you can use memory mapping, you don't have to. If you read the file sequentially in small chunks, say 1 MB each, the file will never be present in memory all at once.

If your search code is actually slower than your hard disk, you can still hand chunks off to worker threads if you like.
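One way to do that, sketched under the same assumptions as above (placeholder file name and pattern, non-empty pattern), is to keep a single sequential reader and dispatch each buffer to a worker with std::async; the same carry-over trick handles matches at chunk boundaries:

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <future>
#include <iostream>
#include <string>
#include <vector>

// Count occurrences of `pattern` in `buffer` (overlapping matches included).
static std::uint64_t count_in(const std::string& buffer, const std::string& pattern)
{
    std::uint64_t count = 0;
    for (std::size_t pos = buffer.find(pattern); pos != std::string::npos;
         pos = buffer.find(pattern, pos + 1))
        ++count;
    return count;
}

// One thread reads sequentially; workers scan the chunks in parallel.
std::uint64_t count_parallel(const std::string& path, const std::string& pattern)
{
    const std::size_t chunk_size = 4 * 1024 * 1024;    // 4 MB per read
    std::ifstream in(path, std::ios::binary);

    std::vector<std::future<std::uint64_t>> results;
    std::string carry;                                  // tail of the previous buffer
    std::vector<char> chunk(chunk_size);

    while (in) {
        in.read(chunk.data(), static_cast<std::streamsize>(chunk.size()));
        std::streamsize got = in.gcount();
        if (got <= 0) break;

        std::string buffer = carry;
        buffer.append(chunk.data(), static_cast<std::size_t>(got));

        // Carry the last pattern.size()-1 bytes so boundary matches are not lost.
        std::size_t keep = std::min(buffer.size(), pattern.size() - 1);
        carry = buffer.substr(buffer.size() - keep);

        // NOTE: this launches one task per chunk; a real implementation would
        // cap the number of in-flight tasks (e.g. with a small thread pool).
        results.push_back(std::async(std::launch::async, count_in,
                                     std::move(buffer), pattern));
    }

    std::uint64_t total = 0;
    for (auto& f : results) total += f.get();
    return total;
}

int main()
{
    std::cout << count_parallel("huge.bin", std::string("\x12\x34", 2)) << '\n';
}
```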

— Thomas