I have created a virtual file system (very similar to FAT) based on two files.
It stores information about file allocation (strictly speaking they are not files, but that doesn't matter here).
Each record has the following structure:
Each entry has a fixed size, and I have a hash table in memory that helps me find the position of each entry.
The VD is based on clusters. Each cluster has a fixed size of 256 bytes; the last 4 bytes are a pointer to the next cluster in the file's chain.
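To show what that looks like, here is roughly how I read one file's chain now (simplified; the names and the end-of-chain marker are made up):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // 256-byte clusters with a 4-byte next-cluster pointer at the end,
    // as described above. kEndOfChain is an assumed sentinel value.
    static const size_t kClusterSize = 256;
    static const size_t kPayloadSize = kClusterSize - 4;
    static const uint32_t kEndOfChain = 0xFFFFFFFFu;

    std::vector<char> ReadChain(FILE* disk, uint32_t startCluster)
    {
        std::vector<char> data;
        uint32_t cluster = startCluster;
        char buf[kClusterSize];
        while (cluster != kEndOfChain) {
            // One seek + one 256-byte read per cluster: this is the
            // access pattern that gets slow on a spinning disk.
            fseek(disk, (long)cluster * (long)kClusterSize, SEEK_SET);
            if (fread(buf, 1, kClusterSize, disk) != kClusterSize)
                break; // truncated or corrupt chain
            data.insert(data.end(), buf, buf + kPayloadSize);
            // The next-cluster pointer lives in the last 4 bytes.
            std::memcpy(&cluster, buf + kPayloadSize, sizeof(cluster));
        }
        return data;
    }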
The problem is that reading all the files is very slow. How can I improve performance? Are there any tips for reading quickly from a hard drive?
For example: is it a good idea to read the file in big blocks? When I read even a small part of a file, the file is cached by the OS, right? So the next time I just take the data from memory, not from the HD?
I have a few more questions like this; where can I get answers?
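This is the kind of big-block read I have in mind, instead of one read per 256-byte cluster (just a sketch; the 1 MB block size is an arbitrary example):

    #include <cstdio>
    #include <vector>

    // Pull a large span of the backing file into memory with a single
    // read, then resolve individual clusters from the buffer instead of
    // issuing one small read per cluster.
    std::vector<char> ReadBigBlock(FILE* disk, long offset,
                                   size_t blockSize = 1 << 20)
    {
        std::vector<char> buf(blockSize);
        fseek(disk, offset, SEEK_SET);
        size_t got = fread(buf.data(), 1, buf.size(), disk);
        buf.resize(got); // shorter at end of file
        return buf;
    }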
Some options:
You could enlarge your cluster size (256 bytes is small; most OSes use 4 KB or more per cluster these days).
If you read all the files, you could sort them by startCluster so that you read files in the order in which they are physically close to each other on disk. Then, whenever the OS fetches a 4 KB+ block, it is more likely you will need other parts of it for the next file. See the sketch below.
You could defragment your virtual disk file.
You seem confident it is a disk-read issue. Did you check that what you do with the file after reading it isn't the slow part?
A lot of random access is where SSD storage shines. Move the virtual disk to an SSD.
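A sketch of the sorting idea from the second option above (FileEntry and its fields are my assumption about your record layout; adjust them to whatever your entries really contain):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Assumed shape of one allocation record.
    struct FileEntry {
        uint32_t id;
        uint32_t startCluster; // first cluster of the file's chain
    };

    // Reading in ascending startCluster order keeps the access pattern
    // mostly sequential, so OS read-ahead and already-cached blocks get
    // reused instead of the head seeking back and forth.
    void ReadAllFiles(std::vector<FileEntry>& entries)
    {
        std::sort(entries.begin(), entries.end(),
                  [](const FileEntry& a, const FileEntry& b) {
                      return a.startCluster < b.startCluster;
                  });
        for (const FileEntry& e : entries) {
            (void)e; // read the file's chain here,
                     // e.g. ReadChain(disk, e.startCluster)
        }
    }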