
One large file or multiple small files?

Tags:

performance

c

I have an application (currently written in Python while we iron out the specifics; eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be added manually on a regular basis.

My question is this: would it be faster to have a single file (500 KB-1 MB) that my application opens, scans through to find the record it needs, and closes, OR would it be faster to keep the records in separate files, named using some appropriate convention, so that the application could simply loop over filenames to find the data it needs?

I know my question is quite general, so pointers to any good articles on the topic are appreciated as much as direct suggestions.

Thanks very much in advance for your time, Dan

Dan asked Apr 01 '10


1 Answer

Essentially your second approach is an index - it's just that you're building your index in the filesystem itself. There's nothing inherently wrong with this, and as long as you arrange things so that you don't get too many files in the one directory, it will be plenty fast.

You can achieve the "don't put too many files in the one directory" goal by using multiple levels of directories - for example, the record with key FOOBAR might be stored in data/F/FO/FOOBAR rather than just data/FOOBAR.
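As a minimal sketch in C of how a key might be mapped to such a fan-out path (the "data" root and the two-level fan-out depth here are illustrative assumptions, not a fixed convention):

    #include <stdio.h>
    #include <string.h>

    /* Build a fan-out path like data/F/FO/FOOBAR from a record key.
     * Returns 0 on success, -1 if the key is too short or the
     * buffer is too small. */
    static int key_to_path(const char *key, char *path, size_t len)
    {
        if (strlen(key) < 2)
            return -1;
        if (snprintf(path, len, "data/%c/%c%c/%s",
                     key[0], key[0], key[1], key) >= (int)len)
            return -1;
        return 0;
    }

The application can then fopen() the resulting path directly, rather than scanning a directory listing.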

Alternatively, you can make the single-large-file approach perform just as well by building an index file that contains a (sorted) list of key-offset pairs. Where the directories-as-index approach falls down is when you want to search on a key different from the one you used to create the filenames - if you've used an index file, you can just create a second index for this situation.
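As a rough illustration (assuming fixed-width keys and made-up field sizes - this is a sketch, not a full implementation), the index could be an in-memory array of key/offset pairs kept sorted by key and searched with bsearch():

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct index_entry {
        char key[16];   /* assumed maximum key length */
        long offset;    /* byte offset of the record in the big file */
    };

    static int cmp_entry(const void *a, const void *b)
    {
        return strcmp(((const struct index_entry *)a)->key,
                      ((const struct index_entry *)b)->key);
    }

    /* Find a key in a sorted index; returns the record's byte
     * offset, or -1 if the key isn't present. */
    static long find_offset(const struct index_entry *idx, size_t n,
                            const char *key)
    {
        struct index_entry probe;
        const struct index_entry *hit;

        memset(&probe, 0, sizeof probe);
        strncpy(probe.key, key, sizeof probe.key - 1);

        hit = bsearch(&probe, idx, n, sizeof *idx, cmp_entry);
        return hit ? hit->offset : -1;
    }

A lookup is then just find_offset() followed by fseek(f, offset, SEEK_SET) and a read - no linear scan of the whole file.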

You may want to reconsider the "we can't use a database" restriction, since you are effectively just building your own database anyway.

caf answered Sep 20 '22