 

How efficient is the iOS file system at dealing with a large number of files in a single folder?

If I have a large number of files (n × 100K individual files), what would be the most efficient way to store them in the iOS file system, from the point of view of speed of access to a file by its path? Should I dump them all into a single folder, or break them up into a multilevel folder hierarchy?

Basically this breaks down into three questions:

  1. Does file access time depend on the number of "sibling" files? (I think the answer is yes; if I am correct, file names are organized into a B-tree, so lookup should be O(log n). The sketch below tries to measure this directly.)
  2. How expensive is traversing from one folder to another along the path? Is it something like m * O(log n_m), where m is the number of components in the path and n_m is the number of "siblings" at each path component?
  3. What gets cached at the file system level that would make the above assumptions incorrect?
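
To make questions 1 and 2 concrete, here is the kind of micro-benchmark I have in mind, in Swift. This is just a sketch: the 10,000-file count, the shard-by-first-two-characters layout, and the sample sizes are placeholders I picked, not measured results; I would run it on an actual device with counts closer to 100K.

```swift
import Foundation

// Rough micro-benchmark sketch (not measured results): create `count` tiny
// files either flat in one folder or sharded into subfolders named after the
// first two characters of the file name, then time random reads by full path.
let fileManager = FileManager.default
let root = fileManager.temporaryDirectory.appendingPathComponent("fs-bench", isDirectory: true)

func fileURL(for name: String, sharded: Bool) -> URL {
    if sharded {
        // e.g. ".../fs-bench/3F/3FA9...-....dat"
        let shard = String(name.prefix(2))
        return root.appendingPathComponent(shard, isDirectory: true)
                   .appendingPathComponent(name)
    }
    // Flat layout: every file is a direct child of the root folder.
    return root.appendingPathComponent(name)
}

func populate(count: Int, sharded: Bool) throws -> [String] {
    try? fileManager.removeItem(at: root)
    try fileManager.createDirectory(at: root, withIntermediateDirectories: true)
    var names: [String] = []
    names.reserveCapacity(count)
    for _ in 0..<count {
        let name = UUID().uuidString + ".dat"
        let url = fileURL(for: name, sharded: sharded)
        try fileManager.createDirectory(at: url.deletingLastPathComponent(),
                                        withIntermediateDirectories: true)
        try Data("x".utf8).write(to: url)
        names.append(name)
    }
    return names
}

func timeRandomReads(names: [String], sharded: Bool, samples: Int) -> TimeInterval {
    let start = Date()
    for _ in 0..<samples {
        let name = names.randomElement()!
        _ = try? Data(contentsOf: fileURL(for: name, sharded: sharded))
    }
    return Date().timeIntervalSince(start)
}

do {
    for sharded in [false, true] {
        let names = try populate(count: 10_000, sharded: sharded)
        let seconds = timeRandomReads(names: names, sharded: sharded, samples: 1_000)
        print(sharded ? "sharded:" : "flat:   ", seconds, "seconds for 1,000 random reads")
    }
} catch {
    print("benchmark failed:", error)
}
```

Comparing the printed timings for the flat and sharded layouts on a real device should show whether sibling count and path depth matter in practice.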

It would be great if someone has direct experience with this kind of problem and can share some real-life results.

Your comments will be highly appreciated.

Asked Jan 09 '12 by Andrei Tchijov


1 Answer

This seems like it might provide relevant, hard data:

File System vs Core Data: the image cache test

http://biasedbit.com/blog/filesystem-vs-coredata-image-cache

Conclusion:

The file system cache is, as expected, faster. Core Data falls slightly behind when storing (marginally slower), but its load times are far higher when performing single random accesses.

For such a simple case, Core Data's functionality really doesn't pay off, so stick to the file system version.
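
If you do stick with the file system, a common way to keep any single directory from growing huge is to shard files into subfolders by a prefix of a hashed key. Below is a minimal Swift sketch of that idea (the DiskCache type, the SHA-256 key hashing, and the 256-way sharding are my own illustration, not something from the linked post):

```swift
import Foundation
import CryptoKit  // iOS 13+; used only to derive a stable file name from a key

/// Minimal sketch of a plain file-system blob cache. To keep any single
/// directory small, entries are spread across up to 256 subfolders named
/// after the first two hex characters of the key's SHA-256 digest.
struct DiskCache {
    let root: URL

    private func fileURL(forKey key: String) -> URL {
        let digest = SHA256.hash(data: Data(key.utf8))
            .map { String(format: "%02x", $0) }
            .joined()
        let shard = String(digest.prefix(2))  // "00" ... "ff"
        return root.appendingPathComponent(shard, isDirectory: true)
                   .appendingPathComponent(digest)
    }

    func store(_ data: Data, forKey key: String) throws {
        let url = fileURL(forKey: key)
        try FileManager.default.createDirectory(at: url.deletingLastPathComponent(),
                                                withIntermediateDirectories: true)
        try data.write(to: url, options: .atomic)
    }

    func data(forKey key: String) -> Data? {
        try? Data(contentsOf: fileURL(forKey: key))
    }
}

// Usage: keep the cache under Caches/ so the system may reclaim it when space is low.
let cachesDir = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
let cache = DiskCache(root: cachesDir.appendingPathComponent("ImageCache", isDirectory: true))
try? cache.store(Data("example image bytes".utf8), forKey: "profile-photo-42")
print(cache.data(forKey: "profile-photo-42") != nil)  // true if the write succeeded
```

Two hex characters give at most 256 subfolders, so even with a few hundred thousand entries each folder holds on the order of a thousand files, which keeps directory lookups and enumeration cheap regardless of how large the total collection grows.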

Answered Oct 12 '22 by jamie