
How many files can a windows server 2008 r2 directory safely hold?


I'm thinking in terms of a website that has image galleries. Say there is one directory that holds all the thumbnails and a different directory that holds the full size images. How many pairs of images can be safely stored?

Or, if there isn't a good cut-and-dry answer, should I just try it with 30,000 images?

asked Oct 22 '10 by quakkels



1 Answer

If your server is using NTFS as its volume file system, you aren't limited to any particular number of files per directory; rather, you're limited to a number of files per volume.

For NTFS, the relevant size limit is:

NTFS size limits
Files per volume: 4,294,967,295 (2^32 minus 1)

Of course, that says nothing about performance, and there are other considerations that can come into play. With 30,000 images you shouldn't worry. When you get into the millions, you might have to start restructuring.
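If you do get into the millions and need to restructure, one common approach is to fan the images out into subdirectories keyed by a hash prefix, so that no single directory grows without bound. Here's a minimal sketch in Python; the two-level fan-out, the gallery root paths, and the file names are illustrative assumptions, not something from the original answer:

```python
import hashlib
from pathlib import Path

def sharded_path(root: str, filename: str) -> Path:
    """Map a file name to a two-level subdirectory derived from an MD5
    prefix, e.g. 'beach_0001.jpg' -> root/3a/7f/beach_0001.jpg.
    Two hex levels give 65,536 buckets, keeping each directory small."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return Path(root) / digest[:2] / digest[2:4] / filename

# Hypothetical usage: thumbnails and full-size images live in parallel
# trees, so a pair always shards to the same relative subdirectory.
thumb = sharded_path(r"D:\galleries\thumbs", "beach_0001.jpg")
full = sharded_path(r"D:\galleries\full", "beach_0001.jpg")
thumb.parent.mkdir(parents=True, exist_ok=True)
```

Because the shard is derived from the file name alone, you can reconstruct the path of any image without a database lookup.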

Edit: to address scaling and performance

Technically speaking, NTFS uses a global Master File Table (MFT) for the volume that keeps track of all the files (directories are themselves files, used mostly as a logical representation for the end user), so every time you modify the volume, that change is reflected in the MFT.

When you start having a single directory with a large number of files, one of the recommended procedures is to disable automatic 8.3 name generation. From the TechNet documentation:

Every time you create a file with a long file name, NTFS creates a second file entry that has a similar 8.3 short file name. A file with an 8.3 short file name has a file name containing 1 to 8 characters and a file name extension containing 1 to 3 characters. The file name and file name extension are separated by a period.

So if you are constantly adding files to a single directory that already holds a large number of files, the system has to generate a unique 8.3 short name for each new one, and because the short name is derived from the first few characters of the long name, similarly named files cause collisions that force NTFS to try additional candidates. Since you are storing images, it's very likely that a lot of the files have similar names at the beginning, like imageblahblahblah, which is exactly the pattern that makes short-name generation expensive.
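If you go that route, 8.3 name creation can be turned off with the built-in fsutil tool, run from an elevated prompt. A sketch of the commands as they exist on Windows Server 2008 R2 / Windows 7 and later; the D:\images path is just an example:

```
rem Show the current 8.3 name-creation setting
fsutil 8dot3name query

rem Disable 8.3 short-name creation (1 = disabled)
fsutil 8dot3name set 1

rem Optionally strip short names already recorded for an existing tree
rem (/s = recurse, /v = verbose); test first, since some legacy
rem applications still depend on 8.3 names
fsutil 8dot3name strip /s /v D:\images
```

On older systems the equivalent global switch is `fsutil behavior set disable8dot3 1`.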

For file lookup, NTFS should remain reasonably fast even in large directories, because directory indexes are implemented as B-trees.
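One practical consequence: opening a file by its exact path stays cheap, but enumerating a huge directory is where naive code hurts. A small illustrative sketch in Python (the directory path and suffix filter are assumptions on my part): os.scandir streams entries instead of materializing the whole listing the way os.listdir does:

```python
import os

def iter_images(directory: str, suffix: str = ".jpg"):
    """Lazily yield image file names from a potentially huge directory.
    os.scandir streams directory entries and exposes cached metadata,
    avoiding one giant os.listdir list plus a stat call per file."""
    with os.scandir(directory) as entries:
        for entry in entries:
            if entry.is_file() and entry.name.lower().endswith(suffix):
                yield entry.name

# Hypothetical usage against the thumbnails directory:
for name in iter_images(r"D:\galleries\thumbs"):
    print(name)
```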

Also check out this thread: NTFS performance and large volumes of files and directories

answered Sep 23 '22 by 逆さま