
How to make a file sparse?

If I have a big file containing many zeros, how can I efficiently make it a sparse file?

Is the only possibility to read the whole file (including all zeroes, which may partially be stored sparse already) and to rewrite it to a new file, using seek to skip over the zero areas?

Or is there a way to do this to an existing file (e.g. File.setSparse(long start, long end))?

I'm looking for a solution in Java or some Linux commands. The filesystem will be ext3 or similar.
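
For reference, the copy-and-seek fallback I mean would look roughly like this in Java (just a minimal sketch; it assumes the destination is a freshly created file on a filesystem that supports sparse files, and the 4096-byte block size is only a guess at the filesystem block size):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class SparseCopy {
        private static final int BLOCK = 4096;   // assumed filesystem block size

        public static void main(String[] args) throws IOException {
            try (RandomAccessFile in = new RandomAccessFile(args[0], "r");
                 RandomAccessFile out = new RandomAccessFile(args[1], "rw")) {
                byte[] buf = new byte[BLOCK];
                long pos = 0;
                int n;
                while ((n = in.read(buf)) > 0) {
                    if (!allZero(buf, n)) {
                        out.seek(pos);        // seeking past zero blocks leaves holes
                        out.write(buf, 0, n);
                    }
                    pos += n;
                }
                out.setLength(pos);           // keep trailing zeros as a hole
            }
        }

        private static boolean allZero(byte[] buf, int len) {
            for (int i = 0; i < len; i++) {
                if (buf[i] != 0) {
                    return false;
                }
            }
            return true;
        }
    }

Seeking past the zero blocks instead of writing them is what leaves the holes, and setLength() at the end keeps any trailing zeros as a hole too.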

asked May 13 '11 by rurouni


2 Answers

A lot's changed in 8 years.

Fallocate

fallocate -d filename can be used to punch holes in existing files. From the fallocate(1) man page:

       -d, --dig-holes
              Detect and dig holes.  This makes the file sparse in-place,
              without using extra disk space.  The minimum size of the hole
              depends on filesystem I/O block size (usually 4096 bytes).
              Also, when using this option, --keep-size is implied.  If no
              range is specified by --offset and --length, then the entire
              file is analyzed for holes.

              You can think of this option as doing a "cp --sparse" and then
              renaming the destination file to the original, without the
              need for extra disk space.

              See --punch-hole for a list of supported filesystems.

(That list:)

              Supported for XFS (since Linux 2.6.38), ext4 (since Linux
              3.0), Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).

tmpfs being on that list is the one I find most interesting. The filesystem itself is efficient enough to only consume as much RAM as it needs to store its contents, but making the contents sparse can potentially increase that efficiency even further.
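
Since the question also asks about Java: the JDK has no direct API for this, but shelling out to fallocate(1) from a program is trivial. A rough sketch, assuming a Linux host with util-linux installed (class name and error handling are just illustrative):

    import java.io.IOException;

    public class DigHoles {
        // java.nio has no hole-punching API, so this just runs util-linux's
        // fallocate(1); assumes a Linux host with fallocate on the PATH.
        static void digHoles(String file) throws IOException, InterruptedException {
            Process p = new ProcessBuilder("fallocate", "-d", file)
                    .inheritIO()
                    .start();
            if (p.waitFor() != 0) {
                throw new IOException("fallocate -d failed for " + file);
            }
        }

        public static void main(String[] args) throws Exception {
            digHoles(args[0]);
        }
    }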

GNU cp

Additionally, somewhere along the way GNU cp gained an understanding of sparse files. Quoting the cp(1) man page regarding its default mode, --sparse=auto:

sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well.

But there's also --sparse=always, which activates the file-copy equivalent of what fallocate -d does in-place:

Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes.

I've finally been able to retire my tar cpSf - SOURCE | (cd DESTDIR && tar xpSf -) one-liner, which for 20 years was my graybeard way of copying sparse files with their sparseness preserved.

answered Sep 28 '22 by FeRD


Some filesystems on Linux / UNIX have the ability to "punch holes" into an existing file. See:

  • LKML posting about the feature
  • UNIX file truncation FAQ (search for F_FREESP)

It's not very portable and not done the same way across the board; as of right now, I believe Java's IO libraries do not provide an interface for this.

If hole punching is available either via fcntl(F_FREESP) or via any other mechanism, it should be significantly faster than a copy/seek loop.
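
On Linux specifically, the modern hole-punching interface is fallocate(2) with FALLOC_FL_PUNCH_HOLE rather than fcntl(F_FREESP), and you can reach it from Java through JNI or JNA if you need it in-process. A rough sketch using JNA (assumes JNA 5+ on the classpath and a 64-bit Linux glibc; constants and class name are illustrative):

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class PunchHole {
        // Hypothetical JNA binding for the glibc calls we need.
        public interface LibC extends Library {
            LibC INSTANCE = Native.load("c", LibC.class);
            int open(String path, int flags);
            int close(int fd);
            int fallocate(int fd, int mode, long offset, long length);
        }

        // Values from <fcntl.h> and <linux/falloc.h> on Linux.
        static final int O_WRONLY = 1;
        static final int FALLOC_FL_KEEP_SIZE  = 0x01;
        static final int FALLOC_FL_PUNCH_HOLE = 0x02;

        // Punches a hole over [offset, offset + length) without changing the
        // file size; the range reads back as zeros but occupies no disk space.
        static void punchHole(String path, long offset, long length) {
            int fd = LibC.INSTANCE.open(path, O_WRONLY);
            if (fd < 0) {
                throw new RuntimeException("open failed for " + path);
            }
            try {
                int rc = LibC.INSTANCE.fallocate(
                        fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, length);
                if (rc != 0) {
                    throw new RuntimeException("fallocate(PUNCH_HOLE) failed");
                }
            } finally {
                LibC.INSTANCE.close(fd);
            }
        }

        public static void main(String[] args) {
            punchHole(args[0], Long.parseLong(args[1]), Long.parseLong(args[2]));
        }
    }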

answered Sep 28 '22 by FrankH.