I found out that if you sort a list of files by file extension rather than alphabetically before putting them in a tar archive, you can dramatically increase the compression ratio (especially for large source trees where you likely have lots of .c, .o, and .h files).
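A quick way to see the effect on a given tree is to compare the compressed size of the two orderings directly. A rough sketch (assuming GNU tar and gzip, with srcdir standing in for your source tree, and using the rev trick mentioned just below to group extensions together):

# Alphabetical order vs. reversed-name order (which groups files by extension):
find srcdir -type f | sort             | tar --no-recursion -T - -cf - | gzip -9 | wc -c
find srcdir -type f | rev | sort | rev | tar --no-recursion -T - -cf - | gzip -9 | wc -c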
I couldn't find an easy way to sort files using the shell that works in every case the way I'd expect. An easy solution such as find | rev | sort | rev does the job, but the files appear in an odd order and it doesn't arrange them as nicely for the best compression ratio. Other tools such as ls -X don't work with find, and sort -t. -k 2,2 -k 1,1 messes up when files have more than one period in the filename (e.g. version-1.5.tar). Another quick-n-dirty option using sed replaces the last period with a / (which never occurs within a filename), then sorts, splitting along the /:
sed 's/\(\.[^.]*\)$/\/\1/' | sort -t/ -k 2,2 -k 1,1 | sed 's/\/\([^/]*\)$/\1/'
However, once again this doesn't work on the output of find, which has /s in the names, and every other character (other than NUL) is allowed in filenames on *nix.
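To make that concrete, here is how the one-liner behaves on a few made-up names: it works on bare file names, but as soon as the names contain directory components (as find output does), field 2 of the sort is a directory rather than the extension:

# Bare names: grouped by extension as intended.
printf '%s\n' README main.c util.c main.h version-1.5.tar |
    sed 's/\(\.[^.]*\)$/\/\1/' | sort -t/ -k 2,2 -k 1,1 | sed 's/\/\([^/]*\)$/\1/'
# -> README  main.c  util.c  main.h  version-1.5.tar

# Paths from find: field 2 is now the "a"/"b" directory, so .h lands before .c.
printf '%s\n' ./a/x.h ./b/y.c |
    sed 's/\(\.[^.]*\)$/\/\1/' | sort -t/ -k 2,2 -k 1,1 | sed 's/\/\([^/]*\)$/\1/'
# -> ./a/x.h  ./b/y.c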
I discovered that in Perl you can write a custom comparison subroutine that returns the same kind of result as cmp (similar to strcmp in C), and then call Perl's sort function, passing in your own comparison, which was easy to write with Perl regular expressions. This is exactly what I did: I now have a Perl script which calls
@lines = <STDIN>;
print sort myComparisonFunction @lines;
However, Perl is not as portable as bash, so I want to be able to do this with a shell script. In addition, find does not put a trailing / on directory names, so the script treats directories the same as files without an extension. Ideally, I'd like tar to read all the directories first, then the regular files (sorted), then the symbolic links, which I can achieve via
cat <(find -type d) <(find -type f | perl exsort.pl) <(find -not -type d -and -not -type f) | tar --no-recursion -T - -cvf myfile.tar
but I still run into the issue that either I have to type this monstrosity every time, or I have both a shell script for this long line AND a Perl script for sorting. Perl isn't available everywhere, so stuffing everything into one Perl script isn't a great solution either. (I'm mainly focused on older computers, because nowadays all modern Linux and OS X systems come with a recent enough version of Perl.)
I'd like to be able to put everything together into one shell script, but I don't know how to pass a custom comparison function to the GNU sort tool. Am I out of luck and forced to use a Perl script, or can I do this with one shell script?
EDIT: Thanks for the idea of a Schwartzian Transform. I used a slightly different method, using sed. My final sorting routine is as follows:
sed 's_^\(\([^/]*/\)*\)\(.*\)\(\.[^\./]*\)$_\4/\3/\1_' | sed 's_^\(\([^/]*/\)*\)\([^\./]\+\)$_/\3/\1_' | sort -t/ -k1,1 -k2,2 -k3,3 | sed 's_^\([^/]*\)/\([^/]*\)/\(.*\)$_\3\2\1_'
This handles special characters (such as *) in filenames and places files without an extension first, because they are often text files (Makefile, COPYING, README, configure, etc.).
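For reference, wiring this routine into the pipeline from the question gives a single, Perl-free script. A sketch along those lines (assuming GNU find, sed, sort, and tar, and the same directories-first / symlinks-last ordering as above):

#!/bin/sh
# Directories first, then regular files grouped by extension, then everything else.
{
    find -type d
    find -type f |
        sed 's_^\(\([^/]*/\)*\)\(.*\)\(\.[^\./]*\)$_\4/\3/\1_' |
        sed 's_^\(\([^/]*/\)*\)\([^\./]\+\)$_/\3/\1_' |
        sort -t/ -k1,1 -k2,2 -k3,3 |
        sed 's_^\([^/]*\)/\([^/]*\)/\(.*\)$_\3\2\1_'
    find -not -type d -and -not -type f
} | tar --no-recursion -T - -cvf myfile.tar

The brace group replaces the cat <(...) construction, so it also runs under plain /bin/sh rather than requiring bash process substitution.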
P.S. In case anyone wants my original comparison function, or thinks I could improve on it, here it is:
sub comparison {
    my $first  = $a;
    my $second = $b;
    # Split each path into directory, base name (up to and including the last
    # dot), and extension, using the non-destructive s///r substitution (Perl 5.14+).
    my $fdir  = $first  =~ s/^(([^\/]*\/)*)([^\/]*)$/$1/r;
    my $sdir  = $second =~ s/^(([^\/]*\/)*)([^\/]*)$/$1/r;
    my $fname = $first  =~ s/^([^\/]*\/)*([^\/]*)$/$2/r;
    my $sname = $second =~ s/^([^\/]*\/)*([^\/]*)$/$2/r;
    my $fbase = $fname  =~ s/^(([^\.]*\.)*)([^\.]*)$/$1/r;
    my $sbase = $sname  =~ s/^(([^\.]*\.)*)([^\.]*)$/$1/r;
    my $fext  = $fname  =~ s/^([^\.]*\.)*([^\.]*)$/$2/r;
    my $sext  = $sname  =~ s/^([^\.]*\.)*([^\.]*)$/$2/r;
    # A file name with no dot has an empty "base"; sort those first.
    if ($fbase eq "" && $sbase ne "") {
        return -1;
    }
    if ($sbase eq "" && $fbase ne "") {
        return 1;
    }
    # Otherwise compare by extension, then base name, then directory.
    (($fext cmp $sext) or ($fbase cmp $sbase)) or ($fdir cmp $sdir);
}
If you're familiar with Perl, you can use a Schwartzian Transform in bash too.
A Schwartzian Transform simply adds the desired sort key to your data, does the sort, then strips the sort key back off. It is named after Randal Schwartz and is used heavily in Perl, but it works well in other languages too:
You want to sort your files by extension:
find . -type f 2> /dev/null | while read -r file   # assumes no strange characters or whitespace in names
do
    suffix=${file##*.}
    printf "%-10.10s %s\n" "$suffix" "$file"
done | sort | awk '{ print substr( $0, 12 ) }' > files_to_tar.txt
I'm reading each file name in with my find. I use printf to prepend the suffix I want to sort by as a fixed 10-character field followed by a space, so the original name always starts at column 12. Then I do my sort. My awk strips that sort key back off (everything before column 12), leaving just my file names, which are still sorted by suffix.
Now, your files_to_tar.txt file contains the names of your files sorted by suffix. You can use the -T parameter of tar to read the names of the files from this file:
$ tar -czvf backup.tar.gz -T files_to_tar.txt
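One caveat (my own addition, not part of the answer above): the read loop assumes sane file names. If names may contain spaces or newlines, a GNU-only variation of the same decorate-sort-undecorate idea can use NUL-terminated records throughout; it still assumes no tab characters in names, since a tab is used here as the decoration separator:

#!/bin/bash
# Decorate each NUL-terminated path with its suffix, sort on that key, strip it off.
find . -type f -print0 |
    while IFS= read -r -d '' file; do
        printf '%s\t%s\0' "${file##*.}" "$file"    # "suffix<TAB>path"
    done |
    sort -z -t "$(printf '\t')" -k1,1 |
    cut -z -f2- |
    tar --null --no-recursion -T - -czvf backup.tar.gz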
You could pipe the result of find to ls -X using xargs (see the man page), which should sort them by extension:
cat <(find -type d) <(find -type f | xargs ls -X ) <(find -not -type d -and -not -type f) | tar --no-recursion -T - -cvf myfile.tar