I am looking to gzip multiple files (into multiple .gz files) in a directory while keeping the originals.
I can do individual files using these commands:
find . -type f -name "*cache.html" -exec gzip {} \;
or
gzip *cache.html
but neither preserves the original. I tried
find . -type f -name "*cache.html" -exec gzip -c {} > {}.gz
but that only made a {}.gz file. Is there a simple way to do this?
The gzip program compresses and decompresses files on Unix-like systems. You need to pass the -c (also spelled --stdout or --to-stdout) option to the gzip command. This option sends the compressed output to the standard output stream, leaving the original files intact.
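As a minimal sketch of that option (the filename here is hypothetical), redirecting the output of gzip -c into a .gz file leaves the source file in place:

```shell
# Create a sample file, then compress to stdout and redirect;
# gzip never touches the original because it writes to stdout.
printf 'hello\n' > page_cache.html
gzip -c page_cache.html > page_cache.html.gz
ls page_cache.html page_cache.html.gz   # both files now exist
```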
Gzip compresses only single files and creates one compressed file for each given file.
If given a file as an argument, gzip compresses it, adds a ".gz" suffix, and deletes the original file. With no arguments, gzip compresses the standard input and writes the compressed data to standard output.
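Because gzip only deletes files it was handed as arguments, the stdin/stdout form described above also preserves the source. A sketch, with a hypothetical filename:

```shell
# With plain redirection, gzip never sees a filename,
# so there is nothing for it to delete.
printf 'some text\n' > notes.txt
gzip < notes.txt > notes.txt.gz   # notes.txt is left in place
```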
Split a gzip file: gzip itself always writes a single compressed stream per input file, but you can cut the resulting archive into pieces with the separate split(1) utility, either by size (option -b) or by number of lines (-l). For example, -b 512M produces 512 MB pieces derived from the file.gz name.
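A runnable sketch of that split-and-rejoin workflow, using hypothetical file names and a small sample file so the split is visible (a real archive would use something like -b 512M):

```shell
# Make a ~1 MB random sample file so gzip output exceeds the split size.
dd if=/dev/urandom of=bigfile bs=1024 count=1024 2>/dev/null
gzip -c bigfile > file.gz          # -c keeps bigfile around
split -b 100K file.gz file.gz.    # pieces: file.gz.aa, file.gz.ab, ...
cat file.gz.* | gunzip > restored # the shell glob sorts pieces back in order
cmp bigfile restored              # round trip is byte-identical
```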
I'd use bash(1)'s simple for construct for this:
for f in *cache.html ; do gzip -c "$f" > "$f.gz" ; done
If I knew the filenames were 'sane', I'd leave off the "" around the arguments, because I'm lazy. And my filenames are usually sane. But scripts don't have that luxury.
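The question's find attempt fails because the calling shell performs the `> {}.gz` redirection once, before find ever runs, producing a single literal `{}.gz` file. One common fix (a sketch, with a hypothetical directory and filename, not code from this answer) is to push the redirection into a small per-file shell:

```shell
# Sample tree to run against.
mkdir -p site && printf 'cached page\n' > site/index_cache.html

# The inner sh performs the redirection once per matched file,
# so each original gains a .gz sibling and is left intact.
find site -type f -name "*cache.html" \
    -exec sh -c 'gzip -c "$1" > "$1.gz"' sh {} \;
```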
-k, --keep
gzip 1.6 (June 2013) added the -k, --keep option, so now you can:
find . -type f -name "*cache.html" -exec gzip -k {} \;
gzip -k *cache.html
or for all files recursively simply:
gzip -kr .
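A quick check that -k behaves as described, assuming gzip 1.6 or newer is installed (the filename is hypothetical):

```shell
printf 'cached page\n' > index2_cache.html
gzip -k index2_cache.html                 # compresses and keeps the input
ls index2_cache.html index2_cache.html.gz # both files are present
```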
Found at: https://unix.stackexchange.com/questions/46786/how-to-tell-gzip-to-keep-original-file