
Storing locally encrypted incremental ZFS snapshots in Amazon Glacier

To have truly off-site and durable backups of my ZFS pool, I would like to store zfs snapshots in Amazon Glacier. The data would need to be encrypted locally, independently from Amazon, to ensure privacy. How could I accomplish this?

TinkerTank asked Aug 20 '17 19:08


1 Answer

An existing snapshot can be sent to an S3 bucket as follows:

zfs send -R <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg

or for incremental back-ups:

zfs send -R -I <pool name>@<snapshot to do incremental backup from> <pool name>@<snapshot name> | gzip | gpg --no-use-agent  --no-tty --passphrase-file ./passphrase -c - | aws s3 cp - s3://<bucketname>/<filename>.zfs.gz.gpg

This pipeline takes an existing snapshot, serializes it with zfs send, compresses it with gzip, and symmetrically encrypts it with gpg using a passphrase. The passphrase must be on the first line of the ./passphrase file.
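As a concrete sketch of that workflow: the pool name tank, the bucket mybucket, and the date-based snapshot names are all assumptions, and the zfs/aws lines are commented out so nothing runs against a real pool by accident.

```shell
POOL=tank                         # hypothetical pool name
PREV="$POOL@2024-01-01"           # snapshot the previous backup was sent from
CURR="$POOL@$(date +%F)"          # today's snapshot, e.g. tank@2024-06-01
# Take the new snapshot, then send everything between PREV and CURR:
# zfs snapshot -r "$CURR"
# zfs send -R -I "$PREV" "$CURR" | gzip | \
#   gpg --no-use-agent --no-tty --passphrase-file ./passphrase -c - | \
#   aws s3 cp - "s3://mybucket/incr-$(date +%F).zfs.gz.gpg"
```

Note that zfs send -I needs the earlier snapshot (PREV here) to still exist locally, so don't destroy a snapshot until a newer one has been backed up.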

Remember to back up your passphrase file separately, in multiple locations! If you lose it, you will never be able to decrypt your data again!
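One way to create such a passphrase file (a sketch; the 32-byte length and the ./passphrase file name are just suggestions):

```shell
umask 077                                        # new files get owner-only permissions
head -c 32 /dev/urandom | base64 > ./passphrase  # one random line; gpg reads line 1
```

The umask matters: the file holds the key to all your backups, so it should not be world-readable.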

This requires:

  • A pre-created Amazon S3 bucket
  • awscli installed (pip install awscli) and configured (aws configure).
  • gpg installed

Lastly, S3 lifecycle rules can be used to transition the S3 object to Glacier after a pre-set amount of time (or immediately).
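For example, a lifecycle rule that moves every object in the bucket to Glacier immediately might look like this (the bucket name mybucket is a placeholder, and the aws call is shown commented out):

```shell
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "zfs-backups-to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}]
    }
  ]
}
EOF
# aws s3api put-bucket-lifecycle-configuration \
#   --bucket mybucket --lifecycle-configuration file://lifecycle.json
```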


For restoring:

aws s3 cp s3://<bucketname>/<filename>.zfs.gz.gpg - | gpg --no-use-agent --passphrase-file ./passphrase -d - | gunzip | sudo zfs receive <new dataset name> 
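Note that once an object has transitioned to the Glacier storage class, it must be temporarily restored to S3 before aws s3 cp can read it. A sketch of the restore request (bucket and key are placeholders; the aws call is commented out):

```shell
REQ='{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
# aws s3api restore-object --bucket mybucket \
#   --key <filename>.zfs.gz.gpg --restore-request "$REQ"
# Once the restore completes (typically hours on the Standard tier),
# run the aws s3 cp | gpg | gunzip | zfs receive pipeline above.
```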
TinkerTank answered Sep 20 '22 19:09