It's easy to both create and delete blob data. There are ways to protect against accidental data loss at the storage account and blob levels.
That's already a good package, but it feels like there's a weak link: as far as I know, blob containers lack comparable safety.
Since containers are a natural unit for blob enumeration and batch deletion, that's a real problem.
How can I protect against accidental or malicious container deletion and mitigate the risk of data loss?
Idea 1: Sync a copy of all data to another storage account - but this adds synchronization complexity (incremental copy?) and a notable cost increase.
Idea 2: Lock up the account keys and force everyone to work with carefully scoped SAS tokens - but that's a lot of hassle with dozens of SAS tokens and their renewals, and sometimes container deletion actually is required and authorized. It feels complex enough to break, and I'd prefer a safety net anyway.
Idea 3: Undo the deletion somehow? According to the Delete Container documentation, the container data is not gone immediately:
The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.
However, there is no information on when or how the storage account's garbage collection runs, or whether, how, and for how long the container data could be recovered.
Any better options I've missed?
UPDATE:
This works similarly to blob-level protection and allows recovery from accidental deletion. The original answer below is still relevant as a set of additional measures to take.
There is no single magic bullet, but here's a recap of what can be done:
Use Managed Service Identity with RBAC when possible, or delegate access with limited permissions using SAS tokens (and access policies). This reduces the set of actors and the scenarios in which accidental or malicious deletion could happen in the first place.
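To make the scoping concrete, here is a minimal stdlib-only sketch of how a container-scoped service SAS without the delete permission could be assembled. The `container_sas` helper, the hard-coded API version, and the string-to-sign layout are illustrative assumptions based on the documented service SAS format; in practice you would use the Azure SDK's `generate_container_sas` instead:

```python
import base64
import hashlib
import hmac
import urllib.parse

def container_sas(account, container, account_key_b64,
                  permissions="rl", start="", expiry="2030-01-01T00:00:00Z"):
    """Sketch: sign a container-scoped service SAS that can read and
    list but not delete (no 'd' in the permissions string). The layout
    follows the documented string-to-sign for API version 2018-11-09."""
    version = "2018-11-09"
    canonicalized_resource = f"/blob/{account}/{container}"
    # Fields: permissions, start, expiry, resource, identifier, IP,
    # protocol, version, resource type, snapshot time, then five
    # response-header overrides (all empty here).
    string_to_sign = "\n".join([
        permissions, start, expiry, canonicalized_resource,
        "", "", "https", version, "c", "",
        "", "", "", "", "",
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.urlencode({
        "sv": version, "sr": "c", "sp": permissions,
        "se": expiry, "spr": "https", "sig": sig,
    })
```

A token built this way simply cannot delete the container, no matter who holds it, because `sp` contains no delete flag.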
Leases do not prevent malicious deletion, but they declare the "do not delete" intent more clearly, and the required extra step of removing the lease acts as an additional "Are you sure?" layer.
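For illustration, this is roughly the shape of the Lease Container REST call that acquires an infinite lease; the `acquire_lease_request` helper is hypothetical and only builds the request pieces (authentication omitted). With the Python SDK the equivalent is `container_client.acquire_lease(lease_duration=-1)`:

```python
def acquire_lease_request(account, container, duration_secs=-1):
    """Sketch of the Lease Container 'acquire' call. A duration of -1
    means an infinite lease, which must be released or broken before
    the container can be deleted. Authorization headers are omitted."""
    url = (f"https://{account}.blob.core.windows.net/{container}"
           "?comp=lease&restype=container")
    headers = {
        "x-ms-version": "2018-11-09",
        "x-ms-lease-action": "acquire",
        "x-ms-lease-duration": str(duration_secs),
    }
    return url, headers
```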
AFAIK, no built-in recovery tools exist once the entire container has already been deleted.
As with all backup solutions, back up to locations in different security contexts and/or offline, to avoid losing the backups as well in the same incident. A few blob container backup implementation tips:
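One such tip, sketched under the assumption that you track blobs by the ETags returned from List Blobs on each side: copy incrementally by diffing the source listing against the backup listing, so unchanged blobs are skipped. The `blobs_to_copy` helper is hypothetical:

```python
def blobs_to_copy(source, backup):
    """Pick blobs that are new or whose ETag changed since the last
    backup run. Both arguments map blob name -> ETag, as obtained
    from a List Blobs call on each account."""
    return sorted(name for name, etag in source.items()
                  if backup.get(name) != etag)
```

This keeps the "sync copy" idea from the question affordable: each run only transfers the delta instead of the full container.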
If you have no backup to restore from, the container may still be recoverable by Microsoft (if you are lucky and fast enough). According to the Delete Container documentation, the container data is not gone immediately:
The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.
An alternative option you should consider is the access policies offered for containers. You can use SAS for access and add an additional layer with container-level access policies, granting access that does not include the delete permission.
This is more on the preventive side.
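As a sketch: a stored access policy is uploaded via the Set Container ACL operation as a SignedIdentifiers XML document, and a read+list policy (no `d` for delete) could be built like this. The `access_policy_xml` helper is illustrative; SDKs generate this body for you:

```python
import xml.etree.ElementTree as ET

def access_policy_xml(policy_id, start, expiry, permission):
    """Sketch: build the SignedIdentifiers body for Set Container ACL.
    An 'rl' permission string allows read and list but not delete."""
    root = ET.Element("SignedIdentifiers")
    ident = ET.SubElement(root, "SignedIdentifier")
    ET.SubElement(ident, "Id").text = policy_id
    policy = ET.SubElement(ident, "AccessPolicy")
    ET.SubElement(policy, "Start").text = start
    ET.SubElement(policy, "Expiry").text = expiry
    ET.SubElement(policy, "Permission").text = permission
    return ET.tostring(root, encoding="unicode")
```

A nice property of stored policies is that SAS tokens referencing the policy id can be revoked or narrowed later by editing the policy, without reissuing the tokens.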
RBAC would also be a good way to secure access to containers.
When it comes to recovering from data loss, these are the official suggestions:
Block blobs. Create a point-in-time snapshot of each block blob. For more information, see Creating a Snapshot of a Blob. For each snapshot, you are only charged for the storage required to store the differences within the blob since the last snapshot state. The snapshots are dependent on the existence of the original blob they are based on, so a copy operation to another blob or even another storage account is advisable. This ensures that backup data is properly protected against accidental deletion. You can use AzCopy or Azure PowerShell to copy the blobs to another storage account.
Files. Use share snapshots, or use AzCopy or PowerShell to copy your files to another storage account.
Tables. Use AzCopy to export the table data into another storage account in another region.
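For the copy-to-another-account step, here is a hedged sketch of an AzCopy v10 invocation; the command is only assembled here, not executed, and the `azcopy_container_backup` helper plus the SAS placeholders are assumptions:

```python
def azcopy_container_backup(src_account, dst_account, container,
                            src_sas, dst_sas):
    """Sketch: argv for AzCopy v10 to mirror one container into another
    storage account. Pass the result to subprocess.run in real use."""
    src = f"https://{src_account}.blob.core.windows.net/{container}?{src_sas}"
    dst = f"https://{dst_account}.blob.core.windows.net/{container}?{dst_sas}"
    return ["azcopy", "copy", src, dst, "--recursive"]
```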