 

How to protect against deletion of a blob container?

It's easy to both create and delete blob data. There are ways to protect against accidental data loss, e.g.:

  • Resource locks to protect against accidental storage account deletion
  • Azure RBAC to limit access to account/keys.
  • Soft delete to recover from accidental blob deletion.

This is already a good package, but it feels like there's a weak link. AFAIK, blob containers lack the kind of safety net that exists for accounts and blobs.

Considering that containers are a convenient unit for blob enumeration and batch deletion, that's bad.

How to protect against accidental/malicious container deletion and mitigate the risk of data loss?

What I've considered:

Idea 1: Sync a copy of all data to another storage account - but this brings synchronization complexity (incremental copy?) and a notable cost increase.

Idea 2: Lock up the keys and force everyone to work with carefully scoped SAS tokens - but that's a lot of hassle with dozens of SAS tokens and their renewals, and sometimes container deletion actually is required and authorized. It feels complex enough to break; I'd prefer a safety net anyway.

Idea 3: Undo the deletion somehow? According to the Delete Container documentation, the container data is not gone immediately:

The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

Though, there is no information on when/how the storage account garbage collection works, or if/how/for how long the container data could be recovered.

Any better options I've missed?

asked Oct 29 '22 by Imre Pühvel

2 Answers

UPDATE:

  • DO enable soft-delete protection for containers

This is similar to blob-level protection and allows recovery from accidental deletion. The original answer below is still relevant as a set of additional measures to take.
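As a sketch, container soft delete can be enabled at the storage account level with the Azure CLI. The account, group, and container names below are placeholders, and the 7-day retention window is an assumption - tune it to your needs:

```shell
# Enable container soft delete with a 7-day retention window.
# Containers deleted after this is enabled can be restored within the window.
az storage account blob-service-properties update \
    --account-name mystorageaccount \
    --resource-group my-resource-group \
    --enable-container-delete-retention true \
    --container-delete-retention-days 7

# A soft-deleted container can later be restored; its deleted version id
# can be found by listing containers with --include-deleted.
az storage container restore \
    --account-name mystorageaccount \
    --name mycontainer \
    --deleted-version <version-id>
```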


There is no single silver bullet. A recap of what can be done:

Prevention measures

  • DO apply storage account level protections (resource locks).
  • DO limit account/container delete access to only callers who actually need it.
  • DO mark containers as leased with an infinite lease duration.

Use Managed Service Identity with RBAC when possible, or delegate access with limited permissions using SAS (and access policies). This reduces the actors and scenarios where accidental/malicious deletion could happen in the first place.

Leases do not prevent malicious deletion, but they declare the "do not delete" intent more clearly, and the required extra step of removing the lease acts as an additional "Are you sure?" layer.
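As a sketch with the Azure CLI (container and account names are placeholders), an infinite lease and the extra step needed before deletion could look like this:

```shell
# Acquire an infinite lease on the container (-1 means never expires).
# The command returns a lease ID, which is needed to release the lease.
az storage container lease acquire \
    --container-name mycontainer \
    --account-name mystorageaccount \
    --lease-duration -1

# Deleting the container now fails unless the caller supplies the lease ID
# or first breaks the lease - the deliberate extra "Are you sure?" step:
az storage container lease break \
    --container-name mycontainer \
    --account-name mystorageaccount
```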

Recovery measures

AFAIK, no built-in recovery tools exist once an entire container has been deleted.

  • DO implement periodic backup solution for disaster recovery.
  • CONSIDER contacting Azure support immediately if you have no backup of your own.

As with all backup solutions, back up to locations with different security contexts and/or offline, to avoid losing the backups as well in the same incident. A few blob container backup implementation tips:

  • Azure Data Factory's Copy activity can be configured to copy only new files.
  • AzCopy sync can also do incremental copying.
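For example, an incremental container backup to a second storage account could be sketched with azcopy sync. The account URLs, container names, and SAS tokens below are placeholders:

```shell
# Incrementally sync a container to a backup account: azcopy compares
# last-modified times and copies only new or changed blobs.
azcopy sync \
    "https://sourceaccount.blob.core.windows.net/mycontainer?<source-sas>" \
    "https://backupaccount.blob.core.windows.net/mycontainer-backup?<backup-sas>" \
    --recursive
```

Run on a schedule, this keeps the backup copy current without repeatedly paying for a full copy.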

If you have no backup to restore from, then the container may still be recoverable by MS (if you are lucky and fast enough). According to the Delete Container documentation, the container data is not gone immediately:

The Delete Container operation marks the specified container for deletion. The container and any blobs contained within it are later deleted during garbage collection.

answered Nov 09 '22 by Imre Pühvel


There is an alternative option you should consider: the access policies offered for containers. You can use SAS for access and add an additional layer using access policies, which give you container-level policies. There you can grant access that does not include the delete permission:

This is more for the preventive side.
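As a sketch with the Azure CLI (container, account, and policy names are placeholders, as is the expiry date), a stored access policy without the delete permission and a SAS token bound to it could look like this:

```shell
# Create a stored access policy on the container that allows read and
# list ("rl") but not delete.
az storage container policy create \
    --container-name mycontainer \
    --account-name mystorageaccount \
    --name read-only-policy \
    --permission rl \
    --expiry 2023-12-31T00:00Z

# Issue a SAS token bound to that policy. Editing or revoking the policy
# later changes/invalidates every token issued against it, which makes
# renewals and revocation far more manageable than ad-hoc SAS tokens.
az storage container generate-sas \
    --name mycontainer \
    --account-name mystorageaccount \
    --policy-name read-only-policy
```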

RBAC would also be a good way to secure access to containers.

When it comes to recovering from data loss, these are the official suggestions:

Block blobs. Create a point-in-time snapshot of each block blob. For more information, see Creating a Snapshot of a Blob. For each snapshot, you are only charged for the storage required to store the differences within the blob since the last snapshot state. The snapshots are dependent on the existence of the original blob they are based on, so a copy operation to another blob or even another storage account is advisable. This ensures that backup data is properly protected against accidental deletion. You can use AzCopy or Azure PowerShell to copy the blobs to another storage account.
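For instance, a snapshot plus an out-of-account copy could be sketched with the Azure CLI (blob, container, and account names are placeholders):

```shell
# Create a point-in-time snapshot of a block blob.
az storage blob snapshot \
    --container-name mycontainer \
    --name important-data.csv \
    --account-name mystorageaccount

# Copy the blob to a different storage account, so the backup does not
# depend on the original blob's existence.
az storage blob copy start \
    --source-account-name mystorageaccount \
    --source-container mycontainer \
    --source-blob important-data.csv \
    --destination-container backup-container \
    --destination-blob important-data.csv \
    --account-name backupaccount
```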

Files. Use share snapshots, or use AzCopy or PowerShell to copy your files to another storage account.

Tables. Use AzCopy to export the table data into another storage account in another region. More can be found here

answered Nov 09 '22 by Adam Smith - Microsoft Azure