How do I restore from AWS Glacier back to S3 permanently?

I have about 50 GB of files that were stored in S3. Yesterday I stupidly added a lifecycle rule to transition files more than 30 days old from S3 to Glacier, not realising that this would disable the public link to the original files.

I actually really need these files to stay in S3 as they are images and drawings that are linked on our website.

I have requested a restore of the files from Glacier; however, as far as I understand, there is a limit on the number of days the files will be available before they go back to Glacier.

I was thinking that I am going to have to create a new bucket, copy the files across to it, and then link that new bucket up to my website.

My questions:

  1. I was wondering if there is a way to do this without having to copy my files to a new bucket?

  2. If I just change the storage class of the file once it is back in S3 will this stop it going back to Glacier?

  3. If I have to copy the files to a new bucket I'm assuming that these copies won't randomly go back to Glacier?

I'm quite new to S3 (as you can probably tell by my bone-headed mistake) so please try to be gentle

Pete Dermott asked Aug 03 '18 10:08


People also ask

How do I restore my Glacier backup?

Open the “Backup Storage” tab. Select your Amazon Glacier account from the drop-down list. Right-click on a folder or file you want to restore. Click “Restore” to begin the restoration process.

Can S3 bucket be restored?

You can restore the S3 data that you backed up using AWS Backup to the S3 Standard storage class. You can restore all the objects in a bucket or specific objects. You can restore them to an existing or new bucket.

How do you restore multiple objects from a Glacier?

For Operation, select Restore. For Restore source, select Glacier or Glacier Deep Archive. For Number of days that the restored copy is available, enter the number of days for your use case. For Restore tier, select either Bulk retrieval or Standard retrieval.

What are the three data retrieval options for Amazon Glacier?

S3 Glacier Flexible Retrieval provides three retrieval options: expedited retrievals that typically complete in 1–5 minutes, standard retrievals that typically complete in 3–5 hours, and free bulk retrievals that return large amounts of data typically in 5–12 hours.


3 Answers

You don't need a new bucket. You restore the objects from Glacier (temporarily) and then overwrite them using the COPY operation, which essentially creates new objects that will stay around. Needless to say, you'll also need to disable your aging-away-to-Glacier lifecycle rule.
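As a sketch, the whole sequence for a single object might look like the script below. The bucket and key names are placeholders, and the commands are echoed for review rather than executed; swap `echo` for `eval` once you've filled in real names:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own bucket and key.
BUCKET="my-bucket"
KEY="images/drawing-001.png"

# 1. Request a temporary restore from Glacier (kept for 7 days here).
restore_cmd="aws s3api restore-object --restore-request Days=7 --bucket $BUCKET --key $KEY"

# 2. After the restore completes, copy the object over itself as STANDARD.
copy_cmd="aws s3 cp s3://$BUCKET/$KEY s3://$BUCKET/$KEY --force-glacier-transfer --storage-class STANDARD"

# 3. Drop the lifecycle configuration so nothing transitions back.
lifecycle_cmd="aws s3api delete-bucket-lifecycle --bucket $BUCKET"

# Echo for review; replace 'echo' with 'eval' to actually run each step.
echo "$restore_cmd"
echo "$copy_cmd"
echo "$lifecycle_cmd"
```

Note that `delete-bucket-lifecycle` removes the entire lifecycle configuration; if you have other rules you want to keep, use `put-bucket-lifecycle-configuration` with only the Glacier transition rule removed instead.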

Temporary restore:

aws s3api restore-object --restore-request Days=7 --bucket <bucketName> --key <keyName>

Replace with copied object:

aws s3 cp s3://bucketName/keyName s3://bucketName/keyName --force-glacier-transfer --storage-class STANDARD

Docs say:

The transition of objects to the GLACIER storage class is one-way.

You cannot use a lifecycle configuration rule to convert the storage class of an object from GLACIER to STANDARD or REDUCED_REDUNDANCY storage classes. If you want to change the storage class of an archived object to either STANDARD or REDUCED_REDUNDANCY, you must use the restore operation to make a temporary copy first. Then use the copy operation to overwrite the object as a STANDARD, STANDARD_IA, ONEZONE_IA, or REDUCED_REDUNDANCY object.

Ref.

...going back to Glacier

Being pedantic for a moment: the archived objects aren't moving between S3 and Glacier; they're permanently in Glacier, and temporary copies are made in S3. It's important to note that you're paying for both Glacier and S3 while the temporary restore lasts. Once your retention period expires, the S3 copies are deleted.
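You can see this dual state with `head-object`: while a restore is in progress or active, the response includes a `Restore` field showing whether the request is ongoing and, once complete, the expiry date of the temporary copy. A sketch, with placeholder names and the command echoed for review:

```shell
#!/bin/sh
# Hypothetical names -- substitute your own bucket and key.
BUCKET="my-bucket"
KEY="images/drawing-001.png"

# head-object reports StorageClass and, for restored objects, a field like:
#   "Restore": "ongoing-request=\"false\", expiry-date=\"Fri, 10 Aug 2018 00:00:00 GMT\""
status_cmd="aws s3api head-object --bucket $BUCKET --key $KEY"
echo "$status_cmd"
```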

RaGe answered Oct 30 '22 06:10


To provide a complete answer I've combined two other SO posts:

Step one temporarily restore everything:

  1. Get a listing of all GLACIER files (keys) in the bucket (you can skip this step if you are sure all files are in Glacier).

    aws s3api list-objects-v2 --bucket <bucketName> --query "Contents[?StorageClass=='GLACIER']" --output text | awk -F '\t' '{print $2}' > glacier-restore.txt

  2. Create a shell script and run it, replacing "<bucketName>" with your bucket name.

    #!/bin/sh
    
    IFS=$'\n'
    for x in `cat glacier-restore.txt`
      do
        echo "Begin restoring ${x}"
        aws s3api restore-object --restore-request Days=7 --bucket <bucketName> --key "${x}"
        echo "Done restoring ${x}"
      done
    

Credit Josh & @domenic-d.
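Since `restore-object` only queues the request, you may want to wait until every key in the list has actually finished restoring before running step two. A rough polling sketch, assuming the `glacier-restore.txt` listing from step one and a placeholder bucket name:

```shell
#!/bin/sh
# Count keys whose restore request is still in progress.
BUCKET="my-bucket"
LIST="glacier-restore.txt"

pending_count() {
  count=0
  while IFS= read -r key; do
    # A Restore field containing ongoing-request="true" means the
    # restore for this key hasn't finished yet.
    if aws s3api head-object --bucket "$BUCKET" --key "$key" \
        | grep -q 'ongoing-request=\\"true\\"'; then
      count=$((count + 1))
    fi
  done < "$LIST"
  echo "$count"
}

# Example loop (uncomment to use): re-check every 5 minutes.
# while [ "$(pending_count)" -gt 0 ]; do sleep 300; done
```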

Step two for permanent restore:

aws s3 cp s3://mybucket s3://mybucket --recursive --force-glacier-transfer --storage-class STANDARD

done and done.

Credit to @pete-dermott's comment here.

David answered Oct 30 '22 06:10


I used the following command to restore an S3 object from the Amazon Glacier storage class:

aws s3api restore-object --bucket bucket_name --key dir1/sample.obj --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'

Here a temporary copy of the object is made available for the duration specified in the restore request (25 days in the command above).

If the JSON syntax used in the example results in an error on a Windows client, replace the restore request with the following syntax:

--restore-request Days=25,GlacierJobParameters={"Tier"="Standard"}

Note: This will only create a temporary copy of the object for the specified duration. You have to use the copy operation to overwrite the object as a Standard object.

To change the object's storage class to Amazon S3 Standard use the following command:

aws s3 cp s3://bucket_name/dir1 s3://bucket_name/dir1 --storage-class STANDARD --recursive --force-glacier-transfer

This will recursively copy and overwrite existing objects with the Amazon S3 Standard storage class.
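Afterwards you can verify that nothing in the bucket is still in the GLACIER storage class; the same `list-objects-v2` query used earlier to build the restore list should now return nothing. A sketch with a placeholder bucket name, echoed for review:

```shell
#!/bin/sh
# Hypothetical bucket name -- substitute your own.
BUCKET="bucket_name"

# An empty result means every object is out of GLACIER.
verify_cmd="aws s3api list-objects-v2 --bucket $BUCKET --query \"Contents[?StorageClass=='GLACIER']\" --output text"
echo "$verify_cmd"
```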

snehab answered Oct 30 '22 07:10