We are currently using the new Coldline storage to back up files off site, and the storage itself is very cost effective. We run gsutil rsync once a day to make sure our Coldline bucket is up to date.
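For reference, the daily job is essentially the following (bucket and path names here are just placeholders):

# mirror the local backup directory into the Coldline bucket (run once a day from cron)
gsutil -m rsync -r /local/backups gs://our-coldline-bucket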
The problem is that gsutil rsync generates a massive number of Class A requests, which are quite expensive. In our case the request charges would be at least 5x the cost of the Coldline storage itself, which makes it no longer a good deal.
Are we going to have to write a custom solution to avoid these charges, is there a better option for this type of backup, or is there some way to get rsync to not generate so many requests?
I think there is one pricing trick you can use if the expense is from the storage.objects.list operation. From the GCS operations pricing page: "When an operation applies to a bucket, such as listing the objects in a bucket, the default storage class set for that bucket determines the operation cost." So the trick is: leave the bucket's default storage class as STANDARD and set the storage class on the objects themselves (with gsutil cp -s ARCHIVE, or by setting the appropriate option on the upload request using the API). As far as I understand, this now means you will be charged at the STANDARD rate for listing the bucket ($0.05/10k operations instead of $0.50/10k operations).
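A minimal sketch of that setup with gsutil (the bucket name, location, and file name are placeholders):

# create the bucket with STANDARD as its default storage class
gsutil mb -c standard -l us-east1 gs://our-backup-bucket

# upload each object with the ARCHIVE storage class set per object
gsutil cp -s archive backup.tar.gz gs://our-backup-bucket/

# bucket-level operations such as listing should then be billed at the STANDARD rate
gsutil ls gs://our-backup-bucket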
The one challenge is that gsutil rsync does not support the -s ARCHIVE flag that is supported by gsutil cp, so you won't be able to use it for this. You might want to look at other tools, like possibly rclone.
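For example, rclone can set the storage class per object while syncing to a GCS bucket whose default class is left as STANDARD. A rough sketch, assuming a configured GCS remote named gcs-remote and an rclone version whose --gcs-storage-class flag accepts ARCHIVE:

# mirror the local backup directory, uploading objects with the ARCHIVE storage class
rclone sync /local/backups gcs-remote:our-backup-bucket --gcs-storage-class ARCHIVE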