I'm using Google Cloud Storage to store and retrieve some files, and my problem is that the response times I'm getting are inconsistent, and sometimes very slow.
My application is an ASP.NET Core app running in Google Container Engine. The Container Engine cluster is in europe-west1-c. The Cloud Storage bucket is Multi-Regional, in the EU location, and it's a secure bucket (not publicly accessible). I'm using the latest version of the official Google.Cloud.Storage.V1 SDK package to access Cloud Storage (I tried both 1.0.0 and the new 2.0.0-beta01). I'm using a singleton instance of the StorageClient object, which should do connection pooling under the hood.
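For reference, the wiring looks roughly like this; a minimal sketch, assuming the standard ASP.NET Core Startup pattern and application default credentials (the registration itself is my illustration, not from the SDK docs):

using Google.Cloud.Storage.V1;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // One StorageClient for the whole app lifetime, so the underlying
        // HTTP connections can be reused across requests.
        services.AddSingleton(StorageClient.Create());
    }
}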
I'm measuring and logging the time it takes to download a file from Cloud Storage; this is the measurement I do:
var ms = new MemoryStream(); // destination stream for the download
var sw = Stopwatch.StartNew(); // Stopwatch has no CreateNew; StartNew creates and starts it
await client.DownloadObjectAsync(googleCloudOptions.StorageBucketName, filepath, ms);
sw.Stop(); // sw.ElapsedMilliseconds is what gets logged
So I'm directly measuring the SDK call without any of my own application logic.
The numbers I'm getting for this measurement look like this during a typical period:
44ms
56ms
501ms
274ms
90ms
237ms
145ms
979ms
446ms
148ms
You can see that the variance is already pretty large to begin with (and the response time is often really sluggish).
But occasionally I even get response times like this (the slowest I've seen was over 10 seconds):
172ms
4,348ms
72ms
51ms
179ms
2,508ms
2,592ms
100ms
This is really bad considering that the file I'm downloading is ~2 KB in size, my application is doing fewer than 1 request per second, and I'm running inside the Google Cloud. I don't think a cold bucket can be the problem either, since I'm mainly downloading the same handful of files, and I'm doing at least a couple of requests every minute.
Does anyone know what could be causing this slowness, or how I could investigate what's going wrong?
Update: Following @jterrace's suggestion, I've run gsutil perfdiag in the production environment and uploaded both the terminal output and the generated JSON report here.
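For anyone wanting to reproduce this, the diagnostic was run roughly like this; a sketch, with the bucket name as a placeholder (-o writes the JSON report to a file):

gsutil perfdiag -o perfdiag-report.json gs://my-bucket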
I also collected some more measurements; here you can see the statistics for the last 7 days.
You can see that slow requests don't happen very often, but response times over half a second are not rare, and we even see a handful of requests over 5 seconds every day.
What I'd like to figure out is whether we're doing something wrong, or whether this is expected with Cloud Storage and we simply have to be prepared to handle these slow responses on our side.
For a single large file, consider parallel composite uploads. This method breaks a large file into smaller chunks to increase transfer speed; the chunks are uploaded and validated in parallel.
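As a sketch of how that's enabled in gsutil (bucket and file names are placeholders; the threshold option is the documented switch for parallel composite uploads):

gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp large-file gs://my-bucket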
We have the same issue with GCS. The only answer we got (from GCS support) was to use retries with exponential backoff: make the first request with a 200 ms timeout, the next retry with 400 ms, and so on.
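A minimal sketch of that backoff scheme, assuming the same StorageClient as above and the usual System.* and Google.Cloud.Storage.V1 usings; the helper name, the 5-attempt cap, and the exception handling are my own illustration, GCS support only described the growing timeouts:

async Task<byte[]> DownloadWithBackoffAsync(StorageClient client, string bucket, string objectName)
{
    var timeout = TimeSpan.FromMilliseconds(200); // first attempt: 200 ms
    for (var attempt = 1; ; attempt++)
    {
        using (var cts = new CancellationTokenSource(timeout))
        using (var ms = new MemoryStream())
        {
            try
            {
                await client.DownloadObjectAsync(bucket, objectName, ms, cancellationToken: cts.Token);
                return ms.ToArray();
            }
            catch (OperationCanceledException) when (attempt < 5)
            {
                // Attempt timed out: double the timeout (400 ms, 800 ms, ...) and retry.
                timeout = TimeSpan.FromMilliseconds(timeout.TotalMilliseconds * 2);
            }
        }
    }
}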