 

Amazon CloudFront Latency

I am experimenting with AWS S3 and CloudFront for a web application that I am developing.

In the app I'm letting users upload files to the S3 bucket (using the AWS SDK) and making them available via the CloudFront CDN. The issue is that even when the files are uploaded and ready in the S3 bucket, it takes a minute or two for them to become available at the CloudFront URL. Is this normal?
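For reference, the upload flow is roughly like this (a minimal sketch using boto3; the bucket name and CloudFront domain below are placeholders, not my real values):

    import boto3

    BUCKET = "my-app-uploads"                           # placeholder bucket name
    CLOUDFRONT_DOMAIN = "dxxxxxxxxxxxx.cloudfront.net"  # placeholder distribution domain

    s3 = boto3.client("s3")

    def upload_and_get_cdn_url(local_path, key):
        # Upload the user's file to S3; the CloudFront distribution uses this bucket as its origin.
        s3.upload_file(local_path, BUCKET, key)
        # The file should then be reachable through the CDN at this URL.
        return f"https://{CLOUDFRONT_DOMAIN}/{key}"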

asked Feb 21 '16 by Ahsan


People also ask

Does CloudFront reduce latency?

One of the purposes of using CloudFront is to reduce the number of requests that your origin server must respond to directly. With CloudFront caching, more objects are served from CloudFront edge locations, which are closer to your users. This reduces the load on your origin server and reduces latency.

What is origin latency in CloudFront?

Origin Latency: The total time spent in milliseconds from when CloudFront receives a request to when it provides a response to the network (not the viewer), for requests that are served from the origin, not the CloudFront cache. Origin Latency allows you to monitor the performance of your origin server.

Why does CloudFront take so long?

Because CloudFront delivers content through a worldwide network of low-latency, high-performance edge locations, configuration changes such as certificates, origins, and other settings can take additional time to propagate to all of them.

Is CloudFront fast?

Fast, highly secure and programmable content delivery network (CDN). Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.


2 Answers

CloudFront attempts to fetch uncached content from the origin server in real time. There is no "replication delay" or similar issue because CloudFront is a pull-through CDN. Each CloudFront edge location knows only about your site's existence and configuration; it doesn't know about your content until it receives requests for it. When that happens, the CloudFront edge fetches the requested content from the origin server, and caches it as appropriate, for serving subsequent requests.

The issue that's occurring here is related to a concept sometimes called "negative caching" -- caching the fact that a request won't work -- which is typically done to avoid hammering the origin of whatever's being cached with requests that are likely to fail anyway.

By default, when your origin returns an HTTP 4xx or 5xx status code, CloudFront caches these error responses for five minutes and then submits the next request for the object to your origin to see whether the problem that caused the error has been resolved and the requested object is now available.

— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html

If the browser, or anything else, tries to download the file from that particular CloudFront edge before the upload into S3 is complete, S3 will return an error, and CloudFront -- at that edge location -- will cache that error and remember, for the next 5 minutes, not to bother trying again.

Not to worry, though -- this timer is configurable, so if the browser is doing this under the hood and outside your control, you should still be able to fix it.
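As a practical guard on the application side (a sketch assuming boto3, not something this fix requires), you can also avoid handing out the CloudFront URL until S3 actually reports the object as present, so the edge never gets a chance to cache the error:

    import boto3

    s3 = boto3.client("s3")

    def wait_until_available(bucket, key):
        # Poll S3 (HeadObject) until the uploaded object exists, so the CloudFront URL
        # isn't given to the browser before the upload has actually completed.
        waiter = s3.get_waiter("object_exists")
        waiter.wait(Bucket=bucket, Key=key)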

You can specify the error-caching duration—the Error Caching Minimum TTL—for each 4xx and 5xx status code that CloudFront caches. For a procedure, see Configuring Error Response Behavior.

— http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/custom-error-pages.html


To configure this in the console:

  • When viewing the distribution configuration, click the Error Pages tab.

  • For each error where you want to customize the timing, begin by clicking Create Custom Error Response.

  • Choose the error code you want to modify from the drop-down list, such as 403 (Forbidden) or 404 (Not Found) -- your bucket configuration determines which code S3 returns for missing objects, so if you aren't sure, change 403 then repeat the process and change 404.

  • Set Error Caching Minimum TTL (seconds) to 0.

  • Leave Customize Error Response set to No (If set to Yes, this option enables custom response content on errors, which is not what you want. Activating this option is outside the scope of this question.)

  • Click Create. This takes you back to the previous view, where you'll see Error Caching Minimum TTL for the code you just defined.

Repeat these steps for each HTTP response code you want to change from the default behavior (which is the 300 second hold time, discussed above).
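If you'd rather not click through the console, the same change can be made through the API. A rough sketch with boto3, using a placeholder distribution ID: it fetches the current configuration, sets the Error Caching Minimum TTL for 403 and 404 to 0, and writes the configuration back.

    import boto3

    cloudfront = boto3.client("cloudfront")
    DISTRIBUTION_ID = "E1EXAMPLE12345"  # placeholder distribution ID

    resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
    config, etag = resp["DistributionConfig"], resp["ETag"]

    # Cache 403/404 error responses for 0 seconds instead of the default 300.
    config["CustomErrorResponses"] = {
        "Quantity": 2,
        "Items": [
            {"ErrorCode": 403, "ErrorCachingMinTTL": 0},
            {"ErrorCode": 404, "ErrorCachingMinTTL": 0},
        ],
    }

    cloudfront.update_distribution(
        Id=DISTRIBUTION_ID, IfMatch=etag, DistributionConfig=config
    )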

When you've made all the changes you want, return to the main CloudFront console screen where the distributions are listed. Wait for the distribution state to change from In Progress to Deployed (formerly this took quite some time, but now the changes typically take about 5 minutes to be pushed out to all the edges) and test.

answered Oct 12 '22 by Michael - sqlbot


Are these new files being written to S3 for the first time, or are they updates to existing files? S3 provides read-after-write consistency for new objects, and given CloudFront's pull model you should not be having this issue with new files written to S3. If you are, then I would open a ticket with AWS.

If these are updates to existing files, then you have both S3 eventual consistency and CloudFront cache expiration to deal with. Both of which could cause this sort of behavior.
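If updates to existing files turn out to be the problem, one common workaround (beyond just waiting for the cache TTL to expire, and not something specific to this answer) is to issue a CloudFront invalidation for the changed paths after overwriting the object. A minimal sketch with boto3 and a placeholder distribution ID:

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    def invalidate(distribution_id, key):
        # Ask CloudFront to drop its cached copy of the updated object.
        cloudfront.create_invalidation(
            DistributionId=distribution_id,           # placeholder ID goes here
            InvalidationBatch={
                "CallerReference": str(time.time()),  # must be unique per request
                "Paths": {"Quantity": 1, "Items": [f"/{key}"]},
            },
        )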

answered Oct 12 '22 by Mark B