POST requests are not cacheable by default, but they can be made cacheable if the response includes either an Expires header or a Cache-Control header with a directive that explicitly allows caching. Responses to PUT and DELETE requests are not cacheable at all.
By enabling the caching of GET requests, you can improve the response times of requests for resource data that were previously submitted by the same user. When caching is enabled, the data is retrieved from the browser cache instead of from the business object on the server.
The corresponding RFC 2616, in section 9.5 (POST), allows caching the response to a POST message if you use the appropriate headers.
Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.
Note that the same RFC states explicitly in section 13 (Caching in HTTP) that a cache must invalidate the corresponding entity after a POST request.
Some HTTP methods MUST cause a cache to invalidate an entity. This is either the entity referred to by the Request-URI, or by the Location or Content-Location headers (if present). These methods are:
- PUT
- DELETE
- POST
It's not clear to me how these specifications can allow meaningful caching.
This is also reflected and further clarified in RFC 7231 (Section 4.3.3.), which obsoletes RFC 2616.
Responses to POST requests are only cacheable when they include
explicit freshness information (see Section 4.2.1 of [RFC7234]).
However, POST caching is not widely implemented. For cases where an origin server wishes the client to be able to cache the result of a POST in a way that can be reused by a later GET, the origin server MAY send a 200 (OK) response containing the result and a Content-Location header field that has the same value as the POST's effective request URI (Section 3.1.4.2).
According to this, the result of a cached POST (if the server indicates this ability) can subsequently be used as the result of a GET request for the same URI.
According to RFC 2616 Section 9.5:
"Responses to POST method are not cacheable, UNLESS the response includes appropriate Cache-Control or Expires header fields."
So, YES, you can cache POST request response but only if it arrives with appropriate headers. In most cases you don't want to cache the response. But in some cases - such as if you are not saving any data on the server - it's entirely appropriate.
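As a concrete illustration, these are the kinds of headers such a response would carry. The values are illustrative assumptions, shown here with the WHATWG Response API available in modern Node.js:

```javascript
// A response that explicitly opts into caching via Cache-Control.
// Without such a header, a POST response is not cacheable at all.
const response = new Response('{"ok":true}', {
  headers: {
    'Content-Type': 'application/json',
    'Cache-Control': 'public, max-age=3600', // cacheable for one hour
  },
})
```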
Note, however, that many browsers, including Firefox 3.0.10 (current at the time of writing), will not cache POST responses regardless of the headers. IE behaves more sensibly in this respect.
Now, I want to clear up some confusion here regarding RFC 2616 S. 13.10. The POST method on a URI doesn't "invalidate the resource for caching" as some have stated here. It makes a previously cached version of that URI stale, even if its cache control headers indicated freshness of a longer duration.
Overall:
Basically POST is not an idempotent operation. So you cannot use it for caching. GET should be an idempotent operation, so it is commonly used for caching.
Please see section 9.1 of the HTTP 1.1 spec, RFC 2616 S. 9.1.
Other than GET method's semantics:
The POST method itself is semantically meant to post something to a resource. POST cannot be cached because if you do something once vs twice vs three times, then you are altering the server's resource each time. Each request matters and should be delivered to the server.
The PUT method itself is semantically meant to put or create a resource. It is an idempotent operation, but it won't be used for caching because a DELETE could have occurred in the meantime.
The DELETE method itself is semantically meant to delete a resource. It is an idempotent operation, but it won't be used for caching because a PUT could have occurred in the meantime.
Regarding client side caching:
A web browser will always forward your request even if it has a response from a previous POST operation. For example you may send emails with gmail a couple days apart. They may be the same subject and body, but both emails should be sent.
Regarding proxy caching:
A proxy HTTP server that forwards your message to the server would never cache anything but a GET or a HEAD request.
Regarding server caching:
A server by default wouldn't automatically handle a POST request via checking its cache. But of course a POST request can be sent to your application or add-in and you can have your own cache that you read from when the parameters are the same.
Invalidating a resource:
Checking the HTTP 1.1 RFC 2616 S. 13.10 shows that the POST method should invalidate the resource for caching.
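That invalidation rule can be sketched like this (a simplified model of a cache; real caches also honor the Location and Content-Location headers, and the function name is hypothetical):

```javascript
// Cached responses keyed by request URI.
const httpCache = new Map()

// Per RFC 2616 S. 13.10, unsafe methods make any cached entry for the
// request URI stale; GET and HEAD leave the cache untouched.
function onRequest(method, uri) {
  if (['POST', 'PUT', 'DELETE'].includes(method)) {
    httpCache.delete(uri) // previously cached response becomes stale
  }
}
```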
If you're wondering whether you can cache a POST request, and try researching an answer to that question, you likely won't succeed. When searching "cache post request" the first result is this StackOverflow question.
The answers are a confused mixture of how caching should work, how caching works according to the RFC, how caching should work according to the RFC, and how caching works in practice. Let's start with the RFC, walk through how browsers actually work, then talk about CDNs, GraphQL, and other areas of concern.
Per the RFC, POST requests must invalidate the cache:
13.10 Invalidation After Updates or Deletions
..
Some HTTP methods MUST cause a cache to invalidate an entity. This is
either the entity referred to by the Request-URI, or by the Location
or Content-Location headers (if present). These methods are:
- PUT
- DELETE
- POST
This language suggests POST requests are not cacheable, but that is not true (in this case). The cache is only invalidated for previously stored data. The RFC (appears to) explicitly clarify that yes, you can cache POST requests:
9.5 POST
..
Responses to this method are not cacheable, unless the response
includes appropriate Cache-Control or Expires header fields. However,
the 303 (See Other) response can be used to direct the user agent to
retrieve a cacheable resource.
Despite this language, setting the Cache-Control header must not cause subsequent POST requests to the same resource to be served from cache. POST requests must be sent to the server:
13.11 Write-Through Mandatory
..
All methods that might be expected to cause modifications to the
origin server's resources MUST be written through to the origin
server. This currently includes all methods except for GET and HEAD.
A cache MUST NOT reply to such a request from a client before having
transmitted the request to the inbound server, and having received a
corresponding response from the inbound server. This does not prevent
a proxy cache from sending a 100 (Continue) response before the
inbound server has sent its final reply.
How does that make sense? Well, you're not caching the POST request, you're caching the resource.
The POST response body can only be cached for subsequent GET requests to the same resource. Set the Location or Content-Location header in the POST response to communicate which resource the body represents. So the only technically valid way to cache a POST request is for subsequent GETs to the same resource.
The correct answer is both:
Although the RFC allows for caching requests to the same resource, in practice, browsers and CDNs do not implement this behavior, and do not allow you to cache POST requests.
Given the following example JavaScript application (index.js):
const express = require('express')
const app = express()

let count = 0

app
  .get('/asdf', (req, res) => {
    count++
    const msg = `count is ${count}`
    console.log(msg)
    res
      .set('Access-Control-Allow-Origin', '*')
      .set('Cache-Control', 'public, max-age=30')
      .send(msg)
  })
  .post('/asdf', (req, res) => {
    count++
    const msg = `count is ${count}`
    console.log(msg)
    res
      .set('Access-Control-Allow-Origin', '*')
      .set('Cache-Control', 'public, max-age=30')
      .set('Content-Location', 'http://localhost:3000/asdf')
      .set('Location', 'http://localhost:3000/asdf')
      .status(201)
      .send(msg)
  })
  .set('etag', false)
  .disable('x-powered-by')
  .listen(3000, () => {
    console.log('Example app listening on port 3000!')
  })
And given the following example web page (index.html):
<!DOCTYPE html>
<html>
<head>
  <script>
    async function getRequest() {
      const response = await fetch('http://localhost:3000/asdf')
      const text = await response.text()
      alert(text)
    }

    async function postRequest(message) {
      const response = await fetch(
        'http://localhost:3000/asdf',
        {
          method: 'post',
          // Stringify the body; fetch does not serialize plain objects.
          body: JSON.stringify({ message }),
        }
      )
      const text = await response.text()
      alert(text)
    }
  </script>
</head>
<body>
  <button onclick="getRequest()">Trigger GET request</button>
  <br />
  <button onclick="postRequest('trigger1')">Trigger POST request (body 1)</button>
  <br />
  <button onclick="postRequest('trigger2')">Trigger POST request (body 2)</button>
</body>
</html>
Install NodeJS and Express, then start the JavaScript application. Open the web page in your browser and try a few different scenarios to test browser behavior.
This shows that, even though you can set the Cache-Control and Content-Location response headers, there is no way to make a browser cache an HTTP POST request.
Browser behavior is not configurable, but if you're not a browser, you aren't necessarily bound by the rules of the RFC.
If you're writing application code, there's nothing stopping you from explicitly caching POST requests (pseudocode):
if (cache.get('hello')) {
  return cache.get('hello')
} else {
  response = post(url = 'http://somewebsite/hello', request_body = 'world')
  cache.put('hello', response.body)
  return response.body
}
CDNs, proxies, and gateways do not necessarily have to follow the RFC either. For example, if you use Fastly as your CDN, Fastly allows you to write custom VCL logic to cache POST requests.
Whether your POST request should be cached or not depends on the context.
For example, you might query Elasticsearch or GraphQL using POST where your underlying query is idempotent. In those cases, it may or may not make sense to cache the response depending on the use case.
In a RESTful API, POST requests usually create a resource and should not be cached. This matches the RFC's understanding of POST as a non-idempotent operation.
If you're using GraphQL and require HTTP caching across CDNs and browsers, consider whether sending queries using the GET method meets your requirements instead of POST. As a caveat, different browsers and CDNs may have different URI length limits, but operation safelisting (query whitelist), as a best practice for external-facing production GraphQL apps, can shorten URIs.
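A sketch of sending a GraphQL query via GET so that browsers and CDNs can cache it under normal HTTP rules. The endpoint URL, helper name, and parameter names are assumptions, not part of any specific GraphQL server's API:

```javascript
// Build a GET URL carrying the query and variables as query-string
// parameters, per the common GraphQL-over-HTTP convention.
function graphqlGetUrl(endpoint, query, variables) {
  const params = new URLSearchParams({
    query,
    variables: JSON.stringify(variables),
  })
  return `${endpoint}?${params.toString()}`
}

const url = graphqlGetUrl(
  'https://example.com/graphql',
  'query User($id: Int!) { user(id: $id) { name } }',
  { id: 42 }
)
// The resulting URL can then be fetched with a plain, cacheable GET:
//   fetch(url).then(r => r.json())
```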
If you do cache a POST response, it must be at the direction of the web application. This is what is meant by "Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields."
One can safely assume that the application, which knows whether or not the results of a POST are idempotent, decides whether or not to attach the necessary and proper cache control headers. If headers that allow caching are present, the application is telling you that the POST is, in effect, a super-GET: POST was only used because the data needed to perform the idempotent operation was too large, and irrelevant to the use of the URI as a cache key, to fit in the URI itself.
Subsequent GETs can be served from cache under this assumption.
An application that fails to attach the necessary and correct headers to differentiate between cacheable and non-cacheable POST responses is at fault for any invalid caching results.
That said, each POST that hits the cache requires validation using conditional headers. This is required in order to refresh the cache content, so that the results of a POST are reflected in responses before the cached object's lifetime expires.
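A sketch of building such a conditional request (the function name is hypothetical; the ETag would come from the previously cached response):

```javascript
// Build headers for revalidating a cached entry. If we hold an ETag
// from the cached response, send If-None-Match; a 304 (Not Modified)
// reply then tells us the cached body is still valid.
function conditionalHeaders(etagFromCache) {
  return etagFromCache ? { 'If-None-Match': etagFromCache } : {}
}
```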