This would be a perfect solution for me, as I need to serve some generated content to web browsers. My plan is to generate the content on demand and store it for next time. I don't want the browsers to call my service (which generates the content) every time; I want them to go directly to the "cached" resource if it's available and only call the service if it's not. So I'd put Varnish in front of server A, which runs the service, and server B, which stores the previously generated content versions. If Varnish gets a request for a resource it hasn't cached, it'll try server B first. Upon getting a 404 response, it'll request the same resource from server A.
Can Varnish be configured this way with VCL? If not, is there another solution like that you know about?
P.S. I don't want to send 302 redirects to the browser, and I don't have control over server B, so I can't make it send such redirects instead of 404s.
Varnish Cache is a "web application accelerator also known as a caching HTTP reverse proxy", according to Varnish's official website. You install it in front of any server that speaks HTTP and configure it to cache the contents, so it acts as a middleman between your clients and your web server: instead of the web server answering every request for the same content itself, Varnish serves it from cache. Varnish Cache is really, really fast; it typically speeds up delivery by a factor of 300 - 1000x, depending on your architecture.
Varnish Configuration Language (VCL) is a domain-specific language designed for use in Varnish. VCL lets users configure Varnish exactly how they see fit, giving total control of content caching policies, HTTP behavior, and routing.
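To give a flavour of that control, here is a tiny, hypothetical vcl_recv snippet (the file-extension list is purely illustrative) that strips cookies from requests for static assets so Varnish can cache them:
sub vcl_recv {
    # Cookies normally make a response uncacheable, so drop them
    # for asset types that are the same for every user.
    if (req.url ~ "\.(css|js|png|gif|jpg)$") {
        unset req.http.Cookie;
    }
}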
This is perfectly possible in Varnish. Make sure that in vcl_fetch (and possibly in vcl_error) you check the return status code (e.g. check for status >= 400) and do a restart if it failed, and in vcl_recv select the other backend if req.restarts > 0. For example:
# Server B holds the previously generated content; server A runs the
# generator service and is only asked when server B fails.
backend serverA {
    .host = "192.168.0.1";
    .port = "80";
}

backend serverB {
    .host = "192.168.0.2";
    .port = "80";
}

sub vcl_recv {
    if (req.restarts == 0) {
        # First attempt: look for the pre-generated content on server B.
        set req.backend = serverB;
    } else {
        # The request was restarted, so server B failed: fall back to server A.
        set req.backend = serverA;
    }
}

sub vcl_fetch {
    if (obj.status >= 400 && req.restarts == 0) {
        # Server B answered with an error (e.g. 404): retry against server A.
        restart;
    }
}

sub vcl_error {
    if (req.restarts == 0) {
        # Server B could not be reached at all: retry against server A.
        restart;
    }
}
But that being said, it sounds like you're reinventing the concept of a cache server, and Varnish is a great cache server. Why not have a single back-end server (serverA) and let Varnish cache your generated entities? You can set up complex rules, and you'll get cache expiration, purge management, and performance for free! :)
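As a rough illustration of that simpler setup, here is a minimal sketch, assuming the same obj.* style (older Varnish 2.x syntax) as the snippet above and a purely illustrative one-hour TTL:
backend serverA {
    .host = "192.168.0.1";
    .port = "80";
}

sub vcl_recv {
    # Everything goes to the generator service; Varnish keeps the results.
    set req.backend = serverA;
}

sub vcl_fetch {
    # Cache successfully generated pages for an hour (an example value;
    # tune it to how expensive regeneration is and how fresh content must be).
    if (obj.status == 200) {
        set obj.ttl = 1h;
    }
}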