I maintain a web application that is outgrowing a single VPS. The architecture consists of a large number of small users, each with their own subdomain. Users do not interact. The load means I have to move some users, and all new users, to a second installation of the web application on a separate server.
Currently, every user subdomain resolves to the same virtual host, where a single PHP front controller displays the appropriate content based on the hostname. A single wildcard DNS record for *.mydomain.com points to the current server.
What is my best option for routing different user subdomains to different servers?
My thoughts:
At that point in scaling out the application, I'd go with a central frontend load balancer. A single nginx instance will comfortably handle far more load than any one dynamic backend can serve. We run nginx in front of six dynamic servers and one static-content server, and there are no bottlenecks in sight on the nginx side.
At your scale, set up nginx to serve all static content itself and reverse-proxy dynamic content to as many boxes as needed. A simple proxy-pass setup looks something like:
upstream upstream_regular_backend {
    fair;
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

server {
    listen 0.0.0.0:80;
    server_name example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    location / {
        proxy_pass http://upstream_regular_backend;
    }
}
For serving static content and passing back all the rest, something like:
server {
    listen 0.0.0.0:80;
    server_name example.com;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;

    index index.php;
    root /some/dir/;

    location ~ \.php$ {
        proxy_pass http://upstream_regular_backend;
    }
}
Naturally, if you are not using PHP, tweak the configuration accordingly.
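Since users never interact and each one lives on a fixed subdomain, you can also route each subdomain to a specific backend explicitly instead of load-balancing, using nginx's map directive. A sketch (the subdomains and backend addresses below are hypothetical; the map block goes in the http context):

```nginx
# Map each user subdomain to a backend; hypothetical names and addresses.
map $host $user_backend {
    default              10.0.0.1:80;  # old server keeps existing users
    alice.mydomain.com   10.0.0.2:80;  # moved and new users on the new box
    bob.mydomain.com     10.0.0.2:80;
}

server {
    listen 80;
    server_name *.mydomain.com;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # proxy_pass with a variable works fine here because the map
        # values are IP addresses (hostnames would require a resolver).
        proxy_pass http://$user_backend;
    }
}
```

This keeps the routing decision in one place on the frontend, so moving a user to another server is a one-line config change plus a reload.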
On the upstream definition, "fair;" will load-balance backends based on response time; note that it requires the third-party upstream-fair module. For caching reasons, you may want to use "ip_hash;" instead, as it always lands requests from a given client on the same server.
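For example, the same upstream pinned by client IP (ip_hash is built into stock nginx, so no extra module is needed) would look like:

```nginx
upstream upstream_regular_backend {
    ip_hash;              # hash of the client IP picks the backend
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}
```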
Our setup is a bit further down the road: nginx load balancers proxying a Varnish cache, which in turn proxies the dynamic content servers.
If you are worried about nginx being a single point of failure, set up a secondary server ready to take over the frontend's IP if the primary fails.
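A common way to implement that failover (a sketch, not part of the original setup) is keepalived with a floating VRRP address; the interface name and virtual IP below are placeholders, and the backup box runs the same config with state BACKUP and a lower priority:

```conf
# /etc/keepalived/keepalived.conf on the primary frontend
vrrp_instance VI_1 {
    state MASTER
    interface eth0            # placeholder: your public interface
    virtual_router_id 51
    priority 100              # backup uses e.g. 90
    advert_int 1
    virtual_ipaddress {
        203.0.113.10          # placeholder: the floating frontend IP
    }
}
```

The wildcard DNS record points at the floating IP, so a failover does not require any DNS change.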