 

Horizontal scaling: routing user-generated subdomains between servers

I maintain a web application that is outgrowing a single VPS. The architecture consists of a large number of small users, each with their own subdomain. Users do not interact with each other. Load means I have to move some users, and all new users, to another installation of the web application on a separate server.

Currently, every user subdomain hits the same virtual host, where a single PHP front controller serves the appropriate content based on the hostname. A single wildcard DNS record for *.mydomain.com points to the current server.

What is my best option for routing different user subdomains to different servers?

My thoughts:

  • A new subdomain level for every server: user.s1.mydomain.com, user.s2.mydomain.com, and so on (inelegant, and leaks infrastructure details)
  • Run my own DNS server to route users between servers (extra point of failure, unfamiliar technology)
  • A central front controller / balancer that reverse-proxies every request to the appropriate server (extra point of failure, potentially limited connections)
asked Oct 09 '22 by mappu

1 Answer

At this point in scaling out the application, I'd go with a central front load balancer. Nginx should handle any load that a single dynamic server can generate. We run nginx in front of six dynamic servers and one static-content server, and there are no bottlenecks in sight on nginx.

At your scale, set up nginx to serve all static content itself and reverse-proxy dynamic content to as many boxes as needed. The configuration for a simple proxy pass is close to:

upstream upstream_regular_backend {
    fair;               # response-time balancing; requires the third-party upstream-fair module
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

server {
    listen 0.0.0.0:80;
    server_name  example.com;
    proxy_set_header Host $host;
    proxy_set_header  X-Real-IP  $remote_addr;
    location / {
        proxy_pass http://upstream_regular_backend;
    }
}
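Since each user lives on their own subdomain, the same balancer can also pin specific subdomains to specific backends instead of (or in addition to) round-robin balancing. A sketch using nginx's `map` directive keyed on `$host` — the subdomain names and backend addresses here are hypothetical, not from the answer:

```nginx
# Route individual user subdomains to a chosen backend;
# everyone not listed stays on the original server.
map $host $user_backend {
    default             10.0.0.1:80;   # original server
    alice.mydomain.com  10.0.0.2:80;   # users migrated to the new box
    bob.mydomain.com    10.0.0.2:80;
}

server {
    listen 80;
    server_name *.mydomain.com;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # proxy_pass with a variable is allowed; IP:port values
        # need no resolver directive.
        proxy_pass http://$user_backend;
    }
}
```

This keeps the single wildcard DNS record and moves all routing decisions into one small table at the frontend.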

For serving static content and passing back all the rest, something like:

server {
    listen 0.0.0.0:80;
    server_name  example.com;
    proxy_set_header Host $host;
    proxy_set_header  X-Real-IP  $remote_addr;
    index index.php;
    root /some/dir/;
    location ~ \.php {
        proxy_pass http://upstream_regular_backend;
    }
}

Naturally, if you are not using PHP, tweak the configuration accordingly.
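For example, if the application serves everything dynamically except an assets directory, the split could look like this (the `/static/` path and directory are illustrative):

```nginx
server {
    listen 0.0.0.0:80;
    server_name  example.com;
    proxy_set_header Host $host;
    proxy_set_header  X-Real-IP  $remote_addr;

    # Serve files under /some/dir/static/ straight from disk...
    location /static/ {
        root /some/dir;
    }
    # ...and hand everything else to the backend pool.
    location / {
        proxy_pass http://upstream_regular_backend;
    }
}
```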

In the upstream definition, "fair;" load-balances backends by response time (note that it is provided by the third-party upstream-fair module, not stock nginx). For caching reasons, you may want to use "ip_hash;" instead, as it always sends requests from a given client IP to the same server.
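Swapping strategies is a one-line change in the upstream block; with the built-in ip_hash it would read:

```nginx
upstream upstream_regular_backend {
    ip_hash;             # built-in: hashes the client IP, so each client
                         # consistently lands on the same backend
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}
```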

Our setup is a bit further down the road. We have nginx load-balancers proxying a varnish cache, which in turn proxies the dynamic content servers.

If you are worried about nginx being a single point of failure, set up a secondary server ready to take over the frontend's IP if it fails.

answered Oct 13 '22 by Sérgio Carvalho