 

Do I want a container handling multiple requests?

This question does not pertain to networking or hosting, but to how I architect my application: If I'm setting up a docker container to be a PHP web node, is the proper convention that I set it up such that it can handle multiple connections?

Alternatively, would it be better to set it up such that it handles requests one at a time, and then if I want to handle more connections concurrently, spin up multiple instances of the same image?

Alexander Trauzzi asked Nov 11 '14


2 Answers

First, please take a look at docker.io and the tutorial - it's important to have a solid understanding of how Docker is intended to be used before you get into solving specific architectural problems.

Now, in the PHP world, you would typically run Apache with mod_php (or nginx with php-fpm, or similar) within your container. That one container will serve all incoming requests.
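As a minimal sketch of that setup, a single container built on the official php:apache image bundles Apache and mod_php together; the version tag and source path here are illustrative, not prescriptive:

```dockerfile
# Minimal sketch: one container running Apache with mod_php,
# based on the official php:apache image. Tag and paths are assumptions.
FROM php:8.2-apache

# Copy the application into Apache's document root (assumed layout).
COPY src/ /var/www/html/

# Apache's worker processes inside this one container will serve
# many simultaneous requests.
EXPOSE 80
```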

If you need to load-balance your application, then you would run another container (likely on another host) with a reverse proxy (like HAProxy) that would handle this for you. You could also configure your DNS to round-robin between your webserver instances, with or without HAProxy in front.
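A hypothetical haproxy.cfg fragment for that round-robin setup might look like the following; the web1/web2 hostnames are assumptions standing in for your web containers (resolved via Docker links or DNS):

```
# Hypothetical HAProxy fragment: round-robin across two web containers.
frontend http_in
    bind *:80
    mode http
    default_backend php_web

backend php_web
    mode http
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
```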

creack answered Sep 22 '22


A Docker container is generally meant to run a single (logical) application and limit the resources that application uses (memory, disk I/O, network bandwidth, etc.). This doesn't necessarily mean running a single process; it may have ancillary things such as a process monitor, but in general a PHP container is only going to run the PHP interpreter, which in turn runs multiple worker copies of itself.

nginx and php-fpm are both perfectly capable of handling multiple simultaneous requests, up to the available resource limits. Thus a single container can serve multiple requests. So can Apache with mod_php, though in this case PHP is embedded in Apache and constrains what Apache can handle. So you may well want to separate the web server and PHP into separate (linked) containers anyway.
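One way to sketch that separation is a docker-compose.yml with php-fpm and nginx as distinct services; the image tags, mount paths, and nginx.conf location are assumptions for illustration:

```yaml
# Illustrative docker-compose.yml: nginx and php-fpm as separate,
# linked containers sharing the application source.
services:
  php:
    image: php:8.2-fpm
    volumes:
      - ./src:/var/www/html

  web:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./src:/var/www/html:ro
    depends_on:
      - php
```

The nginx.conf would then proxy PHP requests to `php:9000` via fastcgi_pass, so each tier can be scaled independently.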

Eventually you will get enough traffic that a single container can't handle it all fast enough, or at all. At that point you will either enlarge the container or fire up a new one. As a ballpark, nginx can handle thousands of simultaneous requests and PHP can handle dozens. So at scale you might have 800 PHP containers served by four nginx containers, with two HAProxy load-balancer containers in front of them, all processing from 10,000 up to a peak of 40,000 requests per second.
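The back-of-the-envelope arithmetic above can be checked in a few lines; the per-container worker count and request duration are illustrative assumptions, not benchmarks:

```python
import math

# Illustrative assumptions, not measured figures:
PHP_WORKERS_PER_CONTAINER = 25   # "dozens" of simultaneous PHP requests
AVG_REQUEST_SECONDS = 0.5        # assumed average PHP request duration

def php_containers_needed(requests_per_second: float) -> int:
    """Containers needed so concurrent requests fit the available workers."""
    per_container_rps = PHP_WORKERS_PER_CONTAINER / AVG_REQUEST_SECONDS
    return math.ceil(requests_per_second / per_container_rps)

print(php_containers_needed(40000))  # 800 at the assumed peak rate
```

With these numbers each container sustains about 50 requests per second, so the 40,000 req/s peak works out to the 800 PHP containers mentioned above.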

All of those 800 PHP containers won't necessarily be on 800 hosts, though. That could be the case if you are using, say, t2.small AWS instances, where you would just have the container use all the resources of the VM. But if IT deploys to bare metal, which usually has far more resources than a single VM, you will almost certainly run multiple containers per host so as to utilize all of its resources. PHP tends to be CPU-bound, so IT may also run your containers side-by-side with totally unrelated containers that don't use much CPU but use lots of RAM or disk, for instance.

All that is to say, yes, your single container can and will handle multiple simultaneous requests, and your program should be aware that this is going on. This isn't usually an issue, but it does mean you should avoid issues that may cause contention such as database locking, or all your containers could grind to a screeching halt.

Michael Hampton answered Sep 23 '22