
Docker Swarm HAProxy Not Load Balancing w/ Overlay Networking

I have spent the past few days working on creating a Docker swarm on Digital Ocean. Note: I don't want to use --link to communicate with the other apps/containers because links are considered deprecated and don't work well with Docker Swarm (i.e. I can't add more app instances to the load balancer without re-composing the entire swarm).

I am using one server as a kv-store server running Consul, set up according to this guide. Because I'm on Digital Ocean, I'm using DO private networking so the machines can communicate with each other.
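
For reference, the provisioning looked roughly like this. This is only a sketch: the access token, machine names, the progrium/consul image, and the eth1 private interface are assumptions based on a typical Digital Ocean setup of that era, not the exact commands from the guide.

# key-value store droplet running Consul
docker-machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN \
    --digitalocean-private-networking kvstore
docker $(docker-machine config kvstore) run -d -p 8500:8500 progrium/consul -server -bootstrap

# swarm master and one agent, both pointing their cluster store at Consul
# and advertising on the DO private interface (eth1)
docker-machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN \
    --digitalocean-private-networking \
    --swarm --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    swarm-master
docker-machine create -d digitalocean --digitalocean-access-token=$DO_TOKEN \
    --digitalocean-private-networking \
    --swarm \
    --swarm-discovery="consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip kvstore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    swarm-agent-1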

I then create a swarm master and a swarm agent, and start the overlay network, which is running on all machines. Here is my docker-compose.yml:

proxy:
    image: tutum/haproxy 
    ports:
        - "1936:1936"
        - "80:80"

web:
    image: tutum/hello-world
    expose:
        - "80"

When I do this it creates the two containers. HAProxy is running, since I can access the stats at http://<ip-address>:1936; however, when I try to reach the web server/load balancer on port 80 I get connection refused. Everything seems to be connected, though, when I run docker-compose ps:

       Name                      Command               State                                 Ports
--------------------------------------------------------------------------------------------------------------------------------
splashcloud_proxy_1   python /haproxy/main.py          Up      104.236.109.58:1936->1936/tcp, 443/tcp, 104.236.109.58:80->80/tcp
splashcloud_web_1     /bin/sh -c php-fpm -d vari ...   Up      80/tcp

The only thing I can think of is that HAProxy is not linking to the web container, but I'm not sure how to troubleshoot this.

I'd appreciate any help on this.

asked Dec 24 '15 by Zach Russell


1 Answer

You cannot use the tutum/haproxy image here, unfortunately. That image is specifically tailored around links. I fear you do need some scripted way of passing the web server IPs to HAProxy.

But this isn't all that hard :) I would suggest you start from this example. First set up the docker-compose.yml => let's use two web nodes, just so you can make sure what you're doing makes sense and actually load balances along the way :)

proxy:
    build: ./haproxy/
    ports:
        - "1936:1936"
        - "80:80"
web1:
    container_name: web1
    image: tutum/hello-world
    expose:
        - "80"
web2:
    container_name: web2
    image: tutum/hello-world
    expose:
        - "80"

Now with HAProxy you need to set up your own Dockerfile according to the official image's documentation: https://hub.docker.com/_/haproxy/

I did this in the haproxy subfolder using the suggested file:

FROM haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
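
Whenever haproxy.cfg changes, the proxy image needs to be rebuilt and the container recreated (assuming the build: ./haproxy/ entry from the compose file above):

# rebuild the proxy image and recreate just that service
docker-compose build proxy
docker-compose up -d proxy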

Then for the HAProxy config file haproxy.cfg I tested this:

global
    stats socket /var/run/haproxy.stat mode 660 level admin
    stats timeout 30s
    user root
    group root

defaults
    mode    http
    timeout connect 5000
    timeout client  50000
    timeout server  50000

frontend localnodes
    bind *:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 172.17.0.2:80
    server web02 172.17.0.3:80

listen stats 
    bind *:1936
    mode http
    stats enable
    stats uri /
    stats hide-version
    stats auth someuser:password
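
Before starting the proxy it is worth validating the config; the official image ships the haproxy binary, so a pure syntax check can be run without starting the service (run from the directory containing the haproxy subfolder):

# mount the config read-only and ask haproxy to check it (-c) without serving traffic
docker run --rm -v "$PWD/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy \
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg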

Obviously the IPs here will only work in the default setup; I'm fully aware of this :) You need to do something about these two lines:

server web01 172.17.0.2:80
server web02 172.17.0.3:80

I think you're in luck here working with Digital Ocean :) As far as I understand, with DO you have private IP addresses at your disposal, and you are planning to run the swarm nodes on them. I suggest simply putting those node IPs in place of my example IPs, running your web servers on them, and you're good :)
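
A sketch of how to look up the addresses for those two server lines instead of hard-coding them; the machine and container names are the assumed ones from the examples above:

# the DO private IP of a node, if the web servers are published on the nodes themselves
docker-machine ssh swarm-agent-1 "ip -4 addr show eth1"

# or, if web1/web2 are attached to an overlay network, their addresses on that network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' web1 web2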

answered Sep 30 '22 by Armin Braun