I have a server that runs HAProxy to load balance our MySQL servers. Some of the servers may shut down when the average load is low for an extended period of time, and they come back up automatically when the load becomes high again. The problem is that once an instance goes down, HAProxy never checks it again, so when the instance comes back up it is ignored. Our current workaround is to restart HAProxy when this happens.
Here is our configuration file:
global
    log 127.0.0.1 local0 notice
    user haproxy
    group haproxy

defaults
    log global
    retries 2
    timeout connect 3000
    timeout server 5000
    timeout client 5000

listen mysql-cluster
    bind 0.0.0.0:3306
    mode tcp
    option mysql-check user haproxy_check
    balance leastconn
    server mysql-1 ********:3306 check
    server mysql-2 ********:3306 check
Would changing retries from 2 to a much larger number solve our problem?
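Or is the health-check timing what I should be tuning instead? Something like this is what I have in mind (inter, fall and rise are the standard HAProxy check options; the values below are just guesses on my part):

    # re-check every 2s; mark DOWN after 3 failed checks, UP again after 2 successful ones
    server mysql-1 ********:3306 check inter 2000 fall 3 rise 2
    server mysql-2 ********:3306 check inter 2000 fall 3 rise 2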
EDIT: As requested, here is my HAProxy version:
$ haproxy -v
HA-Proxy version 1.4.24 2013/06/17
Copyright 2000-2013 Willy Tarreau <[email protected]>
Thanks
We had a similar situation in our AWS infrastructure: HAProxy on every instance for access to our RDS replicas (we have 3 replicas, but the application can only work with one hostname). We solved this globally by replacing HAProxy with internal Route53 DNS: multiple records with the same name (for example db.example.internal) and the same weight (Route53 weighted routing policy), plus a Route53 health check for every replica (a TCP check on port 3306). This solution works great for us, and if we need to add or remove an RDS replica, the only things we have to change are the Route53 records and health checks; we no longer maintain the HAProxy status/config on every instance.
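In case it is useful, this is roughly the shape of the setup via the AWS CLI (a sketch rather than our exact commands; the zone ID, record name and replica endpoint are placeholders):

    # TCP health check against one replica's MySQL port (one per replica)
    aws route53 create-health-check \
        --caller-reference replica-1-check \
        --health-check-config Type=TCP,FullyQualifiedDomainName=replica-1.xxxx.us-east-1.rds.amazonaws.com,Port=3306

    # Weighted record for that replica, tied to its health check;
    # repeat with the same Name and Weight for each replica
    aws route53 change-resource-record-sets \
        --hosted-zone-id Z123EXAMPLE \
        --change-batch '{
          "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
              "Name": "db.example.internal",
              "Type": "CNAME",
              "SetIdentifier": "replica-1",
              "Weight": 10,
              "TTL": 10,
              "HealthCheckId": "<id returned by create-health-check>",
              "ResourceRecords": [{"Value": "replica-1.xxxx.us-east-1.rds.amazonaws.com"}]
            }
          }]
        }'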