HAProxy - Why is time taken to get client requests very high?

We have an HAProxy (v1.5.1) setup on Amazon EC2 which is doing two jobs:

  1. Routing traffic based on subdomain of the request
  2. SSL Termination

The ulimit on our server is 128074, and concurrent connections are ~3000.

Our config file is shown below. The problem we are facing is that the Tq time in the HAProxy logs is very high (2-3 seconds). Is there anything wrong with the config, or is there something we are missing?

global
    daemon
    maxconn 64000
    tune.ssl.default-dh-param 2048
    log 127.0.0.1 local0 debug

defaults
    mode http
    option abortonclose
    option forwardfor
    option http-server-close
    option httplog
    timeout connect 9s
    timeout client 60s
    timeout server 30s
    stats enable
    stats uri /stats
    stats realm Haproxy\ Statistics
    stats auth username:nopass

frontend www-http
    bind *:80

    maxconn 64000
    http-request set-header U-Request-Source %[src]
    reqadd X-Forwarded-Proto:\ http

    errorfile 503 /var/www/html/sorry.html

    acl host_A    hdr_dom(host) -f /etc/A.lst
    acl host_B    hdr_dom(host) -f /etc/B.lst
    use_backend www-A         if host_A
    use_backend www-B         if host_B
    log global

frontend www-https 
    bind *:443 ssl crt /etc/ssl/private/my.pem no-sslv3
    http-request set-header U-Request-Source %[src]
    maxconn 64000
    reqadd X-Forwarded-Proto:\ https

    errorfile 503 /var/www/html/sorry.html

    acl host_A        hdr_dom(host) -f /etc/A.lst
    acl host_B        hdr_dom(host) -f /etc/B.lst

    use_backend www-A if host_A
    use_backend www-B if host_B
    log global


backend www-A
    redirect scheme https if !{ ssl_fc }
    server app1 app1.a.mydomain.com:80 check port 80

backend www-B
    redirect scheme https if !{ ssl_fc }
    server app1 app1.b.mydomain.com:80 check port 80

1 Answer

My first thought was this, from the HAProxy docs:

If Tq is close to 3000, a packet has probably been lost between the client and the proxy. This is very rare on local networks but might happen when clients are on far remote networks and send large requests.

...however, that's typically only true when Tq is really close to 3000 milliseconds. I see this in the logs on trans-continental connections, occasionally, but it's pretty rare. Instead, I suspect what you are seeing is this:

Setting option http-server-close may display larger request times since Tq also measures the time spent waiting for additional requests.

That's the more likely explanation.

You can confirm this by finding one of the "suspect" log entries, and then scrolling up to find a previous one from the same source IP and port.

Examples, from my logs:

Dec 28 20:29:00 localhost haproxy[28333]: x.x.x.x:45062 [28/Dec/2014:20:28:58.623] ...  2022/0/0/12/2034 200 18599 ... 

Dec 28 20:29:17 localhost haproxy[28333]: x.x.x.x:45062 [28/Dec/2014:20:29:00.657] ... 17091/0/0/45/17136 200 19599 ...

Both of these requests are from the same IP address and the same source port -- therefore, this is two requests from the same client connection, separated in time by ~17 seconds (I allow keepalives longer than the default on this particular proxy cluster).
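
(A quick sanity check on those numbers: the first request was accepted at 20:28:58.623 with a total time Tt of 2034 ms, finishing at about 20:29:00.657 -- exactly the accept timestamp of the second entry. The second entry's Tq of 17091 ms is therefore almost entirely idle time spent waiting on the kept-alive connection, and 20:29:00.657 plus its Tt of 17136 ms lands at about 20:29:17.8, matching the syslog timestamp on that line.)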

The Tq timer (above, the values are 2022 ms and 17091 ms) is the "total time to get the client request" -- on the initial request from any given client, this timer stops when the line break at the end of the headers is decoded. But on subsequent requests, this timer also includes the idle time that elapsed after the end of the previous request and before the arrival of the next request. (If I go back further, I find still more requests from the same IP/port pair, until I arrive at the first one, which actually had a Tq of 0, though this won't always be the case.)

If you can backtrack in the logs and find previous requests from the same client IP and port where the times all add up, then this is all you are seeing -- HAProxy is counting the time spent on an open, kept-alive connection, waiting for the next request from the client... so this behavior is quite normal and should not be cause for concern.

Using option http-server-close allows the client-side connection to stay open while closing the server connection after each request. This gives you the advantage of keep-alive connections to the client -- optimizing the (typically) higher-latency leg of the chain -- without tying up back-end server resources with idle connections.

http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.4
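
If the inflated Tq values make the logs harder to interpret, one option (my suggestion only, not something your config requires -- and the 5s value here is just an illustration) is to cap how long HAProxy will wait for a follow-up request on a kept-alive connection, or to disable client-side keep-alive entirely:

    defaults
        # Cap the idle wait for the next request on a kept-alive client
        # connection; past this, HAProxy closes the connection instead of
        # letting the idle time inflate the next request's Tq.
        timeout http-keep-alive 5s

        # Or close both sides after every request, so Tq only ever measures
        # the request itself (at the cost of a new TCP/SSL handshake each time):
        # option forceclose

The trade-off is the same one described above: a shorter keep-alive window saves log confusion and idle resources, but more client requests will pay the connection-setup latency again.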


