I am using an Apache httpd instance as a proxy in front of multiple Java Tomcat instances; Apache acts as a load balancer for the Tomcat instances.
The Apache config basically looks as follows:
<Proxy balancer://mycluster>
BalancerMember ajp://host1:8280 route=jvmRoute-8280
BalancerMember ajp://host2:8280 route=jvmRoute-8280
BalancerMember ajp://host3:8280 route=jvmRoute-8280
</Proxy>
<VirtualHost *:80>
ProxyPass / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
</VirtualHost>
This basically works when the AJP ports are configured in the Tomcat instances. Requests are sent to one of the hosts and the load is distributed across the Tomcat instances.
However, I see very long delays that seem to be caused inside httpd whenever one of the hosts is unavailable: Apache does not appear to remember that a host is down, and repeatedly tries to send requests to the missing host instead of routing them to one of the available hosts and retrying the failed host some time later.
Is there a way to configure mod_proxy et al. to support such a failover scenario, i.e. to run multiple hosts without huge delays when one of them fails? Preferably Apache would periodically check in the background which hosts are down and not route any requests to them.
I did find HAProxy, which seems better suited for this kind of thing, but I would prefer to stick with Apache for a number of unrelated reasons.
In the meantime I found out that part of my problem was caused by clients which kept their connections open endlessly, so that no more connections/threads were available.
So I am changing the question to: what configuration options would you use to minimize the effect of this, i.e. allow many open connections, or close them quickly? Otherwise my current config sounds vulnerable to a very easy DoS attack.
As a reminder: ProxyPass creates the reverse proxy, while ProxyPassReverse intercepts redirect headers (such as Location) in backend responses and rewrites them to match the Apache proxy server. To the client, a reverse proxy (or gateway) appears just like an ordinary web server; the client makes ordinary requests for content in the namespace of the reverse proxy.
It seems you have forgotten the ping parameter (for AJP workers it performs a CPING/CPONG check before forwarding a request; for HTTP workers it sends a 100-Continue to the backend).
Like so:
<Proxy "balancer://www">
BalancerMember "http://192.168.0.100:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.101:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.102:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.103:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.104:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.105:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
BalancerMember "http://192.168.0.106:80" max=128 ttl=300 retry=60 connectiontimeout=5 timeout=300 ping=2
SetEnv proxy-nokeepalive 1
</Proxy>
ProxyPass "/www/" "balancer://www/"
ProxyPassReverse "/www/" "balancer://www/"
Clients will not be able to keep connections open endlessly if the keep-alive timeout is set sensibly. Check your Apache configuration (e.g. server-tuning.conf on SUSE) for the KeepAliveTimeout setting and lower it to something reasonable.
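A minimal sketch of that keep-alive tuning (the directive names are real; the values are illustrative assumptions, not tested recommendations for your workload):

```apache
# Keep persistent connections enabled, but reclaim idle workers quickly.
KeepAlive On
# Close an idle keep-alive connection after 5 seconds
# (older distributions shipped defaults as high as 15).
KeepAliveTimeout 5
# Cap how many requests a single connection may issue before it is closed.
MaxKeepAliveRequests 100
```

This limits how long a slow or malicious client can pin a worker thread between requests, which directly addresses the accidental-DoS concern in the question.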
Your changes to connectiontimeout and retry are indeed what you have to do. I'd lower connectiontimeout, though; 10 seconds is still ages. If the backend is in the same location, why not set it in milliseconds? connectiontimeout=200ms should leave plenty of time to set up the connection.
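Applied to the AJP setup from the question, the combined advice might look like the sketch below (hosts and route values are taken from the question; the timing values are illustrative assumptions):

```apache
<Proxy balancer://mycluster>
    # retry=60: keep a failed member in error state for 60s before trying it again
    # connectiontimeout=200ms: fail fast if the backend does not accept the connection
    # ping=2: send an AJP CPING before each request; skip the member if no CPONG within 2s
    BalancerMember ajp://host1:8280 route=jvmRoute-8280 retry=60 connectiontimeout=200ms ping=2
    BalancerMember ajp://host2:8280 route=jvmRoute-8280 retry=60 connectiontimeout=200ms ping=2
    BalancerMember ajp://host3:8280 route=jvmRoute-8280 retry=60 connectiontimeout=200ms ping=2
</Proxy>
```

With this, a dead host is detected by the cheap CPING (or the 200ms connect timeout) instead of a long request timeout, and is then avoided for the retry interval rather than being probed on every request.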