
List:       haproxy
Subject:    Connection limiting & Sorry servers
From:       Boštjan_Merčun <bostjan.mercun () dhimahi ! com>
Date:       2009-07-31 7:19:23
Message-ID: 1249024763.7337.3.camel () b-laptop

Dear haproxy list,

This message will be a bit long; I hope somebody will read it anyway
and give me their opinion.
I am testing HAProxy in our load-balanced environment, which currently
runs on keepalived. The main reasons for trying HAProxy are connection
limiting and ACLs, which keepalived doesn't have.
We have more than 40 sites load balanced across 6 web servers, and we
have a problem that from time to time one of the sites gets a huge
amount of traffic - much more than the servers can handle.

I hoped I would be able to automate this with HAProxy, but I have some
problems.
I would like to keep the limits as centralized as possible, which is
why I created one backend for all sites. That way I can limit the
traffic on the backend and let the frontends use whatever they can.
The problem is that if I create the same rules for all frontends, they
all behave the same way: when the ACLs become true, all sites stop
working, and then all start again. I would like every site to keep
working even if one uses a lot of resources (for example: one site
using 80% of all resources and all the others sharing the remaining
20%, or something like that).

My configuration is like that:

frontend my_site1
        bind XXX.XXX.XXX.1:8880
        default_backend main
        acl my_site_toofast be_sess_rate(main) gt 6
        acl my_site_toomany connslots(main) lt 10
        acl my_site_slow fe_sess_rate lt 1
        use_backend sorry if my_site_toomany or my_site_toofast !my_site_slow

frontend my_site2
        bind XXX.XXX.XXX.2:8880
        default_backend main
        acl my_site2_toofast be_sess_rate(main) gt 6
        acl my_site2_toomany connslots(main) lt 10
        acl my_site2_slow fe_sess_rate lt 1
        use_backend sorry if my_site2_toomany or my_site2_toofast !my_site2_slow


backend main
        option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
        server web1 YYY.YYY.YYY.1 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1
        server web2 YYY.YYY.YYY.2 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1
        server web3 YYY.YYY.YYY.3 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1
        server web4 YYY.YYY.YYY.4 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1
        server web5 YYY.YYY.YYY.5 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1
        server web6 YYY.YYY.YYY.6 check port 8880 inter 10s fall 2 rise 3 weight 10 maxconn 50 maxqueue 1

backend sorry
        option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
        server sorry YYY.YYY.YYY.7:1234 check inter 10s fall 5 rise 5

When "be_sess_rate(main) gt 6" becomes true, all sites stop working -
I get the sorry page everywhere.
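One thing I am considering (untested, just from reading the docs): judging each site by its own frontend rate instead of the shared backend rate, so that only the busy site's visitors get diverted. The ACL names here are just placeholders:

```
frontend my_site1
        bind XXX.XXX.XXX.1:8880
        default_backend main
        # per-frontend rate, so a burst on another site does not trip this one
        acl site1_toofast fe_sess_rate gt 6
        # shared capacity check stays on the common backend
        acl backend_full connslots(main) lt 10
        use_backend sorry if site1_toofast or backend_full
```

But I suspect this runs into the same fe_sess_rate problem I describe below.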
I don't want to split the backend per site, because then I lose
control over the total traffic; setting each one to 1 request per
second would limit the individual sites too much, and when all of them
were busy at once, together they would still flood the servers.

I cannot use fe_sess_rate to limit the traffic to a frontend either,
because the way it works, when one site gets a lot of traffic, no user
actually gets through to it at all. Or is there a way to redirect most
users to the sorry servers but still let some users through to the
site, even when the connection rate on the frontend is higher than the
configured fe_sess_rate?
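If I read the docs correctly, "rate-limit sessions" on a frontend might help here: it delays accepting connections above the configured rate instead of refusing them, so users queue up rather than being cut off completely. Maybe combining it with a much higher sorry-page threshold would let some users through. An untested sketch (the thresholds are just guesses):

```
frontend my_site1
        bind XXX.XXX.XXX.1:8880
        default_backend main
        # above 6 new sessions/s, further connections wait instead of failing
        rate-limit sessions 6
        # only divert to the sorry server when the rate is badly over
        acl way_too_fast fe_sess_rate gt 20
        use_backend sorry if way_too_fast
```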

Can someone with more experience than me advise on the best way to
handle this?

I would like to limit the number of concurrent users on the real
servers and the number of new connections that can be created per unit
of time. I would also like to keep the sites as available as possible,
meaning that when one gets a lot of traffic, the others still work
with some limited amount of resources, while the busy site gets
whatever is left.
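If I understand minconn and fullconn right, dynamic server limits might also get close to this: each server's effective maxconn ramps from minconn up to maxconn as the total load on the backend approaches fullconn, so servers are not saturated under light load but the full capacity is used when things get busy. A sketch (the numbers are guesses):

```
backend main
        option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
        # 6 servers x maxconn 50 = at most 300 concurrent connections;
        # with minconn set, the per-server limit scales between minconn
        # and maxconn depending on backend load relative to fullconn
        fullconn 300
        server web1 YYY.YYY.YYY.1 check port 8880 inter 10s minconn 10 maxconn 50 maxqueue 1
        server web2 YYY.YYY.YYY.2 check port 8880 inter 10s minconn 10 maxconn 50 maxqueue 1
```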

If I can clarify the situation any better, please let me know.

Thank you and best regards

                Bostjan



