
List:       nginx
Subject:    Re: Limiting number of client TLS connections
From:       Zero King <l2dy () aosc ! io>
Date:       2024-03-30 7:34:40
Message-ID: 7dcf03ce-5240-40ff-afe9-70df04373ed9 () aosc ! io

Hello,

With the new pass directive committed, I should be able to implement it 
with less overhead, as you suggested: 
https://hg.nginx.org/nginx/rev/913518341c20
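
For reference, here is a rough sketch of what that interim setup might look 
like (the port, zone name, and module file are illustrative, not from the 
thread): a stream server rate-limits in js_access and then hands the accepted 
socket to the http listener with pass, so there is no second full proxy hop.

```nginx
stream {
    js_import limit.js;                  # hypothetical NJS module
    js_shared_dict_zone zone=limits:64k type=number timeout=65s evict;

    server {
        listen 443;
        js_access limit.access;          # allow or deny new connections
        pass 127.0.0.1:8443;             # hand the socket to the http listener
    }
}

http {
    server {
        listen 127.0.0.1:8443 ssl;
        # usual ssl_certificate / ssl_certificate_key and locations here
    }
}
```

Unlike proxy_pass, pass transfers the client connection to the other listen 
socket directly, which is where the lower overhead comes from.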

I'm still trying to push our platform team to implement a firewall, but 
this gives me an interim solution. Thanks a lot!
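
A fixed-window counter along the lines of the njs-examples one might look 
like this in the stream context (the zone name, window length, and limit are 
illustrative; windowKey is pure logic, the rest is nginx wiring):

```javascript
// limit.js -- hypothetical fixed-window limiter for js_access (stream).
// Assumes nginx.conf declares a number-typed shared dict, e.g.:
//   js_shared_dict_zone zone=limits:64k type=number timeout=65s evict;

// Pure helper: map a timestamp to its fixed window's counter key.
function windowKey(nowMs, windowMs) {
    return 'conns@' + Math.floor(nowMs / windowMs);
}

// js_access handler: count this connection against the current window
// and reject it once the window's budget is spent.
function access(s) {
    var LIMIT = 100;                                // max new connections per window
    var key = windowKey(Date.now(), 60000);         // 60-second fixed window
    var count = ngx.shared.limits.incr(key, 1, 0);  // atomic; starts at 0 if absent
    if (count > LIMIT) {
        s.deny();
    } else {
        s.allow();
    }
}

// njs module export, loaded via js_import:
// export default { access };
```

The export line is shown as a comment here; in the actual njs module it would 
be uncommented.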

P.S. I also see that Nginx has introduced an ssl_reject_handshake 
directive. It would be interesting if its behavior could be scripted.
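
For what it's worth, the documented (non-scripted) usage is to enable it on a 
default server so handshakes for unknown server names are aborted (the names 
and file paths below are placeholders):

```nginx
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;          # abort handshakes with unknown SNI
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     example.com.crt;
    ssl_certificate_key example.com.key;
}
```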

On 12/9/23 4:38 AM, J Carter wrote:
> Hello again,
> 
> By coincidence, and since my previous email, someone has kindly submitted a fixed
> window rate limiting example to the NJS examples Github repo.
> 
> https://github.com/nginx/njs-examples/pull/31/files/ba33771cefefdc019ba76bd1f176e25e18adbc67
>  
> https://github.com/nginx/njs-examples/tree/master/conf/http/rate-limit
> 
> The example is for rate limiting in the http context; however, I believe you'd
> be able to adapt it for stream (and your use case) with minor modifications
> (using js_access rather than 'if' as mentioned previously, and setting the key
> to a fixed value).
> 
> Just forwarding it on in case you need it.
> 
> 
> On Sat, 25 Nov 2023 16:03:37 +0800
> Zero King <l2dy@aosc.io> wrote:
> 
> > Hi Jordan,
> > 
> > Thanks for your suggestion. I will give it a try and also try to push
> > our K8s team to implement a firewall if possible.
> > 
> > On 20/11/23 10:33, J Carter wrote:
> > > Hello,
> > > 
> > > A self-contained solution would be to double proxy: first through an nginx
> > > stream server, and then locally back to the nginx http server (with
> > > proxy_pass via a unix socket, or to localhost on a different port).
> > > 
> > > You can implement your own custom rate limiting logic in the stream server
> > > with NJS (js_access) and use the new js_shared_dict_zone (which is shared
> > > between workers) to persistently store rate calculations.
> > > 
> > > You'd have additional overhead from the stream tcp proxy and the njs, but
> > > it shouldn't be too great (at least compared to the overhead of TLS
> > > handshakes).
> > > Regards,
> > > Jordan Carter.
> > > 
> > > ________________________________________
> > > From: nginx <nginx-bounces@nginx.org> on behalf of Zero King <l2dy@aosc.io>
> > > Sent: Saturday, November 18, 2023 6:44 AM
> > > To: nginx@nginx.org
> > > Subject: Limiting number of client TLS connections
> > > 
> > > Hi all,
> > > 
> > > I want Nginx to limit the rate of new TLS connections and the total (or
> > > per-worker) number of all client-facing connections, so that under a
> > > sudden surge of requests, existing connections can get enough share of
> > > CPU to be served properly, while excessive connections are rejected and
> > > retried against other servers in the cluster.
> > > 
> > > I am running Nginx on a managed Kubernetes cluster, so tuning kernel
> > > parameters or configuring a layer 4 firewall is not an option.
> > > 
> > > To serve existing connections well, worker_connections cannot be used,
> > > because it also affects connections with proxied servers.
> > > 
> > > Is there a way to implement these measures in Nginx configuration?
> > > _______________________________________________
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx

