List:       squid-dev
Subject:    Re: EAGAIN on bind in commResetFD?
From:       Henrik Nordstrom <hno () squid-cache ! org>
Date:       2003-10-10 5:22:38
Message-ID: Pine.LNX.4.44.0310100721250.8342-100000 () localhost ! localdomain

This question probably belongs more on squid-users than squid-dev, or 
perhaps a FreeBSD list..


To me it smells like you ran out of some kind of socket-related resource, 
preventing new sockets from being established. Maybe you are out of local 
port numbers or something similar..
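
If you want to confirm which resource it is, a quick test is to do the
same thing commBind() does (bind a fresh socket to *:0) in a loop until
the kernel refuses. The errno at the point of failure tells you what ran
out first. A rough diagnostic sketch, nothing Squid-specific and only
compile-tested in my head:

/*
 * Keep opening sockets and binding them to *:0 until the kernel
 * refuses.  EMFILE/ENFILE means the descriptor limit was hit;
 * EAGAIN/EADDRNOTAVAIL points at local ports or other per-socket
 * resources.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

int
main(void)
{
    struct sockaddr_in sin;
    int count = 0;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = INADDR_ANY;
    sin.sin_port = 0;               /* *:0, kernel picks the local port */

    for (;;) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            printf("socket() failed after %d sockets: %s\n",
                   count, strerror(errno));
            break;
        }
        if (bind(fd, (struct sockaddr *) &sin, sizeof(sin)) < 0) {
            printf("bind() failed after %d sockets: %s\n",
                   count, strerror(errno));
            break;
        }
        count++;
    }
    return 0;
}

If it turns out to be the local port range, the net.inet.ip.portrange
sysctls are where FreeBSD keeps that, if I remember correctly.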

Regards
Henrik

On Wed, 8 Oct 2003, Benno Rice wrote:

> Hi.
> 
> I've got a system running 4.0-RELEASE (yes, I know how old that is) and
> squid 2.5-STABLE3.  It handles a rather large amount of traffic (mainly
> via WCCP, although there are a couple of sites going direct to it), and we
> recently had to raise its maximum file descriptors to well over the 7000-odd
> that is the default on FreeBSD.  We then recompiled squid, started everything
> up again, and now we're getting these in the cache.log:
> 
> 2003/10/08 09:18:25| commResetFD: bind: (35) Resource temporarily unavailable
> 2003/10/08 09:18:25| commBind: Cannot bind socket FD 1535 to *:0: (35) Resource temporarily unavailable
> 2003/10/08 09:18:25| commBind: Cannot bind socket FD 6617 to *:0: (35) Resource temporarily unavailable
> 2003/10/08 09:18:25| commResetFD: bind: (35) Resource temporarily unavailable
> 2003/10/08 09:18:25| commBind: Cannot bind socket FD 6618 to *:0: (35) Resource temporarily unavailable
> 2003/10/08 09:18:25| commResetFD: bind: (35) Resource temporarily unavailable
> 2003/10/08 09:18:27| commBind: Cannot bind socket FD 5662 to *:0: (35) Resource temporarily unavailable
> 2003/10/08 09:18:27| commBind: Cannot bind socket FD 5662 to *:0: (35) Resource temporarily unavailable
> 
> It seems to me that the bind call is getting interrupted by
> something and that squid doesn't deal with that very well.  Am I reading
> it right?  If so, is there a way around this?  We're trying to get more
> systems in there to help share the load a bit, but this is hitting pretty
> hard and we'd like at least a band-aid in there until we can get another
> machine up and running alongside it.
> 
> Many thanks.
> 
> 
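
As for a band-aid until the extra box is up: you could make the bind
retry once or twice when it fails with EAGAIN, on the theory that the
shortage is transient.  Untested sketch, generic code rather than the
actual commBind(), and bind_retry() plus the retry/delay numbers are
just made up for the example:

#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

/*
 * Retry bind() a few times on EAGAIN before giving up.  This only
 * hides the symptom; if the resource is really gone the retries
 * will fail too.
 */
static int
bind_retry(int fd, const struct sockaddr *sa, socklen_t len)
{
    int tries;

    for (tries = 0; tries < 3; tries++) {
        if (bind(fd, sa, len) == 0)
            return 0;
        if (errno != EAGAIN)        /* only retry the transient case */
            return -1;
        usleep(10000);              /* back off 10 ms and try again */
    }
    return -1;                      /* still EAGAIN after the retries */
}

Finding the exhausted resource is still the real fix, though.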

