
List:       squid-users
Subject:    RE: [squid-users] accel mode and peering
From:       sean.upton@uniontrib.com
Date:       2001-08-31 20:24:00

At my company, we have 2 peer squid boxes as accelerators, with an Intel
NetStructure 7140 (L4 switch) in front of them; this has the advantage of
not needing to proxy all the outgoing packets from squid back through the
load-balancer, thanks to source-address preservation and out-of-path return
techniques.  This means that packets get sent directly back to the
requesting client from squid, instead of through the load-balancer.

The only problem with this sort of setup is that you need a node-takeover
based HA mechanism, because you sacrifice the LB's ability to take failed
nodes out of the pool; the solution is IP address takeover on the boxes
running squid, with something like heartbeat.
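
For illustration, a minimal heartbeat setup for this kind of IP takeover
might look like the following (node names, the interface, and the service
address are hypothetical, not our actual config):

  # /etc/ha.d/ha.cf on both squid boxes
  keepalive 2          # heartbeat interval in seconds
  deadtime 10          # declare the peer dead after 10s of silence
  udpport 694
  bcast eth0           # send heartbeats out eth0
  node squid1
  node squid2

  # /etc/ha.d/haresources -- squid1 normally owns the service IP;
  # squid2 takes the address (and starts squid) if squid1 dies
  squid1 192.168.1.100 squid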

This works relatively well.  We still have some small issues with this
configuration (on the load-balancer side); once those are worked out, I'll
post some info on this setup to the list.

Sean

-----Original Message-----
From: Brian [mailto:hiryuu@envisiongames.net]
Sent: Thursday, August 30, 2001 1:07 PM
To: Dave A King; 'squid-users@squid-cache.org'
Subject: Re: [squid-users] accel mode and peering


On Thursday 30 August 2001 03:10 pm, Dave A King wrote:
> I am fairly new to squid and have what may be a couple newbie questions.
>  I am attempting to set up multiple squid caches in HTTP accelerator
> mode to cache a pool of HTTP servers.  The way I was thinking of setting
> them up is to load balance the squids in addition to the real HTTP
> servers (using an existing hardware LB device).  The content I'd like to

Hardware LB in front of the squids is good, but consider having the squids 
handle the backend balancing on their own (as parent caches, or with the 
new rproxy patches (http://squid.sourceforge.net/rproxy/backend.html)).  
The squids are very good at detecting overloaded or down backends and 
reassigning the request.  It also reduces the load on the LB switch.
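
As a rough sketch (Squid 2.4-era syntax; the hostnames are hypothetical),
an accelerator balancing across two backends as parents might look like:

  httpd_accel_host my.site.com
  httpd_accel_port 80
  httpd_accel_uses_host_header on

  # both backends as parents; round-robin between them, no ICP queries
  cache_peer backend1.example.com parent 80 0 no-query round-robin
  cache_peer backend2.example.com parent 80 0 no-query round-robin

  # never go direct -- always fetch misses through a parent
  never_direct allow all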

> cache can be rather large (large files are 1MB+, other smaller content
> exists as well).  Each server has ~50GB of disk for the cache, with at
> least 4 squid servers.  The total amount of content on the real servers
> will be 500GB+, via NAS, and I want to try to reduce the NAS traffic.
>
> I'd like to maximize the amount of frequently accessed data that is
> cached, so I was debating using peering to split that content across the
> pool of cache servers.  I've tried configuring ICP peering (unicast)

If maximum caching is the goal, that will help.  Keep in mind, though, 
that by the time you send, receive, and react to ICP, you've added a heap 
of latency to that request.  (That's fine on the client end, where a hit 
at almost any cost is faster than contacting the origin server for a 
miss.)  Assuming the front-end caches have enough space for your most 
popular files, letting them act alone will drastically improve response 
time with only a minor impact on the overall hit rate.

> with each server as a sibling to the others, and digests are enabled.  I
> don't know if this is the best config, but even still, no ICP traffic is
> occurring.  This is what Cache Manager has to say about it:

> Am I missing something with the cache peering?
> Is unicast ICP the preferred method to reach my goal?

What do your cache_peer and cache_peer_access lines look like?
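
For reference, sibling peering over ICP usually looks something like this
(the IP addresses are hypothetical; 3128 is the HTTP port and 3130 the
ICP port):

  cache_peer 10.0.0.2 sibling 3128 3130 proxy-only
  cache_peer 10.0.0.3 sibling 3128 3130 proxy-only
  cache_peer_access 10.0.0.2 allow all

If the ICP port in those lines is 0, or no-query is set, squid won't send 
any ICP queries to that peer.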

	-- Brian

