
List:       pfsense-discussion
Subject:    Re: [pfSense] Using two interfaces to access two separate switches that are in the same subnet.
From:       "john () millican ! us" <john () millican ! us>
Date:       2013-06-20 17:29:41
Message-ID: 51C33C05.4090508 () millican ! us

On 6/20/2013 10:49 AM, Adam Thompson wrote:
> I've re-read your emails a few times, and I'm still not quite 100% certain what
> you hoped to accomplish with the two switches.  Based on what I understand of it,
> however, I suggest two options:
> 1) VLAN+LAG ("tagged and trunked")
> 	-higher complexity
> 	-more difficult to troubleshoot
> 	-possibly higher bandwidth (not guaranteed)
> 	-fewer switch ports consumed (as many subnets as you want, only four
> 	 Ethernet ports in total)
> 	-requires cross-stack LAG for full redundancy
> 	-more 'elegant' solution
> 
> 2) multiple Ethernet ports
> 	-simpler to setup and manage
> 	-limited to 1Gbit/sec of traffic
> 	-many switch ports get used up (every new subnet requires two more Ethernet ports)
> 	-no cross-stack LAG involved
> 
> 
> Your switches do not support "stacking", so there's no way to stack them, and
> there's no possibility of cross-stack LAG.  This means that, at a minimum, you
> should have each pfSense box plugged into a separate switch.  Stacking means
> something very different from simply interconnecting two (or more) switches; it
> implies that the switches now share a common control plane and your two switches
> would behave as one large 32-port switch.  (Well, yours won't, but other models
> would.)
> In any layer-2 environment, all switches handling traffic on the same IP subnet
> must be interconnected, and that means you must deal with Spanning Tree.  With
> only two switches, STP isn't difficult, and HP's implementation of it is sane;
> you do want to ensure something other than the original, basic STP (802.1D) is
> enabled, however.  I think HP ships with 802.1w (usually called RSTP) enabled by
> default.
> When creating a 2-member LAG, you don't actually get a 2Gbit/sec connection; what
> you get is the ability to handle more traffic between more pairs of hosts.  Per
> the 802.3ad (LACP) specification, any single conversation between a pair of hosts
> is limited to using only one of the LAG member links - so no single conversation
> can ever exceed the originally-available speed.  Typically, LACP implementations
> use a hashing algorithm based on the source and destination MAC addresses to
> decide which link to send the traffic across.  What you gain is redundancy, and
> the capacity to handle more conversations between more pairs of hosts.
> Incidentally, this technique works much better between switches than between
> hosts and switches.  You should always use LAGs to interconnect your switches, if
> you have enough ports to do so!  (STP works fine over LAGs, don't worry about
> that.)
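
[A minimal Python sketch of that hash-based member selection, for illustration
only -- the MAC addresses and the toy hash below are made up, but it shows why a
single conversation is always pinned to one member link:]

# Sketch: hash-based LAG member selection (illustrative only).
# A real switch does this in hardware and may also mix in IP/port fields;
# the point is that the same src/dst pair always hashes to the same member
# link, so one conversation can never use more than one link's bandwidth.

def lag_member(src_mac, dst_mac, members):
    """Pick one LAG member link for a given src/dst MAC pair."""
    key = bytes.fromhex(src_mac.replace(":", "")) + bytes.fromhex(dst_mac.replace(":", ""))
    return members[sum(key) % len(members)]   # toy hash: sum of the address bytes

links = ["1/1", "1/2"]                         # a 2-member LAG
flow_a = ("00:11:22:33:44:55", "66:77:88:99:aa:bb")
flow_b = ("00:11:22:33:44:56", "66:77:88:99:aa:bb")

print(lag_member(*flow_a, links))   # every packet of flow A -> the same link
print(lag_member(*flow_b, links))   # flow B may land on the other link
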
> If you're using VMware, forget about using "teamed" connections or LAG of any
> kind - use failover NICs instead.  VMware has some excellent (albeit
> disappointing) documents on how to architect a vSphere cluster for maximum
> redundancy.
> All in all, I would recommend using three independent Ethernet ports per pfSense
> box instead of using LAGs: 1 for mgmt., 1 for WAN, 1 for service LAN.  (It might
> make sense to use the LAN port as your management port, if that's the IP subnet
> you want to manage pfSense from; then you're back down to two ports per pfSense
> box.)  Plug all two/three links into the same switch, into ports configured to
> the appropriate VLANs.  This might not protect, in the worst case, against the
> failure of a single Ethernet port or patch cable, but will dramatically simplify
> troubleshooting when something doesn't work.  The problem scenario with LAGs is
> when something has gone wrong with either LACP or the VLAN tagging, but you need
> both LACP and VLAN tagging to work correctly in order to reach the pfSense box in
> order to diagnose the problem: catch-22.  Of course, if you need the pfSense
> boxes to route between more than two to three VLANs, then you probably should go
> back to tagged-and-trunked in order to conserve Ethernet ports.  Keep a separate
> Management port configured to work around the troubleshooting catch-22 mentioned
> above.
> If your pfSense boxes are virtualized, there's a noticeable performance penalty
> to using the LAG+VLAN technique.  Also, virtual Ethernet ports don't cost you
> anything anyway.
> Feel free to poke holes in this and ask questions!
> 
> -Adam Thompson
> athompso@athompso.net

Adam,
Thank you for all your help; I do appreciate it.  I am running 
cloudstack using KVM for all hypervisors for a semi private cloud and 
the pfSense boxes are not virtualized.
What I ended up doing (and it may be all wrong but it is working great 
so far) is to use LAGG with round robin.  So two NICs on each pfSense 
box, the two HP switches, and two NICs on each Linux host bonded with 
balance-rr as the mode.  On each of the hosts and the pfSense boxes, one 
NIC is in SW1 and the other is in SW2 (with no interconnect of any type 
between the switches).  All hosts are on the same 192.168.100.0/24 
network.  I have done some VERY limited testing for port failure and it 
seems to work as expected such that if I unplug an Ethernet cable 
anywhere, all traffic continues to flow with little or no packet loss.  
I will be testing with more volume over the weekend to our application 
and will try the same failure simulation (unplug the wire) and see what 
happens.  Unless I am missing something, which as we know is very 
possible, I may well have found my answer.  I believe this is a bit more 
reasonable, due to the fact that it is working, than my earlier "not 
going to work" idea, do you agree?  When I get some more hardware budget 
I will be adding NICs and switches and separating out the traffic, but 
that is just not possible yet.  I just had to ensure that I could set up 
a mirrored set of VMs, load balance between them with pfSense, and not 
have a "single point of failure" in the network or in the hardware.  
Any one thing fails and we may be slower than normal but we will still 
be running.
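
[For contrast, a rough Python sketch of how round-robin distribution differs from
the hash-based selection Adam described -- illustrative only; the real packet
scheduling happens in FreeBSD's lagg(4) driver and the Linux bonding driver:]

# Sketch: round-robin vs. hash-based link selection (illustrative only).
# With roundrobin/balance-rr, successive packets of the SAME flow alternate
# across both NICs, so one flow can use both links (with the possibility of
# packet reordering); with LACP-style hashing, one flow stays on one link.
from itertools import count

links = ["nic0", "nic1"]
rr = count()

def pick_round_robin():
    # every packet goes to the next link in turn, regardless of flow
    return links[next(rr) % len(links)]

def pick_hashed(src_mac, dst_mac):
    # the same src/dst pair always maps to the same link
    key = (src_mac + dst_mac).encode()
    return links[sum(key) % len(links)]

flow = ("00:0c:29:aa:bb:cc", "00:0c:29:dd:ee:ff")
print([pick_round_robin() for _ in range(4)])   # ['nic0', 'nic1', 'nic0', 'nic1']
print([pick_hashed(*flow) for _ in range(4)])   # the same link four times
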
Thanks again,
JohnM

<snip>



_______________________________________________
List mailing list
List@lists.pfsense.org
http://lists.pfsense.org/mailman/listinfo/list


