List: linux-vlan
Subject: [VLAN] Re: [VLAN] Re: [VLAN] port to 2.4.0-test10pre5, modularization, etc..
From: Matti Aarnio <matti.aarnio () zmailer ! org>
Date: 2000-10-26 11:00:57
On Thu, Oct 26, 2000 at 12:18:55AM -0700, Ben Greear wrote:
> Matti Aarnio wrote:
...
> > I also wanted to see if your code can be changed from
> > its current form, where it really modifies the skb_pull()
> > logic in net/ethernet/eth.c::eth_type_trans(), into one
> > where vlan_dev_type_trans() (a misnamed function;
> > vlan_recv() would be a more appropriate name) simply does
> > the processing of the possible VLAN header.
>
> Does tcpdump on eth0, as well as vlan0005 look correct when you pull
> off stuff in the eth.c handler?
Yes. The standard tcpdump in RH 7.0 is broken, though.
It has an incorrect ETHERTYPE for 802.1q frames.
(I just reported that to the RH Bugzilla.)
With that fixed, it works just fine.
> > For various testing reasons I also wanted it modular,
> > and new modularization rules caught me by a surprise
> > a bit, took me an hour to get that into order.
>
> I'm particularly interested in integrating this...
I can send you the salient bits later today.
> > > What advantages does your method yield?
> >
> > The VLAN frame is *protocol* layer on top of whatever transport.
> > Keeping them separate is what I am aiming at.
>
> Not a bad goal. If tcpdump and other things, like maybe firewalling and NAT
> type code, can handle the semi-broken header coming into the vlan device
> (i.e. missing the first 14 bytes), then I'll definitely consider it.
They are handling it already; what is the problem?
> > Otherwise you would need to modify all various xxxx_type_trans()
> > functions not to pull their "hard-headers" from over VLAN frames,
> > but still do that in all other cases - plus vlan_dev_type_trans()
> > would need to know all supported Layer2 frames. :-(
>
> I don't think there will be too many of these methods...right now there
> is only one (eth.c), and the only ones I can think of adding would be for FR
> and ATM encapsulated ethernet/vlan streams. Maybe PPP??
IM(NSH)O there are two possible amounts: "zero" and "too many".
Ethernet bridging over ATM is something which will come into
play later on (among other things).
> > Also I don't like protocols adding new private data objects into
> > devices/sockets, if possible.
>
> I'm getting rid of most of those, like the default_VID stuff. There will
> be one entry into the net_device struct to hold the VLAN private info though.
That is quite ok.
Some of the contortions currently present could be solved
by making the device subsystem into truly bidirectional pipes:
- For outbound there is already a chain of "call this vector
to send this frame"
- For inbound there is *no* vector for Layer2 internal
reception processing.
I mean, all of the device drivers currently call netif_rx()
directly, which is actually the Layer3 entry point.
If we had Layer2->Layer2feature->whatever stacking capability
for reception, then we could handle things like "default_vid"
there by knowing which device delivered the frame. And it would
be trivial to capture all frames coming up via e.g. VLAN handling
without allowing untagged IP protocol frames (e.g. 802.1q
VLAN1 traffic) to leak through to the system.
Oh yes, and the generic device "priv" should be enough once
proper layering is achieved in the reception direction as well.
I do think that bridging (and the diverter) could then be
implemented as Layer-2 "feature" processors, and the present
netdevice contortions would not be needed.
> > > Now, though, I think I'll just have one vlan space per device. The reason
> > > is that w/out bridging in the VLAN code, you will not be able to have
> > > VLAN 5 on two different NICs, unless you allow separate name spaces.
> > > On the other hand, I see no good reason to have only one space for
> > > all devices...
> >
> > Not everybody places GE cards at their machine for VLAN trunking.
> > There might indeed be users who want to have more physical
> > bandwidth into the VLAN switch cloud by having multiple of
> > similar FE cards. Then distribute VLANs into trunks by the
> > expected loads.
>
> How does having a single VLAN space help this? Are you thinking of doing
> load-sharing/bridging in the VLAN code? (I had this for a while, and
> ripped it out. Other methods, higher in the stack,
> can do it better, IMHO.)
eth0.vlan2 10 Mb/s
eth0.vlan3 30 Mb/s
eth0.vlan4 60 Mb/s
eth1.vlan5 60 Mb/s
eth1.vlan6 30 Mb/s
That is, when a single FE isn't enough, why not use parallel
connections, handling a subset of the VLANs on each trunk?
> > > > I have thought of doing cases where router in between two
> > > > VLAN switching clouds is routing among all of those VLANs.
> > > > (More than 4000 VLANs is madness, but never mind..)
> > >
> > > Think VLAN per client, where each client is on DSL, and expected
> > > aggregation is 100Mbps per 2000 clients. With several 100bt or
> > > a Gb NIC, you could probably run 10k clients/VLANs on a large box...
> >
> > DSL concentrators are one extreme possibility, but somewhat
> > unlikely at those sizes.
> > (Price of a PC is *small* compared to other hardware items
> > in DSL DSLAMs, putting two boxes instead of one is peanuts.)
>
> Agreed, but rack space and port count are very important in those
> situations... Either way, does this impact the choice of one
> vlan space v/s space-per-device?
No. Usage requirements determine which space style is needed.
Say there are spaces per interface group; for example,
eth0 and eth2 share the same space, while eth1 has a
different one.
> --
> Ben Greear (greearb@candelatech.com) http://www.candelatech.com
> http://scry.wanfear.com http://scry.wanfear.com/~greear
/Matti Aarnio
_______________________________________________
VLAN mailing list - VLAN@Scry.WANfear.com
http://www.WANfear.com/mailman/listinfo/vlan
VLAN Page: http://scry.wanfear.com/~greear/vlan.html