List:       openvswitch-discuss
Subject:    [ovs-discuss] Question on sending jumbo frames
From:       jesse@nicira.com (Jesse Gross)
Date:       2014-01-30 20:37:53
Message-ID: CAEP_g=-KgGZ9dx6HMTZoEy3aD8U2b8crwHH=ze2R+8E+B+ShuA@mail.gmail.com

On Mon, Jan 27, 2014 at 11:16 PM, Zhou, Han <hzhou8@ebay.com> wrote:
> Hi Jesse,
>
> Now I changed eth0 MTU on guest to 1400 and eth0/br0 MTU on host back to 1500.
> Both the guest and host eth0 have TSO and GSO enabled.
>
> With an iperf TCP test I can see large packets (64k) on the vnet interface, and on br0
> the packets are already fragmented to 14xx according to the guest's MTU (plus the GRE
> tunnel header).
>
> Is this fragmentation done by OVS? I didn't see any fragmentation handling in
> datapath/vport_gre.c. gre_handle_offloads() is invoked there, but it doesn't seem to do
> any fragmentation. Could you help explain which part of the system performs the
> inner packet fragmentation, and how the guest MTU information is passed to it?

The MTU requested by the guest is passed as metadata along with the
packet data. In a non-tunneled environment this metadata would go to
the NIC, which performs TSO to segment the packet to the correct size.
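
For illustration, here is a minimal, hypothetical C sketch (not the
actual OVS datapath code) of where that metadata lives on Linux: a
GSO skb carries the guest's requested segment size in
skb_shinfo(skb)->gso_size, which is what a TSO-capable NIC (or
software GSO) uses to cut the packet down. The helper name
guest_segment_size() is made up for this example.

/* Hedged sketch: read the per-segment size carried as offload
 * metadata on an skb handed over by a guest with TSO/GSO enabled. */
#include <linux/skbuff.h>

static unsigned int guest_segment_size(const struct sk_buff *skb)
{
        /* gso_size is the payload size per segment, derived from the
         * guest's MSS (and therefore its MTU); TSO/GSO cuts the
         * oversized packet into pieces of this size. */
        if (skb_is_gso(skb))
                return skb_shinfo(skb)->gso_size;

        /* Non-GSO packets are already no larger than the guest's MTU. */
        return skb->len;
}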

Fragmentation is set up by gre_handle_offloads() and handled by OVS
compatibility code in this case, so any inner fragmentation happens
before your tcpdump on br0.
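
For illustration only, here is a hedged sketch of what that software
segmentation roughly looks like; segment_before_encap() is a made-up
name, but skb_gso_segment() is the kernel helper that splits an
oversized skb into gso_size-sized segments, which is why the inner
packets already match the guest's MTU by the time tcpdump sees them
on br0.

/* Hedged sketch (not the actual compat code): segment an oversized
 * GSO skb in software before a GRE header is pushed onto each piece. */
#include <linux/err.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *segment_before_encap(struct sk_buff *skb,
                                            netdev_features_t features)
{
        struct sk_buff *segs;

        if (!skb_is_gso(skb))
                return skb;             /* already small enough */

        /* Returns a list of skbs, each carrying at most gso_size
         * bytes of payload (i.e. the guest's MSS). */
        segs = skb_gso_segment(skb, features);
        if (IS_ERR(segs)) {
                kfree_skb(skb);
                return NULL;
        }

        consume_skb(skb);               /* original oversized skb done */
        return segs;                    /* caller adds GRE to each segment */
}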

> In addition, setting the MTU of br-int doesn't affect the results (e.g. setting the br-int
> MTU to 1000 still results in a packet size of 14xx). So what would the br-int MTU affect?

br-int is connected to the hypervisor stack and sits before tunneling,
so no packets are flowing over it in this case.
