
List:       openvswitch-discuss
Subject:    [ovs-discuss] vxlan udp csum and openstack ovs performance
From:       Jesse Gross <jesse@kernel.org>
Date:       2016-02-25 17:52:13
Message-ID: <CAEh+42hJfrhVcyu4Xq+MtcUwsph-9j6kpteJkeczf6MgVY0G1A@mail.gmail.com>

On Thu, Feb 25, 2016 at 5:14 AM, kldeng <kldeng05@gmail.com> wrote:
> Hi, ALL.
>
> Recently, we have been trying to fix a poor VXLAN throughput issue by
> applying the series of patches @Ramu posted to the ovs-dev mailing list:
> http://openvswitch.org/pipermail/dev/2015-August/059337.html
>
> Our test environment is as below.
> ------------------------------------------
> OVS: 2.4.0
> Kernel: 3.18.22
>
> Topology:
> ------------------------------------------
> vm1 -- bridge -- vxlan -- 10Gnic -- 10Gnic -- vxlan -- bridge -- vm2
>
> vxlan port settings:
> ------------------------------------------
> Port "vxlan-0acd614d"
> Interface "vxlan-0acd614d"
> type: vxlan
> options: {csum="true", df_default="true", in_key=flow,
> local_ip="10.205.97.135", out_key=flow, remote_ip="10.205.97.77"}
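>
> [Editor's note: the port settings quoted above correspond roughly to an
> ovs-vsctl invocation like the following. This is a sketch; the bridge name
> br-tun is an assumption (the actual bridge is not named in the post), while
> the port name, options, and IPs are taken from the quoted output.]

```shell
# Hypothetical reconstruction of the quoted port configuration.
# "br-tun" is an assumed bridge name; everything else matches the post.
ovs-vsctl add-port br-tun vxlan-0acd614d -- \
    set interface vxlan-0acd614d type=vxlan \
    options:csum=true options:df_default=true \
    options:in_key=flow options:out_key=flow \
    options:local_ip=10.205.97.135 options:remote_ip=10.205.97.77
```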
>
>
> Patch applied:
> ------------------------------------------
> --- a/src/net/openvswitch/vport-vxlan.c
> +++ b/src/net/openvswitch/vport-vxlan.c
> @@ -100,6 +100,7 @@ static struct vport *vxlan_tnl_create(const struct vport_parms *parms)
>  	struct nlattr *a;
>  	u16 dst_port;
>  	int err;
> +	u32 vxlan_sock_flags = 0;
>  
>  	if (!options) {
>  		err = -EINVAL;
> @@ -122,7 +123,8 @@ static struct vport *vxlan_tnl_create(const struct vport_parms *parms)
>  	vxlan_port = vxlan_vport(vport);
>  	strncpy(vxlan_port->name, parms->name, IFNAMSIZ);
>  
> -	vs = vxlan_sock_add(net, htons(dst_port), vxlan_rcv, vport, true, 0);
> +	vxlan_sock_flags |= VXLAN_F_UDP_CSUM;
> +	vs = vxlan_sock_add(net, htons(dst_port), vxlan_rcv, vport, true, vxlan_sock_flags);
>  	if (IS_ERR(vs)) {
>
>
> However, it doesn't seem to work: VXLAN throughput is still very low
> (about 1.7 Gbps).
>
> After investigating, we found that GRO does appear to be effective for
> VXLAN traffic, since we can see aggregated packets on the receiver NIC,
> and the function call graph also confirms this.
>
> - 0.60% 0.12% vhost-31236 [kernel.kallsyms] [k] tcp_gro_receive
>    - tcp_gro_receive
>       - 99.74% tcp4_gro_receive
>            inet_gro_receive
>            vxlan_gro_receive
>            udp_gro_receive
>            udp4_gro_receive
>            inet_gro_receive
>            dev_gro_receive
>            napi_gro_receive
>            ixgbe_clean_rx_irq
>            ixgbe_poll
>            net_rx_action
>    + __do_softirq
>
> However, on the qvo and qvb interfaces, the packets are fragmented again.
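
[Editor's note: a call graph like the one quoted above can be captured with a
perf sampling session along these lines. This is a sketch; the 10-second
system-wide recording window is an arbitrary choice, and root privileges are
assumed.]

```shell
# Record call graphs on all CPUs while the VXLAN traffic is running,
# then look for vxlan_gro_receive in the reported stacks.
perf record -a -g -- sleep 10
perf report
```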

If the problem is that packets are getting aggregated and then
segmented again, the issue is likely the same as the one here:
http://openvswitch.org/pipermail/dev/2016-January/064445.html

I'm working on a solution.
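
[Editor's note: one way to see where the re-segmentation happens is to compare
the offload settings of the interfaces along the path. This is a sketch; the
physical NIC name eth0 and the qvoXXXX placeholder are assumptions (qvo/qvb
follow the OpenStack naming mentioned in the quoted message), and the exact
feature names vary by kernel version.]

```shell
# Tunnel-related segmentation offloads on the physical NIC (name assumed):
ethtool -k eth0 | grep -E 'gro|gso|tx-udp'

# GRO/GSO state on the veth toward the VM (replace qvoXXXX with a real port):
ethtool -k qvoXXXX | grep -E 'gro|gso'
```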

