List:       openvswitch-dev
Subject:    [ovs-dev] vhost-user performance issue while using I350 NIC
From:       Traynor, Kevin <kevin.traynor@intel.com>
Date:       2015-06-29 20:57:26
Message-ID: BC0FEEC7D7650749874CEC11314A88F74517DF87@IRSMSX104.ger.corp.intel.com


> -----Original Message-----
> From: dev [mailto:dev-bounces@openvswitch.org] On Behalf Of rajeev satya
> Sent: Monday, June 29, 2015 5:54 PM
> To: dev@openvswitch.org
> Subject: [ovs-dev] vhost-user performance issue while using I350 NIC
> 
> Hi All,
>         While sending 1G bidirectional traffic of 64-byte packets, I see low
> performance for a Phy-VM-Phy setup using OVS with DPDK. I assigned 1 core to
> the vswitchd process and also did the performance tuning step for setting
> core affinity as mentioned in INSTALL.DPDK.md, but I still get only around
> 1.1G throughput. When the same configuration is used for Phy-Phy, I observe
> good performance. By assigning more cores to vswitchd, I can also get good
> performance. But I want to know whether it is possible to get good
> performance with a single core.
> 
> Following are the Platform and setup details:
> NOTE: Used the latest ovs master and DPDK 2.0.0
> 1. Intel Xeon E5-2603 v3 (2 sockets)
> 2. hugepagesz=1G, hugepages=8, isolcpus=1,2,3,4,5,6,7,8
> 3. Bound two I350 NICs to the igb_uio driver.
> 4. Ensured that the DPDK ports and the cores assigned to vswitchd are
> mapped to the same socket (also included 'socket-mem 4096').
> 5. Brought up OVS+DPDK as described in INSTALL.DPDK.md for the vhost-user
> implementation.
> 6. Brought up a VM using qemu with 4 vcpus and ran DPDK l2fwd inside it.
> 7. Used DPDK Pktgen to pump 1G bidirectional traffic of 64-byte packets.
> 
> I observe that, even though 1G bidirectional traffic is pumped, the rate
> remains 1100/1100. I am really not sure why each NIC is not transmitting
> beyond 550 Mbps. When I use the same configuration for Phy-Phy I see a
> rate of 2000/2000.
> 
> Can you please let me know if I should make any I350-specific changes
> in the code?
> All the RX/TX descriptor values of my I350 NIC are set to their defaults
> on my Linux host. Should I do any tuning on the NIC to increase
> performance?
> 
> I'm new to Open vSwitch and want to learn the internals. It would be really
> helpful if you could point me to the packet path in the source code for the
> Phy-VM-Phy scenario, so I can get a clear understanding and work on
> improving performance.

Ballpark, the figures you have look correct given that the E5-2603 is a
1.6 GHz part and the code path is currently CPU-bound. As you mentioned,
you can increase throughput by adding another pmd/core. We are looking into
optimizations of this code path in OVS and DPDK which should increase
single-core performance over the coming months.
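
To put rough numbers on "CPU bound", here is a back-of-the-envelope sketch
(assuming the ~1100 Mbps figure is the aggregate forwarded rate of 64-byte
frames; the 20-byte overhead is Ethernet preamble plus inter-frame gap, and
the cycle figure is only a budget, not a measurement):

```shell
FRAME_BITS=$(( (64 + 20) * 8 ))          # 672 bits per 64-byte frame on the wire
PPS_LINE=$(( 1000000000 / FRAME_BITS ))  # ~1.49 Mpps for one 1G direction
PPS_SEEN=$(( 1100000000 / FRAME_BITS ))  # ~1.64 Mpps at the observed ~1100 Mbps
CYCLES=$(( 1600000000 / PPS_SEEN ))      # per-packet cycle budget on a 1.6 GHz core
echo "$PPS_LINE $PPS_SEEN $CYCLES"       # 1488095 1636904 977
```

Under a thousand cycles per packet is tight for the Phy-VM-Phy path (NIC rx,
flow lookup, vhost copy into the guest and back), which is consistent with
throughput scaling when you add a second pmd core.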

If you want to try tuning the NIC rx/tx queue config, you can modify the
code here:
https://github.com/openvswitch/ovs/blob/master/lib/netdev-dpdk.c#L451-462
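
On your question about the kernel defaults: while a port is still bound to
the kernel igb driver you can inspect and change its ring sizes with ethtool
(a diagnostic sketch; "eth0" is a placeholder for your I350 interface name).
Note that once the port is bound to igb_uio these settings no longer apply --
DPDK uses the descriptor counts in the netdev-dpdk.c code linked above.

```shell
# Show current and maximum rx/tx ring sizes under the kernel igb driver:
ethtool -g eth0
# Optionally raise them before experimenting (the I350 supports up to 4096):
ethtool -G eth0 rx 1024 tx 1024
```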

Other things you could try: check the core affinitization, enable
hyper-threading and use 2 PMDs/logical cores (still 1 physical core), and
disable mergeable buffers on the vhost interface.
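
For example (a sketch, not verified on your setup; this only applies if your
CPU exposes hyper-threading, the sibling numbering and mask value depend on
your topology, and mrg_rxbuf is a qemu virtio-net device option):

```shell
# Find the hyper-thread sibling of the isolated core (standard sysfs path):
cat /sys/devices/system/cpu/cpu1/topology/thread_siblings_list
# If it reports e.g. "1,9", set a PMD mask with bits 1 and 9 set
# (2^1 + 2^9 = 0x202):
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x202
# Disable mergeable rx buffers on the guest's virtio-net device (qemu side):
#   -device virtio-net-pci,netdev=net0,mrg_rxbuf=off
```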

You can follow the packet code path starting from here:
https://github.com/openvswitch/ovs/blob/master/lib/dpif-netdev.c#L2694

> 
> Thanks in advance.
> 
> Regards,
> Rajeev.
> _______________________________________________
> dev mailing list
> dev@openvswitch.org
> http://openvswitch.org/mailman/listinfo/dev
