
List:       dpdk-users
Subject:    Preliminary AWS ENA Benchmark Results on DPDK
From:       fwefew 4t4tg <7532yahoo@gmail.com>
Date:       2022-03-28 0:03:46
Message-ID: CA+Tq66USfTvFcimMANTJ9j5m86gBYd-Ey507Bn4bZO+8xLh2+Q@mail.gmail.com

https://github.com/rodgarrison/reinvent#benchmarks

I show two c5n.metal boxes with AWS ENA NICs running under DPDK in the same
data center, each running 1 TXQ and 1 RXQ and exchanging 72-byte UDP packets
(32-byte payload):

- transmit around 874,410 packets/sec/queue

If the NIC's TXQ ring is not full, the application code can hand off packets
to the NIC's output buffer as fast as 415 ns/pkt, or some 2.4 million
packets/sec. But once the TXQ ring or HW buffers fill up, throughput drops
about 3x. I will update the link shortly with the RX stats. In short,
performance seems decent except when rings are full.
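
For reference, here is a minimal sketch (not the benchmark code itself) of
the hand-off being timed, assuming a port and TX queue already configured
and started through the usual EAL/ethdev setup; PORT_ID, TXQ, and the retry
policy are illustrative placeholders:

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_pause.h>

#define PORT_ID 0  /* assumed: port already configured and started */
#define TXQ     0  /* single TX queue, as in the benchmark */

/* Hand a burst of prepared mbufs to the NIC's TX ring and time the
 * per-packet hand-off. rte_eth_tx_burst() enqueues only as many
 * packets as the ring has room for; a short return value means the
 * TXQ ring is full, which is where the ~3x throughput drop appears. */
static void
tx_and_measure(struct rte_mbuf **pkts, uint16_t n)
{
    uint64_t start = rte_rdtsc();
    uint16_t sent = 0;

    while (sent < n) {
        uint16_t nb = rte_eth_tx_burst(PORT_ID, TXQ, pkts + sent,
                                       (uint16_t)(n - sent));
        if (nb == 0)
            rte_pause();   /* ring full: wait for the NIC to drain it */
        sent += nb;
    }

    double ns_per_pkt = (double)(rte_rdtsc() - start) * 1e9 /
                        (double)rte_get_tsc_hz() / n;
    printf("handed off %u pkts at %.0f ns/pkt\n", n, ns_per_pkt);
}

The rte_pause() spin is just the simplest back-pressure policy; the numbers
above depend heavily on what the application does when the ring is full.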

But since AWS ENA NICs are virtual (they are not bona fide NIC cards
plugged into the computer's PCI bus), I'm not sure if these numbers are
good, average, or suck. Indeed, as per:

https://www.amazon.com/Data-Plane-Development-Kit-DPDK/dp/0367373955/ref=sr_1_1?keywords=data+plane+development+kit&qid=1648425589&s=books&sprefix=data+plane+devel%2Cstripbooks%2C85&sr=1-1


chapter 12 (virtio), page 236: virtio is at least partially interrupt
driven. I don't know the details, but interrupt-driven I/O surely has to
underperform the vanilla case where the CPU/NIC/PCI work in unison without
all the virtualization technology. I'm not even sure, by extension, whether
the ENA driver is really 100% poll-driven.
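
One way to probe that empirically (a hedged sketch, not from the benchmark
code; it assumes the port was configured with intr_conf.rxq = 1 and
started) is to ask the PMD to arm RX interrupts on a queue. A PMD that
only supports the pure polling path rejects the request:

#include <stdio.h>
#include <rte_ethdev.h>

/* Try to arm RX interrupts on one queue. A purely poll-mode driver
 * returns an error (e.g. -ENOTSUP); a driver with an interrupt-driven
 * RX path accepts it. Assumes the port was configured with
 * intr_conf.rxq = 1 and has been started. */
static void
probe_rx_intr(uint16_t port_id, uint16_t queue_id)
{
    struct rte_eth_dev_info info;

    if (rte_eth_dev_info_get(port_id, &info) == 0)
        printf("driver: %s\n", info.driver_name);

    int ret = rte_eth_dev_rx_intr_enable(port_id, queue_id);
    if (ret == 0) {
        printf("RX interrupts supported: not a purely polled path\n");
        rte_eth_dev_rx_intr_disable(port_id, queue_id);
    } else {
        printf("RX interrupts unavailable (ret=%d): pure polling\n", ret);
    }
}

Note this only shows what the PMD can do, not which path the virtualized
ENA device actually takes under the hood.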

Any context/feedback here would be appreciated.

