
List:       kvm
Subject:    [ kvm-Bugs-3022896 ] bad network performance with 10Gbit
From:       "SourceForge.net" <noreply@sourceforge.net>
Date:       2010-06-30 11:30:01
Message-ID: E1OTvUP-00076Q-4L@sfs-web-7.v29.ch3.sourceforge.com

Bugs item #3022896, was opened at 2010-06-29 18:39
Message generated for change (Comment added) made by jessorensen
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=3022896&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: intel
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: zerocoolx ()
Assigned to: Nobody/Anonymous (nobody)
Summary: bad network performance with 10Gbit

Initial Comment:
Hello,
I have trouble with the network performance inside my virtual machines.

My KVM host machine is connected to a 10Gbit network. All interfaces are configured
with an MTU of 4132. On this host I have no problems and I can use the full bandwidth:

CPU_Info:
2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc
arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor
ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida
tpr_shadow vnmi flexpriority ept vpid

KVM Version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM Host Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM Guest Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Guest OS:
Ubuntu 10.04 LTS
Codename: lucid


# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  18.8 GBytes  2.69 Gbits/sec
[  5]  0.0-60.0 sec  15.0 GBytes  2.14 Gbits/sec
[  6]  0.0-60.0 sec  19.3 GBytes  2.76 Gbits/sec
[  3]  0.0-60.0 sec  15.1 GBytes  2.16 Gbits/sec
[SUM]  0.0-60.0 sec  68.1 GBytes  9.75 Gbits/sec


Inside a virtual machine I don't reach this result:

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-60.0 sec  5.65 GBytes   808 Mbits/sec
[  4]  0.0-60.0 sec  5.52 GBytes   790 Mbits/sec
[  5]  0.0-60.0 sec  5.66 GBytes   811 Mbits/sec
[  6]  0.0-60.0 sec  5.70 GBytes   816 Mbits/sec
[SUM]  0.0-60.0 sec  22.5 GBytes  3.23 Gbits/sec

I can only use 3.23 Gbit/s of the 10 Gbit/s. I use the virtio driver for all of my VMs,
but I have also tried the e1000 NIC device instead.
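
For reference, the NIC model is chosen on the qemu-kvm command line; the two variants
look roughly like this (interface name, netdev id and memory size are placeholders, not
my actual configuration):

qemu-system-x86_64 -enable-kvm -m 4096 \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-pci,netdev=net0      # paravirtual virtio NIC

qemu-system-x86_64 -enable-kvm -m 4096 \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device e1000,netdev=net0               # emulated Intel e1000, for comparison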

When I start the iperf performance test on multiple VMs simultaneously, I can use the
full bandwidth of the KVM host's interface, but a single VM can't use the full
bandwidth. Is this a known limitation, or can I improve this performance?

Does anyone have an idea how I can improve my network performance? This is very
important to me, because I want to use the network interface to boot all VMs via AOE
(ATA over Ethernet).
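
For context, attaching and mounting an AoE export looks roughly like this (device and
mount point names are placeholders, and a filesystem is assumed to already exist on
the target):

modprobe aoe                       # load the ATA-over-Ethernet initiator
aoe-discover                       # probe the network for exported targets
aoe-stat                           # list discovered devices, e.g. e0.1
mount /dev/etherd/e0.1 /mnt/aoe    # mount the AoE block device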

If I mount a hard disk via AOE inside a VM, I only get these results:
Write  |CPU |Rewrite |CPU |Read   |CPU
102440 |10  |51343   |5   |104249 |3

On the KVM host I get these results on a mounted AOE device:
Write  |CPU |Rewrite |CPU |Read   |CPU
205597 |19  |139118  |11  |391316 |11

If I mount the AOE device directly on the KVM host and put a virtual hard disk file on
it, I get the following results inside a VM using this hard disk file:
Write  |CPU |Rewrite |CPU |Read   |CPU
175140 |12  |136113  |24  |599989 |29
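
That third setup is roughly the following (paths, image size and format are
placeholders, not my exact configuration):

mount /dev/etherd/e0.1 /srv/aoe                   # AoE device mounted on the host
qemu-img create -f raw /srv/aoe/guest01.img 20G   # disk image stored on the AoE mount
qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive file=/srv/aoe/guest01.img,if=virtio      # guest uses the image via virtio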

----------------------------------------------------------------------

> Comment By: Jes Sorensen (jessorensen)
Date: 2010-06-30 13:30

Message:
If you want more speed you are going to need to look at vhost_net. However, support
for this isn't in your 2.6.32 kernel; you need something more recent, plus a matching
qemu-kvm to go with it.

This isn't really a bug; it would be better if you had a look at vhost_net and took
it to a mailing list discussion.
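
Roughly, enabling it looks like this once you have a recent enough kernel and
qemu-kvm (interface names and sizes below are only placeholders):

modprobe vhost_net                            # host-side acceleration module
qemu-system-x86_64 -enable-kvm -m 4096 \
  -netdev tap,id=net0,ifname=tap0,vhost=on \
  -device virtio-net-pci,netdev=net0          # virtio guest NIC backed by vhost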

Cheers,
Jes


----------------------------------------------------------------------

Comment By: Jes Sorensen (jessorensen)
Date: 2010-06-29 18:52

Message:
What kind of CPU load are you seeing when running at those kinds of rates?

There has been some work in the virtio-ring handling code that might improve the
situation slightly, but I don't think that is in the Ubuntu kernel. Getting 3.2 Gbit/s
from within a guest isn't actually bad; the packet rates over 10GigE are insane.

You may also want to look at the virtio ring sizes; it could be that they aren't big
enough for that kind of packet rate.
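
To get a first impression of the load, something along these lines on the host while
iperf runs would help (assumes the sysstat tools; the qemu process name may differ on
your system):

mpstat -P ALL 1                               # per-core utilisation on the host
pidstat -u -p $(pidof qemu-system-x86_64) 1   # CPU time of the qemu-kvm process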

Cheers,
Jes


----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=3022896&group_id=180599
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

