
List:       libvirt-users
Subject:    [libvirt-users] NUMA revisited
From:       Patrick Meyer <libvirt () the-space ! agency>
Date:       2019-05-02 10:20:41
Message-ID: 77c5484f-32e4-3339-ec90-faf12a98cf7a () the-space ! agency

Moin libvirters,

I'm looking into the current NUMA settings for a large-ish libvirt/qemu-based 
setup, and I ended up with a couple of questions:

1) Has kernel.numa_balancing completely replaced numad or is there still 
a time and place for numad when we have a modern kernel?

2) Should I pin vCPUs to numa nodes and/or use numatune at all, when 
using kernel.numa_balancing?
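
For reference, the kind of explicit pinning I have in mind looks roughly 
like this (vCPU counts, cpusets and nodesets are just placeholders for 
my hosts, not a recommendation):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='2'/>
      <vcpupin vcpu='2' cpuset='4'/>
      <vcpupin vcpu='3' cpuset='6'/>
    </cputune>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>

My worry is that strict pinning like this fights with (or makes 
redundant) whatever kernel.numa_balancing would do on its own.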

3) The libvirt domain xml elements for vcpu and numatune.memory have 
placement options. According to the docs, setting them to auto will query 
numad for a good placement. Should I keep numad running just for this?
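
If I read the docs right, the auto variant would look like this (again, 
the vCPU count is just an example):

    <vcpu placement='auto'>4</vcpu>
    <numatune>
      <memory mode='strict' placement='auto'/>
    </numatune>

My understanding is that libvirt asks numad once at domain start for an 
advisory nodeset, which is why I'm unsure whether numad needs to stay 
running afterwards.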

4) Should I still expose the numa topology via cpu.numa.cell if I use 
the auto placement for vcpu and numatune?

5) Does the cpus attribute in the cpu.numa.cell elements reference vCPU 
cores or the real physical CPU cores? Most examples reference them as 
ranges, which confuses me as on my numa hosts node0 has cores 0,2,4.. 
and node1 the others.
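
To make question 5 concrete, this is the kind of cell definition I mean 
(ids, ranges and sizes are made up):

    <cpu>
      <numa>
        <cell id='0' cpus='0-3' memory='4' unit='GiB'/>
        <cell id='1' cpus='4-7' memory='4' unit='GiB'/>
      </numa>
    </cpu>

My reading is that cpus here refers to guest vCPU ids, not host cores, 
but the contiguous ranges in every example clash with how my hosts 
interleave cores across nodes, hence the question.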

I'd like to benchmark a couple of different options using our production 
workloads once I actually have a grasp of which combinations could make 
sense. Maybe somebody would like to share the cpu/memory/numa settings 
they ended up with, and why?

Thanks a lot,
Patrick Meyer

_______________________________________________
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users
