List:       openvswitch-discuss
Subject:    [ovs-discuss] how to get dpdk pmd's assigned per port.
From:       davidjoshuaevans () gmail ! com (David Evans)
Date:       2015-09-30 18:27:48
Message-ID: D23191F7.33842%davidjoshuaevans () gmail ! com

Thank you Daniele.

I like the way the multi-queue round-robin assignment works; I just need to
be able to define the mask of cores for a DPDK port, and to be able to
identify the primary one for managing timers or the tx queue.
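
For reference, the current behaviour as I understand it is roughly the
following (a sketch only, to illustrate the round-robin policy of
pmd_load_queues(); the types and names here are made up, not the actual
dpif-netdev ones):

#include <stdio.h>

struct example_rxq {
    int port_id;
    int queue_id;
};

/* Hand the rx queues of one NUMA socket to that socket's pmd threads in a
 * round-robin fashion. */
static void
assign_rxqs_round_robin(const struct example_rxq *rxqs, int n_rxqs,
                        const int *pmd_core_ids, int n_pmds)
{
    for (int i = 0; i < n_rxqs; i++) {
        /* Queue i goes to the pmd thread at index i % n_pmds. */
        printf("port %d rxq %d -> pmd on core %d\n",
               rxqs[i].port_id, rxqs[i].queue_id,
               pmd_core_ids[i % n_pmds]);
    }
}

int main(void)
{
    struct example_rxq rxqs[] = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
    int pmd_cores[] = { 2, 4, 6 };

    assign_rxqs_round_robin(rxqs, 4, pmd_cores, 3);
    return 0;
}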

I found it quite awkward to nail down a pmd lcore id on which to run
rte_timer_manage() for each netdev.
The 'other_config' field would be OK for a start, I guess.
You could set a mask for distributing the rxq's, and a core number for the
lcore to tie the tx queue and other future netdev housekeeping (rte_timer
management) to.
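
Something along these lines is what I have in mind; the per-port "timer
lcore" taken from other_config is hypothetical, and this is only a sketch of
the idea, not working OVS code:

#include <rte_lcore.h>
#include <rte_timer.h>

/* Sketch: the pmd loop polls its queues as usual, and the one pmd whose
 * lcore id matches the (hypothetical) per-port timer lcore from other_config
 * also drives rte_timer_manage() for that netdev.  Assumes
 * rte_timer_subsystem_init() was called during startup. */
static void
pmd_poll_loop(unsigned timer_lcore)
{
    for (;;) {
        /* ... poll this pmd's assigned rx queues, flush the tx queue ... */

        if (rte_lcore_id() == timer_lcore) {
            /* Only the designated lcore runs the timers for this port. */
            rte_timer_manage();
        }
    }
}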

Regards,

Dave.



On 9/30/15, 12:50 PM, "Daniele Di Proietto" <diproiettod at vmware.com> wrote:

>
>
>On 30/09/2015 04:44, "David Evans" <davidjoshuaevans at gmail.com> wrote:
>
>>Hi OVS (Ben particularly :) )
>>
>>How do I get OVS to assign ports to the PMDs that I choose?
>>
>>If I have, say, 6 or 12 ports and I want them distributed evenly across a
>>mask of 12 or more cores on a multi-node NUMA system, or where I know the
>>NICs are even on separate PCI buses, what code do I touch to get more
>>deterministic control over this?
>>
>
>Currently each pmd thread loads the rx queues from the NICs on its NUMA
>socket.
>If you create more than one pmd thread per NUMA socket, the queues will be
>assigned in a round-robin fashion.
>
>The function that does this is pmd_load_queues(). It is called in
>pmd_thread_main().
>
>We're discussing a way to provide more control for the user.
>
>
>
>>
>>Cheers,
>>
>>
>>Dave.
>>
>>
>>
>>
>


