List:       dpdk-users
Subject:    Re: Trouble bringing up dpdk testpmd with Mellanox ports
From:       madhukar mythri <madhukar.mythri () gmail ! com>
Date:       2022-01-27 12:50:11
Message-ID: CAAUNki3D8ONVAJwy0YBBwKYLpByqvy6HFDbyCNPeOCE6FShaNg () mail ! gmail ! com

Hi,

Make sure the kernel drivers (mlx5) are loaded properly for the Mellanox
devices.
In DPDK 19.11 this works well; try it with the full PCI domain and the '-n'
option, as follows:

./bin/testpmd -l 10-12 -n 1  -w 0000:82:00.0  --
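
A quick way to confirm the kernel side is to check that the mlx5 modules are
loaded (the module names below assume the standard mlx5_core/mlx5_ib naming):

lsmod | grep mlx5              # mlx5_core and mlx5_ib should be listed
modprobe -a mlx5_core mlx5_ib  # load them if they are missing

Also, since your build sets CONFIG_RTE_BUILD_SHARED_LIB=y, every PMD is a
separate shared object, so the mlx5 PMD may need to be loaded explicitly with
'-d'. A rough sketch only (library name as produced by a DPDK 18.11 build,
adjust the path to your install):

./bin/testpmd -l 10-12 -n 1 -d librte_pmd_mlx5.so -w 0000:82:00.0 --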

Regards,
Madhukar.


On Thu, Jan 27, 2022 at 1:46 PM Sindhura Bandi <
sindhura.bandi@certesnetworks.com> wrote:

> Hi,
>
>
> Thank you for the response.
>
> I tried what you suggested, but with the same result.
>
>
> ##################
>
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w
> 82:00.0  -- --total-num-mbufs 1025
> ./bin/testpmd: error while loading shared libraries:
> librte_pmd_bond.so.2.1: cannot open shared object file: No such file or
> directory
> root@debian-10:~/dpdk-18.11/myinstall# export
> LD_LIBRARY_PATH=/root/dpdk-18.11/myinstall/share/dpdk/x86_64-native-linuxapp-gcc/lib
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w
> 82:00.0  -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176,
> socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support
> enabled, MP allocation mode: native
>
>   io packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ####################
>
> -Sindhu
>
> ------------------------------
> *From:* PATRICK KEROULAS <patrick.keroulas@radio-canada.ca>
> *Sent:* Monday, January 17, 2022 11:26:18 AM
> *To:* Sindhura Bandi
> *Cc:* users@dpdk.org; Venugopal Thacahappilly
> *Subject:* Re: Trouble bringing up dpdk testpmd with Mellanox ports
>
> Hello,
> Try without `--no-pci` in your testpmd command.
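> `--no-pci` disables the PCI bus scan entirely, so a whitelisted PCI device
> can never be probed. A minimal sketch of the same invocation with the scan
> left enabled (whitelist and mbuf count copied from your command):
>
> ./bin/testpmd -l 1-3 -w 82:00.0 -- --total-num-mbufs 1025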
>
> On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
> <sindhura.bandi@certesnetworks.com> wrote:
> >
> > Hi,
> >
> >
> > I'm trying to bring up the dpdk-testpmd application using Mellanox
> ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect
> the ports.
> >
> >
> > OS & Kernel:
> >
> > Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21)
> x86_64 GNU/Linux
> >
> > The steps followed:
> >
> > Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64
> (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
> > Downloaded the dpdk-18.11 source and built it after making the following
> changes in the config (rough build commands below):
> >
> >            CONFIG_RTE_LIBRTE_MLX5_PMD=y
> >            CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
> >            CONFIG_RTE_BUILD_SHARED_LIB=y
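> >
> > (The build used the legacy make flow for 18.11, roughly as below; the
> > target name and install prefix are shown only as an example:)
> >
> >            make config T=x86_64-native-linuxapp-gcc
> >            make
> >            make install T=x86_64-native-linuxapp-gcc DESTDIR=myinstall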
> >
> > When I run testpmd, it does not recognize any Mellanox ports:
> >
> >
> > #########
> > root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0
> --no-pci -- --total-num-mbufs 1025
> > EAL: Detected 24 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > testpmd: No probed ethernet devices
> > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176,
> socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176,
> socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Done
> > No commandline core given, start packet forwarding
> > io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support
> enabled, MP allocation mode: native
> >
> >   io packet forwarding packets/burst=32
> >   nb forwarding cores=1 - nb forwarding ports=0
> > Press enter to exit
> > ##########
> >
> > root@debian-10:~# lspci | grep Mellanox
> > 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family
> [ConnectX-5 Ex]
> > 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family
> [ConnectX-5 Ex]
> > root@debian-10:~# ibv_devinfo
> > hca_id:    mlx5_0
> >     transport:            InfiniBand (0)
> >     fw_ver:                16.28.4512
> >     node_guid:            b8ce:f603:00f2:7952
> >     sys_image_guid:            b8ce:f603:00f2:7952
> >     vendor_id:            0x02c9
> >     vendor_part_id:            4121
> >     hw_ver:                0x0
> >     board_id:            DEL0000000004
> >     phys_port_cnt:            1
> >         port:    1
> >             state:            PORT_ACTIVE (4)
> >             max_mtu:        4096 (5)
> >             active_mtu:        1024 (3)
> >             sm_lid:            0
> >             port_lid:        0
> >             port_lmc:        0x00
> >             link_layer:        Ethernet
> >
> > hca_id:    mlx5_1
> >     transport:            InfiniBand (0)
> >     fw_ver:                16.28.4512
> >     node_guid:            b8ce:f603:00f2:7953
> >     sys_image_guid:            b8ce:f603:00f2:7952
> >     vendor_id:            0x02c9
> >     vendor_part_id:            4121
> >     hw_ver:                0x0
> >     board_id:            DEL0000000004
> >     phys_port_cnt:            1
> >         port:    1
> >             state:            PORT_ACTIVE (4)
> >             max_mtu:        4096 (5)
> >             active_mtu:        1024 (3)
> >             sm_lid:            0
> >             port_lid:        0
> >             port_lmc:        0x00
> >             link_layer:        Ethernet
> >
> >
> > I'm not sure where I'm going wrong. Any hints will be much appreciated.
> >
> > Thanks,
> > Sindhu
>
>
