List: e1000-devel
Subject: Re: [E1000-devel] FdirPballoc
From: "Lynch, Jonathan" <jonathan.lynch@thenowfactory.com>
Date: 2011-05-18 16:43:18
Message-ID: BANLkTi=iD6_83G7M2GgGdyxdzOzsuihSTw@mail.gmail.com
Hi Alex,
It was just an observation. I was confused by the segmentation of the packet
buffer and didn't realise that, when DCB is disabled, there is just one
packet buffer.
LnkSta reports Speed 5GT/s and Width x8, which seems to be good. The lspci
output is included below.
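As a rough sanity check on the bandwidth side (back-of-the-envelope figures
that ignore TLP and protocol overhead, so the real usable rate is lower): a
5 GT/s Gen2 lane uses 8b/10b encoding, so it carries 4 Gb/s of payload, and
x8 should be comfortably above the 10GbE line rate:

```python
# Rough check: can a PCIe Gen2 x8 link drain a 10GbE RX FIFO?
# Gen2 runs at 5 GT/s per lane with 8b/10b encoding, so each lane
# carries 4 Gb/s of usable data.  Protocol overhead (TLP headers,
# flow control) reduces this further in practice.
GEN2_GTPS = 5.0               # transfers/s per lane, in GT/s
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b line code
LANES = 8

usable_gbps = GEN2_GTPS * ENCODING_EFFICIENCY * LANES
print(f"Usable PCIe bandwidth: {usable_gbps:.0f} Gb/s per direction")
print(f"Headroom over 10GbE line rate: {usable_gbps / 10:.1f}x")
```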
Regards
Jonathan
04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit Dual Port
Network Connection (rev 01)
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 16
Region 0: Memory at fbe20000 (64-bit, non-prefetchable) [size=128K]
Region 2: I/O ports at e020 [size=32]
Region 4: Memory at fbe44000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=1 PME-
Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
Address: 0000000000000000 Data: 0000
Masking: 00000000 Pending: 00000000
Capabilities: [70] MSI-X: Enable+ Mask- TabSize=64
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00002000
Capabilities: [a0] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 256 bytes, MaxReadReq 256 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 <1us, L1 <8us
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive-
BWMgmt- ABWMgmt-
Capabilities: [e0] Vital Product Data <?>
Capabilities: [100] Advanced Error Reporting <?>
Capabilities: [140] Device Serial Number 9e-25-63-ff-ff-bb-02-00
Capabilities: [150] #0e
Capabilities: [160] #10
Kernel driver in use: ixgbe
Kernel modules: ixgbe
04:00.1 0200: 8086:10fc (rev 01)
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin B routed to IRQ 17
Region 0: Memory at fbe00000 (64-bit, non-prefetchable) [size=128K]
Region 2: I/O ports at e000 [size=32]
Region 4: Memory at fbe40000 (64-bit, non-prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)
Status: D0 PME-Enable- DSel=0 DScale=1 PME-
Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-
Address: 0000000000000000 Data: 0000
Masking: 00000000 Pending: 00000000
Capabilities: [70] MSI-X: Enable+ Mask- TabSize=64
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00002000
Capabilities: [a0] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
MaxPayload 256 bytes, MaxReadReq 256 bytes
DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 <1us, L1 <8us
ClockPM- Suprise- LLActRep- BwNot-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
Capabilities: [e0] Vital Product Data <?>
Capabilities: [100] Advanced Error Reporting <?>
Capabilities: [140] Device Serial Number 9e-25-63-ff-ff-bb-02-00
Capabilities: [150] #0e
Capabilities: [160] #10
Kernel driver in use: ixgbe
Kernel modules: ixgbe
On 18 May 2011 17:01, Alexander Duyck <alexander.h.duyck@intel.com> wrote:
> On 05/18/2011 07:18 AM, Lynch, Jonathan wrote:
>
>> Hi Alex,
>>
>> If there is only one RX buffer in use (DCB not enabled), then when packets
>> are dropped I should just see missed packets in mpc0, like what I see
>> below?
>> This one Rx buffer uses all the space available to it - up to 512 KB,
>> depending on the features enabled, such as flow director?
>>
>> 0x03FA0: mpc0 (Missed Packets Count 0) 0x0723CE36
>> 0x03FA4: mpc1 (Missed Packets Count 1) 0x00000000
>> 0x03FA8: mpc2 (Missed Packets Count 2) 0x00000000
>> 0x03FAC: mpc3 (Missed Packets Count 3) 0x00000000
>> 0x03FB0: mpc4 (Missed Packets Count 4) 0x00000000
>> 0x03FB4: mpc5 (Missed Packets Count 5) 0x00000000
>> 0x03FB8: mpc6 (Missed Packets Count 6) 0x00000000
>> 0x03FBC: mpc7 (Missed Packets Count 7) 0x00000000
>>
>> According to the 82599 data sheet
>>
>> *8.2.3.23.4 Rx Missed Packets Count — RXMPC[n] (0x03FA0 + 4*n, n=0...7; RC) DBU-Rx*
>> Register 'n' counts the number of missed packets per packet buffer 'n'.
>> Packets are missed when the receive FIFO has insufficient space to store
>> the incoming packet. This may be caused due to insufficient buffers
>> allocated, or because there is insufficient bandwidth on the IO bus.
>> Events setting this counter also set the receiver overrun interrupt (RXO).
>> These registers do not increment if receive is not enabled and count only
>> packets that would have been posted to the SW driver.
>>
>> Jonathan
>>
>
> I'm slightly confused by what you are asking here. With only one packet
> buffer enabled you will only see MPC increment for one FIFO, since it is the
> only one in use. So if you are asking whether the behaviour is normal, then
> yes, the info you have above is correct.
>
> However, one thing that does concern me is why you might be seeing the
> missed packet counts at all. Normally something like this occurs when you do
> not have sufficient PCIe bandwidth to flush the RX FIFO as packets arrive.
> Would it be possible to provide an "lspci -vvv" dump for the device? What
> we specifically want to verify is that the link status register
> reports a 5GT/s link with a lane width of x8. This is the optimal
> configuration for allowing enough PCIe bandwidth to empty the RX FIFO.
>
> Thanks,
>
> Alex
>
_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit http://communities.intel.com/community/wired