Channel: Intel Communities : Discussion List - Wired Ethernet

Queue issue about XL710


Hi,

 

I am hitting some issues with an XL710 NIC on FreeBSD 10.1.

The NIC is a dual-port 40G card with the latest NVM image. The CPU is a 2690 v2 with Hyper-Threading enabled, giving 40 logical cores.

Part number: XL710-QDA2 932587.

Firmware info:  f5.0 a1.5 n05.02 e80002285

Driver info: 1.4.8.

 

Issue 1:

Of the 20 queues, only 16 are receiving packets (que0 - que15).

Que16 - que19 are generating interrupts but no packets arrive; their status/error register values are all 0. (See the illustrative RSS sketch after the counters below.)

 

AN# sysctl dev.ixl.2 | grep rx_packet

dev.ixl.2.pf.que0.rx_packets: 8805610

dev.ixl.2.pf.que1.rx_packets: 9348787

dev.ixl.2.pf.que2.rx_packets: 8505388

dev.ixl.2.pf.que3.rx_packets: 9928934

dev.ixl.2.pf.que4.rx_packets: 3879181

dev.ixl.2.pf.que5.rx_packets: 5579101

dev.ixl.2.pf.que6.rx_packets: 4322241

dev.ixl.2.pf.que7.rx_packets: 5788389

dev.ixl.2.pf.que8.rx_packets: 5575891

dev.ixl.2.pf.que9.rx_packets: 3743182

dev.ixl.2.pf.que10.rx_packets: 4127868

dev.ixl.2.pf.que11.rx_packets: 3878214

dev.ixl.2.pf.que12.rx_packets: 4013070

dev.ixl.2.pf.que13.rx_packets: 5146686

dev.ixl.2.pf.que14.rx_packets: 4024729

dev.ixl.2.pf.que15.rx_packets: 4736103

dev.ixl.2.pf.que16.rx_packets: 0

dev.ixl.2.pf.que17.rx_packets: 0

dev.ixl.2.pf.que18.rx_packets: 0

dev.ixl.2.pf.que19.rx_packets: 0
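
The symptom looks as if the RSS indirection table only maps hash buckets to queues 0-15. For illustration only (this is not the ixl driver's code, and fill_rss_lut, lut_size and nqueues are made-up names), a sketch of how an RSS lookup table is normally spread over all configured queues:

    /* Illustrative sketch: spread hash buckets round-robin over every RX queue.
     * If the table were built with nqueues = 16 instead of 20, queues 16-19
     * would never be selected and their rx_packets would stay at 0. */
    #include <stdint.h>

    static void fill_rss_lut(uint8_t *lut, int lut_size, int nqueues)
    {
        int i;

        for (i = 0; i < lut_size; i++)
            lut[i] = (uint8_t)(i % nqueues);    /* bucket i -> queue i % nqueues */
    }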


Issue 2:

The NIC may hang under heavy stress, for example when pushing 60 Gbps of traffic into it.

In that case it may stop generating interrupts.

 

Could you please help look into the above issues?

Many thanks.

 

Regards,

Jingxun


Can XL710 VLAN filters limitation be overcome?


We have an app running on a VF of the Fortville XL710. According to the datasheet, the maximum number of VLAN filters an XL710 VF can set is 8. Unfortunately our app requires many more than that, so I'm wondering whether there is a way for the app running on the VF to receive all traffic directed to its MAC address, i.e. both untagged and tagged traffic. If not, is there any other way to support a large number of VLANs on the VF?

XL710 poll-mode fix in PF driver incomplete?


We have a polling XL710 VF driver and have found the appropriate poll-mode workaround in the DPDK. We are, however, not using the DPDK and are relying on the accompanying fix made to the latest Intel PF Linux drivers, e.g. version 1.3.49. This fix does not work and we believe it is incomplete. The part we are referring to involves setting the DIS_AUTOMASK flags in the GLINT_CTL register. The code in the above release (and earlier ones) is (i40e_virtchnl_pf.c:344):

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&

        (vector_id == 0)) {

        reg = rd32(hw, I40E_GLINT_CTL);

        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {

            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK;

            wr32(hw, I40E_GLINT_CTL, reg);

        }

We believe this should say:

    if ((vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) &&
        (vector_id == 1)) {
        reg = rd32(hw, I40E_GLINT_CTL);
        if (!(reg & I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK)) {
            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK |
                   I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
            wr32(hw, I40E_GLINT_CTL, reg);
        }
    }

With the above changes the fix then works.

The addition of I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK is per datasheet section 8.3.3.1.4.2.

The test for vector_id == 1 is there because the default MSI-X vector is 1. However, there is a good argument for removing this test altogether, since the vector involved depends on the VF implementation; note that the fix in the DPDK eliminates this test (see the sketch below).
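
For reference, here is a sketch of the variant with the vector_id test removed entirely (our reading of the DPDK-side approach, not Intel's released code):

    /* Sketch: disable auto-masking for VF0 and for the remaining VF vectors
     * whenever the VF advertises RX polling, regardless of which MSI-X
     * vector is being configured. */
    if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) {
        reg = rd32(hw, I40E_GLINT_CTL);
        if (!(reg & (I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK |
                     I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK))) {
            reg |= I40E_GLINT_CTL_DIS_AUTOMASK_VF0_MASK |
                   I40E_GLINT_CTL_DIS_AUTOMASK_N_MASK;
            wr32(hw, I40E_GLINT_CTL, reg);
        }
    }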

 

We would appreciate it if you could verify the above and make changes to the released PF driver.

VLAN and Windows 10 intel pro 1000


Hi

 

just upgraded to windows 10.

 

There is an Intel PRO/1000 adapter, and in Windows 7, after installing Intel Advanced Network Services, the VLAN feature was very helpful.

 

After upgrading to Windows 10, the VLANs are no longer available.

 

When attempting to install Intel Advanced Network Services, the following error is displayed:

There is an issue with Microsoft* Windows* 10 that prevents Intel(R) Advanced Network Services from working correctly.


Where can the correct drivers be found? Searches have been unsuccessful so far.


Any help appreciated.


Chris

I have integrated graphics, can I just add an extra card?


Okay, I have a problem here. I would like to upgrade my graphics card since it's rather weak, but I'm not sure how, or if it's even possible. Here are my relevant specs:

 

Graphics card - Intel HD530 (Integrated in processor)

 

Processor      - Intel Core i3-6100, 3.70 GHz

 

Motherboard  - ASUS H110M-K D3

 

Power           - FSP 500W 60HHN 80+ Bronze  

 

I have opened my desktop and it looks like there is room for a new card, but I asked someone and he said it wouldn't work.

To my knowledge, if there is room, power, and software support, I should be able to. If I can, which card would you recommend? I have a decent budget of about 1000 dollars, but please keep it around 200-300 dollars; otherwise I will just sell the machine and buy a new one.

Packet drop Issues - XL710 Rev 02/B1


Hi,

  Recently, at one of our customer sites, we observed packet drops with an Intel XL710 40GbE NIC. Are there any known packet drop issues with the following XL710 Rev 02/B1 NIC?

We saw packet drops even though link capacity was not 100% utilized; hardly 50% of the link capacity was in use when we observed the drops.

 

05:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

05:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)

 

We have an application developed with DPDK 2.1 that uses the XL710 NIC for packet reception/transmission.

Here is a high-level view of how our application's RX and TX paths connect to the router, which is the immediate next hop for our application:

 

     APP  <==== RX Path =====
                                 Router
     APP  ===== TX Path ====>

 

Unfortunately, there is no way to measure the actual packet error statistics on the application side, as DPDK 2.1 does not support the per-NIC statistics for the i40e driver.
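
As a partial workaround (a minimal sketch, assuming the application has already initialised the port and that port_id below refers to the XL710 port), the generic rte_eth_stats_get() API should still return the basic per-port counters even without i40e extended statistics:

    /* Minimal sketch: read the generic per-port counters from DPDK.
     * Even without per-driver extended stats, the basic packet and error
     * counters can help narrow down where the drops occur. */
    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void print_port_stats(uint8_t port_id)
    {
        struct rte_eth_stats stats;

        rte_eth_stats_get(port_id, &stats);
        printf("port %u: rx %" PRIu64 " tx %" PRIu64
               " rx-errors %" PRIu64 " tx-errors %" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets,
               stats.ierrors, stats.oerrors);
    }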

 

Please let us know whether there are any known packet drop issues with the XL710 B1 series, or whether some other fine-tuning is required at the NIC level.

 

Thanks in advance for your help.

 

Regards,

Suresh.

Disabling VLAN stripping on 82599 card when using SR-IOV


I have 2 VMs and each has a VF assigned from the same card.

The VMs have some apps installed that communicate with tagged (802.1Q) traffic between them, e.g.: Client App - VM - PCI - VM - Server App.

The issue is that although tagged traffic leaves one VM, it arrives at the other stripped of the tag.

 

Is there a way to disable this behaviour, even by recompiling the driver? (See the sketch at the end of this post.)

 

Thanks,

Silviu

 

 

modinfo ixgbe | grep ^version

version:        4.3.13
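
For what it's worth, here is a sketch of the kind of change we have in mind, based on how the PF driver's own ixgbe_vlan_strip_disable() handles its queues on 82599/X540 (the per-queue VME bit in RXDCTL). Which hardware queue indices belong to a given VF depends on the VMDq/SR-IOV pool layout and would need to be checked against the 82599 datasheet; this is an assumption-laden sketch, not a tested patch:

    /* Sketch only: clear the per-queue VLAN-strip (VME) bit so that 802.1Q
     * tags are left in the received frames for the given hardware queue. */
    static void queue_vlan_strip_off(struct ixgbe_hw *hw, u32 queue_idx)
    {
        u32 rxdctl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(queue_idx));

        rxdctl &= ~IXGBE_RXDCTL_VME;    /* stop stripping the VLAN tag */
        IXGBE_WRITE_REG(hw, IXGBE_RXDCTL(queue_idx), rxdctl);
    }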

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), creating a teaming group (static or IEEE 802.3ad) with the I211 + I217-V NICs fails.

 

The drivers have been upgraded to the latest version available, and multiple reinstallations with reboots didn't help either. Whenever the group creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up appears telling me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some of the configuration does seem to get done.

Under Windows 7 SP1 x64 the exact same setup worked flawlessly for months, so Windows 10 and/or the driver are the likely culprits.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku


Three x540-t2 nics


I have two Dell r720 VMware hosts, and a Thecus n12000 Pro NAS. I just put an x540-t2 in each. They are connected through a Netgear ProSafe XS708e switch. There are no other devices on the network.

 

In all three machines, the port nearest the PCI slot runs at 10Gb, and the other port runs at 1Gb. The lights on the cards and on the switch all agree with this: one yellow, one green, for each endpoint.

 

How do I get both ports on these cards to run at 10Gb?

Bad impressions of the network card after driver update.


I use a genuine Intel 82574L network card (EXPI9301CTBLK). After upgrading from the driver built into Windows 7 x64 (11.0.5.22) to the latest 12.7.28.0, there are delays in loading web pages and a general slowdown. Why do the new drivers slow things down? After rolling back to version 11.0.5.22, everything returns to normal.

 

P.S. The driver also has the Full Duplex and Half Duplex mode labels swapped.

igb + I350-T2 Offline Test Fails with Linux 2.6.33


I am evaluating the Ethernet Server Adapter I350-T2 to replace our previous end-of-life card. I have not had any issues with the card during my operational tests, but when I execute the card's offline tests, the network interface goes offline and requires the network service to be restarted. My concern is that there is some real instability between my hardware/software and the I350 that could manifest as failures over prolonged use.

 

Here are my test results and kernel log data.

 

$ sudo ethtool --test eth2 offline

Cannot get strings: No such device

 

2016-03-03 11:04:34.833 localhost kernel: igb 0000:01:00.0: offline testing starting

2016-03-03 11:04:34.833 localhost kernel: igb 0000:01:00.0: eth2: igb: eth2 NIC Link is Down

2016-03-03 11:04:36.508 localhost kernel: igb 0000:01:00.0: eth2: igb: eth2 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None

2016-03-03 11:04:36.943 localhost avahi-daemon[948]:  Interface eth2.IPv4 no longer relevant for mDNS.

2016-03-03 11:04:36.943 localhost avahi-daemon[948]:  Leaving mDNS multicast group on interface eth2.IPv4 with address 192.168.50.2.

2016-03-03 11:04:36.944 localhost kernel: igb 0000:01:00.0: eth2: PCIe link lost, device now detached

2016-03-03 11:04:36.944 localhost kernel: igb 0000:01:00.0: pattern test reg 002C failed: got 0x0000FFFF expected 0x00005A5A

2016-03-03 11:04:38.621 localhost kernel: igb 0000:01:00.0: testing shared interrupt

2016-03-03 11:04:40.315 localhost kernel: igb 0000:01:00.0: Cannot do PHY loopback test when SoL/IDER is active.

2016-03-03 11:04:45.155 localhost avahi-daemon[948]:  Withdrawing address record for fe80::a236:9fff:XXXX:XXXX on eth2.

2016-03-03 11:04:45.155 localhost avahi-daemon[948]:  Withdrawing address record for 192.168.50.2 on eth2.

2016-03-03 11:04:45.155 localhost ntpd[2499]:  Deleting interface #8 eth2, fe80::a236:9fff:XXXX:XXXX#123, interface stats: received=0, sent=0, dropped=0, active_time=6201 secs

2016-03-03 11:04:45.155 localhost ntpd[2499]:  Deleting interface #4 eth2, 192.168.50.2#123, interface stats: received=0, sent=0, dropped=0, active_time=6201 secs

 

Why is this offline test failing? Should I even worry that this test fails? Does anybody have some tips as to what I should change/fix/do to get the test to pass?

 

Thanks,

Stephan

 

 

Some background details:

 

OS: Linux Fedora 13

Motherboard:  AIMB-780 w/ Intel Core i5-660

 

$ sudo ethtool -i eth2

driver: igb

version: 5.3.3.5

firmware-version: 1.63, 0x80000cbb

bus-info: 0000:01:00.0

 

$uname -r

2.6.33.5-112.fc13.i686.PAE

 

$ sudo lspci -s 01:00  -nnvvvk

01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)

  Subsystem: Intel Corporation Ethernet Server Adapter I350-T2 [8086:0002]

  Physical Slot: 128

  Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0, Cache Line Size: 32 bytes

  Interrupt: pin A routed to IRQ 16

  Region 0: Memory at fbd00000 (32-bit, non-prefetchable) [size=1M]

  Region 3: Memory at fbcfc000 (32-bit, non-prefetchable) [size=16K]

  Capabilities: [40] Power Management version 3

  Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)

  Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-

  Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

  Address: 0000000000000000  Data: 0000

  Masking: 00000000  Pending: 00000000

  Capabilities: [70] MSI-X: Enable+ Count=10 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00002000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+

  RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-

  MaxPayload 128 bytes, MaxReadReq 512 bytes

  DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-

  LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 unlimited, L1 <32us

  ClockPM- Surprise- LLActRep- BwNot-

  LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-

  LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB

  Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-

  Compliance De-emphasis: -6dB

  LnkSta2: Current De-emphasis Level: -6dB

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [140 v1] Device Serial Number a0-36-9f-ff-ff-87-06-7c

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 1

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

  IOVCap: Migration-, Interrupt Message Number: 000

  IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-

  IOVSta: Migration-

  Initial VFs: 8, Total VFs: 8, Number of VFs: 8, Function Dependency Link: 00

  VF offset: 384, stride: 4, Device ID: 1520

  Supported Page Size: 00000553, System Page Size: 00000001

  Region 0: Memory at 00000000c0000000 (64-bit, prefetchable)

  Region 3: Memory at 00000000c0020000 (64-bit, prefetchable)

  VF Migration: offset: 00000000, BIR: 0

  Capabilities: [1a0 v1] #17

  Capabilities: [1c0 v1] #18

  Capabilities: [1d0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: igb

  Kernel modules: igb

 

 

01:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)

  Subsystem: Intel Corporation Ethernet Server Adapter I350-T2 [8086:0002]

  Physical Slot: 128

  Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0, Cache Line Size: 32 bytes

  Interrupt: pin B routed to IRQ 17

  Region 0: Memory at fbb00000 (32-bit, non-prefetchable) [size=1M]

  Region 3: Memory at fbcf8000 (32-bit, non-prefetchable) [size=16K]

  Capabilities: [40] Power Management version 3

  Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)

  Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-

  Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

  Address: 0000000000000000  Data: 0000

  Masking: 00000000  Pending: 00000000

  Capabilities: [70] MSI-X: Enable+ Count=10 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00002000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+

  RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-

  MaxPayload 128 bytes, MaxReadReq 512 bytes

  DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-

  LnkCap: Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 unlimited, L1 <32us

  ClockPM- Surprise- LLActRep- BwNot-

  LnkCtl: ASPM L0s L1 Enabled; RCB 64 bytes Disabled- Retrain- CommClk+

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-

  LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB

  Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-

  Compliance De-emphasis: -6dB

  LnkSta2: Current De-emphasis Level: -6dB

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [140 v1] Device Serial Number a0-36-9f-ff-ff-87-06-7c

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 0

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

  IOVCap: Migration-, Interrupt Message Number: 000

  IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-

  IOVSta: Migration-

  Initial VFs: 8, Total VFs: 8, Number of VFs: 8, Function Dependency Link: 01

  VF offset: 384, stride: 4, Device ID: 1520

  Supported Page Size: 00000553, System Page Size: 00000001

  Region 0: Memory at 00000000c0040000 (64-bit, prefetchable)

  Region 3: Memory at 00000000c0060000 (64-bit, prefetchable)

  VF Migration: offset: 00000000, BIR: 0

  Capabilities: [1a0 v1] #17

  Capabilities: [1d0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: igb

  Kernel modules: igb

 

$ modinfo igb

filename:       /lib/modules/2.6.33.5-112.fc13.i686.PAE/kernel/drivers/net/igb/igb.ko

version:        5.3.3.5

license:        GPL

description:    Intel(R) Gigabit Ethernet Network Driver

author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>

srcversion:     F03F0C4A4602E651FFBECFA

alias:          pci:v00008086d000010D6sv*sd*bc*sc*i*

alias:          pci:v00008086d000010A9sv*sd*bc*sc*i*

alias:          pci:v00008086d000010A7sv*sd*bc*sc*i*

alias:          pci:v00008086d000010E8sv*sd*bc*sc*i*

alias:          pci:v00008086d00001526sv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Dsv*sd*bc*sc*i*

alias:          pci:v00008086d000010E7sv*sd*bc*sc*i*

alias:          pci:v00008086d000010E6sv*sd*bc*sc*i*

alias:          pci:v00008086d00001518sv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Asv*sd*bc*sc*i*

alias:          pci:v00008086d000010C9sv*sd*bc*sc*i*

alias:          pci:v00008086d00000440sv*sd*bc*sc*i*

alias:          pci:v00008086d0000043Csv*sd*bc*sc*i*

alias:          pci:v00008086d0000043Asv*sd*bc*sc*i*

alias:          pci:v00008086d00000438sv*sd*bc*sc*i*

alias:          pci:v00008086d00001516sv*sd*bc*sc*i*

alias:          pci:v00008086d00001511sv*sd*bc*sc*i*

alias:          pci:v00008086d00001510sv*sd*bc*sc*i*

alias:          pci:v00008086d00001527sv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Fsv*sd*bc*sc*i*

alias:          pci:v00008086d0000150Esv*sd*bc*sc*i*

alias:          pci:v00008086d00001524sv*sd*bc*sc*i*

alias:          pci:v00008086d00001523sv*sd*bc*sc*i*

alias:          pci:v00008086d00001522sv*sd*bc*sc*i*

alias:          pci:v00008086d00001521sv*sd*bc*sc*i*

alias:          pci:v00008086d00001539sv*sd*bc*sc*i*

alias:          pci:v00008086d0000157Csv*sd*bc*sc*i*

alias:          pci:v00008086d0000157Bsv*sd*bc*sc*i*

alias:          pci:v00008086d00001538sv*sd*bc*sc*i*

alias:          pci:v00008086d00001537sv*sd*bc*sc*i*

alias:          pci:v00008086d00001536sv*sd*bc*sc*i*

alias:          pci:v00008086d00001533sv*sd*bc*sc*i*

alias:          pci:v00008086d00001F45sv*sd*bc*sc*i*

alias:          pci:v00008086d00001F41sv*sd*bc*sc*i*

alias:          pci:v00008086d00001F40sv*sd*bc*sc*i*

depends:        dca

vermagic:       2.6.33.5-112.fc13.i686.PAE SMP mod_unload 686

parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (max 100000), default 3=adaptive (array of int)

parm:           IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)

parm:           Node:set the starting node to allocate memory on, default -1 (array of int)

parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535), default 0=off (array of int)

parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1), default 0=off (array of int)

parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500), default 0=off (array of int)

parm:           RSS:Number of Receive-Side Scaling Descriptor Queues (0-8), default 1, 0=number of cpus (array of int)

parm:           VMDQ:Number of Virtual Machine Device Queues: 0-1 = disable, 2-8 enable, default 0 (array of int)

parm:           max_vfs:Number of Virtual Functions: 0 = disable, 1-7 enable, default 0 (array of int)

parm:           MDD:Malicious Driver Detection (0/1), default 1 = enabled. Only available when max_vfs is greater than 0 (array of int)

parm:           QueuePairs:Enable Tx/Rx queue pairs for interrupt handling (0,1), default 1=on (array of int)

parm:           EEE:Enable/disable on parts that support the feature (array of int)

parm:           DMAC:Disable or set latency for DMA Coalescing ((0=off, 1000-10000(msec), 250, 500 (usec)) (array of int)

parm:           LRO:Large Receive Offload (0,1), default 0=off (array of int)

parm:           debug:Debug level (0=none, ..., 16=all) (int)

Intel® Ethernet Converged Network Adapter X520-DA2


Hi, I have an Intel network adapter, an Ethernet Server Adapter X520-DA2.

Do you know if it is compatible with Nexenta CE version 3.1.6 FP3?

I tried to add it but do not see it among the available adapters.

 

The lspci -v command displays the following information. Could it be useful?

Thank you

Paolo

 

07:00.0 Ethernet controller: Intel Corporation Unknown device 1572 (rev 01)

        Subsystem: Intel Corporation Unknown device 0007

        Flags: bus master, fast devsel, latency 0, IRQ 10

        Memory at f8000000 (64-bit, prefetchable)

        Memory at f7ff8000 (64-bit, prefetchable)

        Expansion ROM at fbb00000 [disabled]

        Capabilities: [40] Power Management version 3

        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-

        Capabilities: [70] MSI-X: Enable- Mask- TabSize=129

        Capabilities: [a0] Express Endpoint IRQ 0

        Capabilities: [e0] Vital Product Data

 

07:00.1 Ethernet controller: Intel Corporation Unknown device 1572 (rev 01)

        Subsystem: Intel Corporation Unknown device 0000

        Flags: bus master, fast devsel, latency 0, IRQ 10

        Memory at f9800000 (64-bit, prefetchable)

        Memory at f97f8000 (64-bit, prefetchable)

        Expansion ROM at fbb80000 [disabled]

        Capabilities: [40] Power Management version 3

        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 Enable-

        Capabilities: [70] MSI-X: Enable- Mask- TabSize=129

        Capabilities: [a0] Express Endpoint IRQ 0

        Capabilities: [e0] Vital Product Data

SR-IOV with ixgbe - Disabling Rx VLAN filter


Hi All,

We are trying to disable the VLAN filter on an Intel 82599ES 10-Gigabit Ethernet controller using ethtool, without modifying the driver code.

We are able to disable it successfully with the following OS and ixgbe driver version.

Red Hat Enterprise Linux Server release 7.1 (Maipo)

Kernel : 3.10.0-229.20.1.el7.x86_64

Ixgbe driver version : 4.0.1-k-rh7.1


[root@compute05 src]#ethtool -k enp9s0f0

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: off

vlan-challenged: off [fixed]

 

[root@compute05 src]#ethtool -K enp9s0f0 rx-vlan-filter off

 

 

But we cannot do the same with the new ixgbe driver version; rx-vlan-filter is reported as fixed.

Red Hat Enterprise Linux Server release 7.1 (Maipo)

Kernel : 3.10.0-229.20.1.el7.x86_64

Ixgbe driver  version:        4.3.13

 

[root@compute05 src]#ethtool -k  enp9s0f0

Features for enp9s0f0:

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off

receive-hashing: on

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

 

[root@compute05 src]# ethtool -K  enp9s0f0 rx-vlan-filter off

Could not change any device features
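
For context (a generic note, not the actual ixgbe 4.3.13 code): whether ethtool -K can toggle a feature is decided by the driver. A flag that is set only in netdev->features, but not also advertised in netdev->hw_features, is reported by ethtool as [fixed] and cannot be changed from user space. A minimal sketch of the relevant declaration in a Linux netdev driver:

    /* Sketch: a feature is user-toggleable with "ethtool -K" only if the
     * driver also advertises it in hw_features; otherwise ethtool shows it
     * as "[fixed]". */
    netdev->features    |= NETIF_F_HW_VLAN_CTAG_FILTER;   /* enabled by default */
    netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;   /* allow ethtool -K to toggle it */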

 

Any idea how to overcome this issue?

 

If any more information needed please let me know.

 

Thanks in advance.

XL710-QDA2 NIC failed to link


All:

    I am trying to bring up a dual-port 40G XL710-QDA2 interface using i40e.ko, but it fails.

    I found some messages like these:

[root@linux src]# dmesg |tail

[75592.003591] i40e 0000:04:00.1: irq 200 for MSI/MSI-X

[75592.127531] i40e 0000:04:00.1: irq 201 for MSI/MSI-X

[75592.251468] i40e 0000:04:00.1: irq 202 for MSI/MSI-X

[75592.362414] i40e 0000:04:00.1: irq 203 for MSI/MSI-X

[75592.486353] i40e 0000:04:00.1: irq 204 for MSI/MSI-X

[75592.789203] i40e 0000:04:00.1: PCI-Express: Speed 8.0GT/s Width x8

[75592.966116] i40e 0000:04:00.1: Features: PF-id[1] VSIs: 66 QP: 24 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN NVGRE PTP VEPA

[75617.228339] i40e 0000:04:00.0 eth6: the driver failed to link because an unqualified module was detected.

[75617.343091] 8021q: adding VLAN 0 to HW filter on device eth6

[75617.410885] i40e 0000:04:00.0 eth6: adding 68:05:ca:32:45:e8 vid=0

 

[root@linux src]# lspci |grep Eth

01:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

01:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

02:00.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet PCIe

04:00.0 Ethernet controller: Intel Corporation Device 1583 (rev 01)

04:00.1 Ethernet controller: Intel Corporation Device 1583 (rev 01)

05:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

05:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

 

[root@linux src]# ethtool -i eth6

driver: i40e

version: 1.4.25

firmware-version: 4.42 0x80001932 0.0.0

bus-info: 0000:04:00.0

 

[root@linux src]# uname -a

Linux xgw-6 3.10.10-DPDK-1-0-0-0 #4 SMP Mon Mar 7 19:07:38 CST 2016 x86_64 x86_64 x86_64 GNU/Linux

 

The QSFP+ fiber module is a Gigalight GQS-MD0400-007C AOC cable, 7 m.

 

So why can't the link come up with this QSFP+ cable?

ixgbe driver does not support MAC address change


My ESXi 5.5 system has an Intel X540-T2 installed and uses ixgbe driver version 3.21.

 

In the driver code, I can see the following lines, which show that it does not support MAC address changes in ESXi 5.5: if vfinfo[vf].pf_set_mac is true, then when the guest OS changes its MAC, the ixgbe driver won't update it in vfinfo accordingly and will report the error "VF attempted to set a new MAC address but it already has an administratively set MAC address".

 

int ixgbe_ndo_set_vf_mac(struct net_device *netdev, int vf, u8 *mac)

{

  s32 retval = 0;

  struct ixgbe_adapter *adapter = netdev_priv(netdev);

  if (!is_valid_ether_addr(mac) || (vf >= adapter->num_vfs))

      return -EINVAL;

  dev_info(pci_dev_to_dev(adapter->pdev), "setting MAC %pM on VF %d\n", mac, vf);

  dev_info(pci_dev_to_dev(adapter->pdev), "Reload the VF driver to make this change effective.\n");

  retval = ixgbe_set_vf_mac(adapter, vf, mac);

  if (retval >= 0) {

      /* pf_set_mac is used in ESX5.1 and base driver but not in ESX5.5 */

      adapter->vfinfo[vf].pf_set_mac = true;

      if (test_bit(__IXGBE_DOWN, &adapter->state)) {

          dev_warn(pci_dev_to_dev(adapter->pdev), "The VF MAC address has been set, but the PF device is not up.\n");

          dev_warn(pci_dev_to_dev(adapter->pdev), "Bring the PF device up before attempting to use the VF device.\n");

      }

  } else {

      dev_warn(pci_dev_to_dev(adapter->pdev), "The VF MAC address was NOT set due to invalid or duplicate MAC address.\n");

  }

  return retval;

}

 

static int ixgbe_set_vf_mac_addr(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)

{

  u8 *new_mac = ((u8 *)(&msgbuf[1]));

 

  if (!is_valid_ether_addr(new_mac)) {

      e_warn(drv, "VF %d attempted to set invalid mac\n", vf);

      return -1;

  }

 

  if (adapter->vfinfo[vf].pf_set_mac && memcmp(adapter->vfinfo[vf].vf_mac_addresses, new_mac, ETH_ALEN)) {

      u8 *pm = adapter->vfinfo[vf].vf_mac_addresses;

      e_warn(drv,  "VF %d attempted to set a new MAC address but it already has an administratively set MAC address %2.2X:%2.2X:%2.2X:%2.2X:%2.2X:%2.2X\n",

                            vf, pm[0], pm[1], pm[2], pm[3], pm[4], pm[5]);

      e_warn(drv, "Check the VF driver and if it is not using the correct MAC address you may need to reload the VF driver\n");

      return -1;

  }

  return ixgbe_set_vf_mac(adapter, vf, new_mac) < 0;

}

 

However, according to the VMware documentation, it should be supported. Why does this contradiction happen?


Can anyone help me find an old STM-1 transceiver (formerly Infineon, then Lantiq, now Intel) or a compatible part?


Hi,

Can anyone help me find an old STM-1 transceiver that was originally an Infineon product? Lantiq GmbH later owned this product line, and Intel now owns Lantiq, so can anyone at Intel help me obtain this part, or another that is similar or compatible in its properties? The parts in question are: Infineon AG STM-1 01769527 V50017-R60-K802 and Infineon AG STM-1 99669123 V50017_R60-K802.

  Regards,

Shahroz

Compass Pakistan

VLAN creation on Windows 10 Enterprise TP


Hello, there.

 

This morning I upgraded my fully functional Windows 8.1 Enterprise installation to Windows 10 Technical Preview. Before that, I downloaded the Intel Network Adapter Driver from this website, version 20.1, for Windows 10 64-bit. After the driver installation, I had the VLANs tab in the network card properties. However, I'm unable to create a VLAN. The network card is automatically disabled, then I receive an error message saying this (translated from French):

 

One or more vlans could not be created. Please check the adapter status and try again.


The window freezes and I have to force-close it. The 802.1 option is, of course, enabled in the Advanced options tab. The Event Viewer always shows the same error when I try to create a VLAN:


Faulting application name: NCS2Prov.exe, version: 20.1.1021.0, time stamp: 0x554ba6a4

Faulting module name: NcsColib.dll, version: 20.1.1021.0, time stamp: 0x554ba57d

Exception code: 0xc0000005

Fault offset: 0x0000000000264064

Faulting process ID: 0x19d4

Faulting application start time: 0x01d0ada33fd50576

Faulting application path: C:\Program Files\Intel\NCS2\WMIProv\NCS2Prov.exe

Faulting module path: C:\WINDOWS\SYSTEM32\NcsColib.dll

Report ID: eefb5842-9220-4bad-93d3-774828c5736e

Faulting package full name:

Faulting package-relative application ID:

 

I already tried uninstalling all the packages and drivers related to the network card. I deleted phantom network cards and then cleaned up the registry. I tried setting some compatibility options on the executable file, with no success. I tried reinstalling the driver with driver signature enforcement disabled, and tried disabling IPv4/IPv6 on the network card before trying to add a VLAN... I have tried everything I found on Google.

 

Could someone help me, please?

patches to disable anti-spoofing in VF (igb & ixgbe)


Attached please find two patches for the igb and ixgbe drivers that allow a guest instance to set the MAC address.

 

To enable this behaviour, you need to run the following on the host:

                  ip link set dev <interface_name> vf <index> spoofchk off

 

The igb patch is based on version 5.3.3.5.

The ixgbe patch is based on the Red Hat EL7 3.10.0-327.4.5.el7 kernel (i.e. linux-3.10.0-327.4.5.el7.x86_64/drivers/net/ethernet/intel/ixgbe).
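
For context, the spoofchk setting reaches the driver through the kernel's ndo_set_vf_spoofchk callback. A simplified sketch of that plumbing is below (illustrative only, with an example_ function name; it is not the attached patch, and the hardware reprogramming step is omitted):

    /* Simplified sketch: "ip link ... spoofchk off" is routed by rtnetlink to
     * the driver's ndo_set_vf_spoofchk hook, which records the per-VF setting
     * and then reprograms the anti-spoof filters in hardware. */
    static int example_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
                                           bool setting)
    {
        struct ixgbe_adapter *adapter = netdev_priv(netdev);

        if (vf >= adapter->num_vfs)
            return -EINVAL;

        adapter->vfinfo[vf].spoofchk_enabled = setting;
        /* hardware anti-spoof reprogramming omitted in this sketch */
        return 0;
    }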

 

Let me know if this makes sense.


Constant cpu usage (20%) by Intel PROSet Monitoring Service


Hello there,

 

I recently noticed that the CPU is constantly being used by the Intel PROSet Monitoring Service, at about 20%.

 

At first I noticed that svchost.exe was using 20%, but Process Explorer suggested that it was the Intel PROSet Monitoring Service that was driving svchost that high.

 

When I stop the Intel PROSet Monitoring Service, CPU usage goes back to 0%-1% when idle.

 

I see no problem at all after stopping the service.

 

Can you guys tell me the purpose of this service and why it would constantly use 20% of my CPU?

 

French guy here, so go easy on the writing criticism!

 

Thanks.
