
Which Intel NICs support hardware timestamps (hardstamps)?


I am trying to build a precise packet-capturing solution and therefore need hardware support. Some resources on the Internet say their authors are using Intel hardware with hardware timestamp support, but I cannot find a list of devices, or data sheets, that indicate such support.
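
For reference (not from the original post): on Linux, ethtool reports whether a NIC and its driver expose hardware timestamping, which is a quick way to vet a candidate adapter; eth0 below is a placeholder interface name.

$ ethtool -T eth0    # look for hardware-transmit / hardware-receive capabilities
                     # and a "PTP Hardware Clock" index other than "none"

Intel controllers with IEEE 1588/PTP support (the I210 and the 82599/X540 families, for example) expose hardware timestamping this way through the igb/ixgbe drivers.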


X540 Operating Temperature


Hi,

We are interested in using the X540 Twinville Dual Port 10GbE MAC/PHY in our application. The marketing data sheet

 

Intel® Ethernet Controllers and PHYs

 

lists the operating temperature at 0-55C. Yet the data sheet

 

http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/ethernet-x540-datasheet.pdf

 

on page 1188 lists a maximum case temperature of Tcase Max = 107C.

 

Please elaborate on the meaning of these two figures and the difference between them. We need the 0-70C temperature range if we are to use this part.

 

Thank you.

Unable to use 802.1x on Wired Intel cards


Hello,

 

We are currently testing the implementation of 802.1x authentication using a computer certificate on our network.

The laptops used are Dell Latitude 6220 and Dell Latitude 7240 (I218-LM and 82579LM cards) and a Targus Universal USB Docking Station. (OS: Windows 7 Enterprise SP1)

 

When testing the configuration, the laptops are not able to authenticate when I use the Intel NIC, which gives this error:

 

Wired 802.1X Authentication failed.

Network Adapter: Intel(R) Ethernet Connection I218-LM
Interface GUID: {fce65701-1056-495e-8d36-f2c7b29dd4a2}
Peer Address: CCEF48D73A6B
Local Address: ECF4BB1443D7
Connection ID: 0x9
Identity: -
User: -
Domain: -
Reason: 0x50006
Reason Text: The authenticator is no longer present
Error Code: 0x0

 

Sometimes I receive a strange "succeeded" message saying that the network does not support authentication, after which the authentication process restarts:

 

Wired 802.1X Authentication succeeded.

Network Adapter: Intel(R) Ethernet Connection I218-LM
Interface GUID: {fce65701-1056-495e-8d36-f2c7b29dd4a2}
Peer Address: CCEF48D73A6B
Local Address: ECF4BB1443D7
Connection ID: 0xa
Identity: -
User: -
Domain: -
Reason: 0x70003
Reason Text: The network does not support authentication
Error Code: 0x0

 

When switching to the embedded adapter in the Targus docking station, I'm authenticated in less than 10 seconds:

 

Wired 802.1X Authentication succeeded.

Network Adapter: Targus Giga Ethernet
Interface GUID: {2cdbb8b0-0eec-42c3-a57b-7b8bc22ab354}
Peer Address: CCEF48D73A6B
Local Address: 0050B668D39D
Connection ID: 0x1
Identity: host/PC0022110.NBB.LOCAL
User: -
Domain: -
Reason: 0x0
Reason Text: The operation was successful
Error Code: 0x0

 

I've updated the laptops to the latest BIOS available and updated the Intel drivers to the latest available, without success.

Any idea of what is causing this problem?

 

Gerald

i210 Windows 10 slow upload issue


Hi, I have a SuperMicro C7Z97-MF motherboard with an onboard Intel i210 network adapter. This computer is on a 1 Gbps Internet connection. The problem is that on Windows 10 I get around 950 Mbps download speeds from pretty much any location, but upload performance seems to seriously degrade as latency rises. With only 45 ms of latency, the maximum upload speed seems to be about 35 Mbps.

 

This issue does not occur when I tested with a live Ubuntu CD. With Ubuntu the download and upload was over 900 Mbps to any location.

 

On Windows 10 I have tried using the network adapter driver from the SuperMicro website for that particular board and I have also tried many other variations of newer and older Intel PROSet versions. I tried tweaking pretty much every option in the network adapter settings as well as tweaking every option in the TCP Optimizer tool. Nothing seems to get the upload speed up to its proper speed.

 

Something I did notice: when I set the Link Speed to 100 Mbps Half Duplex, the upload speed went up from 35 Mbps to about 75 Mbps to the specific server I was testing against. Obviously I want 1 Gbps Full Duplex, though, as 100 Mbps won't do.

 

Does anyone have any suggestions on what this issue could possibly be? I've been trying to figure it out for well over 2 months now and haven't gotten anywhere.

How to perform port mirroring of a VF with SR-IOV


I would like to tap VF interfaces from the hypervisor. I'm running an 82599 with SR-IOV enabled (ixgbe). Does anybody have any pointers on this?
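
For reference, the generic Linux mirroring mechanism is sketched below with placeholder interface names. One caveat: tc only copies traffic that actually traverses the named interface, and with SR-IOV, VF traffic is switched in hardware and normally bypasses the hypervisor's network stack, so whether this sees VF frames depends on the configuration.

$ tc qdisc add dev eth0 ingress                  # attach an ingress hook to the tapped interface
$ tc filter add dev eth0 parent ffff: protocol all \
      u32 match u32 0 0 \
      action mirred egress mirror dev eth1       # copy every frame to the capture interface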

Intel PRO/1000 GT Dual Port Server Adapter EEPROM test failed


An Intel PRO/1000 GT Dual Port server adapter has been installed in the server. Its diagnostics produce an "EEPROM test failed" error. The driver has been reinstalled, but the problem remains. What does this error indicate, and how can it be fixed?

Number of devices per QSFP+ port


Hi,

 

I am planning to buy a network card for a new academic research server, which will be used in an SFP+ environment. I would like to evaluate the X520-DA2 (SFP+) and the X520-QDA1 (QSFP+).

I wonder if an X520-QDA1 card can be directly linked with two different SFP+ devices simultaneously, using a Direct Attach cable such as the X4DACBL3, which provides 4x10 GbE connections.

 

If that is correct, how many network devices/interfaces will be available at OS level?

 

From the ixgbe driver's README:

 

"- 82599-based QSFP+ adapters only support 4x10 Gbps connections.

  1x40 Gbps connections are not supported. QSFP+ link partners must be

  configured for 4x10 Gbps."

 

 

Does it mean that the card provides four interfaces (i.e., eth0 ... eth3)?
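
If the 4x10 Gbps mode does enumerate that way, a quick way to confirm at OS level (assuming the ixgbe driver has claimed the ports) would be:

$ lspci | grep -ci 82599    # count the PCI functions the card exposes
$ ip -o link show           # one line per network interface the driver created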

 

Best,

 

Topo

I217-LM - no RSS possible under Win 8.1 Pro x64 - driver setting without effect


Hi,

 

I'm using an ASRock Rack EPC612D8A-TB motherboard with two onboard Intel NICs (an I210 (Ethernet 2) and an I217-LM (Ethernet)), running Windows 8.1 Pro x64 with all Windows Updates installed and the latest Intel Ethernet driver package (20.2.3001.0). My problem is with the I217-LM; according to the Intel spec sheet, it does support RSS (cf. the figure on page 2).

In the I217-LM's advanced driver settings there is an option to enable and disable RSS. However, Windows itself always reports that the I217-LM is not RSS-capable.

[attached screenshot: i217-lm-no-rss-womac.png]

The same option in the I210's driver settings has the expected effect: with the PowerShell command get-SmbClientNetworkInterface you can see the RSS capability changing from True to False and vice versa.

 

Can anyone tell me why the I217-LM is not getting the RSS feature?

 

Further system details:
CPU: Xeon E5-1620-V3

RAM: 2 x 16 GiB Crucial DDR4-2133 ECC

The motherboard does not have a later BIOS/UEFI release than the one installed.

 

Thank you very much for your help!


Expert needed: smtp.gmail.com connection timeout issue with NIC


For years, my Gmail related accounts ran perfectly. Then one day, something very strange happened.

 

The first time I noticed this problem was when mail I was sending no longer left my outbox, and this was for Gmail-related emails ONLY; all other non-Gmail accounts sent as usual. The only time emails would go through my Gmail outbox was when they were text only (non-HTML).

 

Two of my Gmail accounts are through Google Apps, and one is a basic Gmail account; two are set up in Outlook 2010 and one is in Windows Mail (Windows 7). Everything is on the same PC. Other PCs in the same room, with the same version of Outlook, tested with the same Gmail accounts, and hooked up to the same router, do not have this issue.

 

I checked Google's Apps Status Dashboard and everything checked out fine.

 

I noticed that when I disabled my Intel(R) Ethernet Connection I217-V and plugged in my wireless dongle (Linksys AE2500), emails started to leave each account's outbox again. So I unplugged the dongle and then tested with a new cable, switched router ports, etc., and still had the same issue.

 

I assumed the Intel(R) Ethernet Connection I217-V was at fault so I purchased the Intel(R) Gigabit CT Desktop Adapter and installed it yesterday.

 

With the old Intel NIC disabled, the wireless dongle unplugged, and only the new Intel NIC now in use, I still have the same problem. Every email sent specifically from any of my Gmail accounts started to collect in the outbox once again.

 

I've tried restoring my system to the earliest point I could find, resetting Internet settings, resetting network settings, turning off my firewall, disabling my virus protection, flushing DNS, etc., and no joy.

 

I went through the https://support.google.com/mail/topic/3398031?hl=en guide and did the troubleshooting with telnet, etc., only to end up with "The server errors you're experiencing are most often temporary and will resolve themselves within 24 hours. If you continue to have problems after 24 hours, visit the Gmail help forum for more assistance."
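
For reference, the basic connectivity check from that troubleshooting guide boils down to something like this (587 is the standard SMTP submission port):

$ telnet smtp.gmail.com 587    # a "220 smtp.gmail.com ESMTP" banner means the
                               # TCP path to Google's SMTP servers is working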

 

I've managed to get by with the wireless dongle, but I'd like to get this resolved.

 

I'm not sure what I should be checking next. It's so odd. Should I be checking a BIOS setting for the NIC?

VLAN creation on Windows 10 Enterprise TP


Hello there.

 

This morning I upgraded my fully functional Windows 8.1 Enterprise installation to Windows 10 Technical Preview. Before that, I downloaded the Intel Network Adapter Driver from this website, version 20.1, for Windows 10 64-bit. After the driver installation, I had the VLANs tab in the network card properties. However, I'm unable to create a VLAN. The network card is automatically disabled, then I receive an error message saying this (translated from French):

 

One or more VLANs could not be created. Please check the adapter status and try again.


The window freezes and I have to force-close it. The 802.1 option is, of course, enabled in the Advanced options tab. The Event Viewer always shows the same error when I try to create a VLAN:


Faulting application name: NCS2Prov.exe, version: 20.1.1021.0, timestamp: 0x554ba6a4

Faulting module name: NcsColib.dll, version: 20.1.1021.0, timestamp: 0x554ba57d

Exception code: 0xc0000005

Fault offset: 0x0000000000264064

Faulting process ID: 0x19d4

Faulting application start time: 0x01d0ada33fd50576

Faulting application path: C:\Program Files\Intel\NCS2\WMIProv\NCS2Prov.exe

Faulting module path: C:\WINDOWS\SYSTEM32\NcsColib.dll

Report ID: eefb5842-9220-4bad-93d3-774828c5736e

Faulting package full name:

Faulting package-relative application ID:

 

I already tried uninstalling all the packages and drivers related to the network card. I deleted phantom network adapters and then cleaned up the registry. I tried setting some compatibility options on the executable in question, with no success. I tried reinstalling the driver with driver signature enforcement disabled, and tried disabling IPv4/IPv6 on the network card before trying to add a VLAN... I tried everything I found on Google.

 

Could someone help me, please?

Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds


Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, has huge 6000+ DPC latency spikes every few seconds.

 

My specs:

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous driver version, 19.1, and the problem is just gone.

Intel 82579V with Windows 10


Hello,

 

last week I tried to install Windows 10 and noticed that the current driver from here

Download Network Adapter Driver for Windows® 10

does not support the VLAN feature. Because I need the VLAN function, it is not possible for me to use Windows 10 at the moment.

I have found another entry in the community from September (Intel 82579V issues on Windows 10) where it is said that ANS and VLAN will be supported in future releases.

Is there any suggested timeline for such a release?

 

Best Regards

 

Sven

Unable to update i350-T4 NIC flash.


I've been trying to update the flash on an Intel i350-T4 NIC I got off Amazon, but for some reason bootutil won't let me.

 

First, when I just run bootutil, it shows all four ports on the card to be in "FLASH not present" mode.

 

When I run "bootutil -nic=1 -fe" it says the command has succeeded and to reboot to enable flash. After a reboot nothing has changed and trying to update the flash with "bootutil -nic=1 -up=pxe -file=bootimg.flb" results in a "Flash is not enabled on port 1" error (using the Intel bootutil) or an "Adaptor port is not bootable on port 1" error is using an IBM or Dell version of bootutil.

 

Trying to execute "bootutil -nic=1 -bootenable=pxe" with either the Dell or IBM bootutil gives the same "Adaptor port is not bootable on port 1" error. Using the Intel bootutil for the same command gives a "Found discrete ROM in the flash for NIC 1" error.

 

I'm at a loss as to how to update the flash. I tried to get the Cisco version of bootutil to see if that worked any better, but the Cisco support site is a mess and won't let me log in to download their tools.

 

Does anybody have a suggestion for another version of bootutil to use, or anything else to try, to update the flash on this card? I tried contacting the Amazon vendor to find out exactly which system manufacturer's server this card was taken out of, but haven't had any luck yet.

X710 Ethernet Adapter not Being Brought Up by IXGBE Driver


Greetings Wired Ethernet,

 

So I have two X710-DA4 adapters installed in two different servers (different motherboards in each), and I am experiencing the same problem bringing up the interfaces on both.

 

I'm running SLES 11 SP3 on both. (I believe this issue would also exist on RHEL or SLES 11 SP4, but I'm still waiting to verify.)

 

lspci | grep Eth output:

 

01:00.0 Ethernet controller: Intel Corporation Device 1572 (rev 01)

01:00.1 Ethernet controller: Intel Corporation Device 1572 (rev 01)

01:00.2 Ethernet controller: Intel Corporation Device 1572 (rev 01)

01:00.3 Ethernet controller: Intel Corporation Device 1572 (rev 01)

42:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

42:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

42:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

42:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

 

Device 1572 is the X710-DA4 interfaces, but ifconfig -a only shows the onboard 1G I350 interfaces.

 

I've updated to ixgbe driver 4.1.5; when I rmmod and insmod ixgbe, dmesg only reports:

 

[  101.916366] Disabling lock debugging due to kernel taint

[  101.918805] Intel(R) 10 Gigabit PCI Express Network Driver - version 4.1.5

[  101.918809] Copyright (c) 1999-2015 Intel Corporation.

[ 1370.304497] Intel(R) 10 Gigabit PCI Express Network Driver - version 4.1.5

[ 1370.304503] Copyright (c) 1999-2015 Intel Corporation.
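
One detail worth stating plainly: device ID 1572 is the X710, which is serviced by the i40e driver; ixgbe covers the 82598/82599/X540-class controllers, so loading ixgbe will never bind these ports. A quick check, using the bus address from the lspci output above:

$ lspci -k -s 01:00.0    # "Kernel driver in use:" should name i40e once it is bound
$ modprobe i40e          # load i40e if it is installed
$ dmesg | tail           # watch for i40e probing the 1572 devices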

 

I also found two versions of the NVM Update Tool (1.24.33.08 and 1.25.20.12).

 

Both show the following output:

 

Num Description                            Device-Id B:D   Adapter Status

=== ====================================== ========= ===== ====================

01) Intel(R) I350 Gigabit Network Connecti 8086-1521 66:00 Update not available

02) Intel(R) Ethernet Converged Network Ad 8086-1572 01:00 Access error

 

I looked at some of the BIOS PCI settings, but I don't know which ones to tune or which might hint at what is happening. A verbose look at lspci for those interfaces shows that the PCI device is training up at Gen3 x8 successfully:

 

01:00.0 Ethernet controller: Intel Corporation Device 1572 (rev 01)

        Subsystem: Intel Corporation Device 0001

        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

        Latency: 0, Cache Line Size: 64 bytes

        Interrupt: pin A routed to IRQ 11

        Region 0: Memory at 383ffe800000 (64-bit, prefetchable) [size=8M]

        Region 3: Memory at 383fff818000 (64-bit, prefetchable) [size=32K]

        Expansion ROM at ab280000 [disabled] [size=512K]

        Capabilities: [40] Power Management version 3

                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)

                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-

        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

                Address: 0000000000000000  Data: 0000

                Masking: 00000000  Pending: 00000000

        Capabilities: [70] MSI-X: Enable- Count=129 Masked-

                Vector table: BAR=3 offset=00000000

                PBA: BAR=3 offset=00001000

        Capabilities: [a0] Express (v2) Endpoint, MSI 00

                DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported-

                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-

                        MaxPayload 256 bytes, MaxReadReq 512 bytes

                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-

                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Latency L0 <2us, L1 <16us

                        ClockPM- Surprise- LLActRep- BwNot-

                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+

                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

                LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+

                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-

                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB

                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-

                         Compliance De-emphasis: -6dB

                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+

                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-

        Capabilities: [100 v2] Advanced Error Reporting

                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

        Capabilities: [140 v1] Device Serial Number 8c-7f-43-ff-ff-ed-e0-00

        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

                ARICap: MFVC- ACS-, Next Function: 1

                ARICtl: MFVC- ACS-, Function Group: 0

        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

                IOVCap: Migration-, Interrupt Message Number: 000

                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+

                IOVSta: Migration-

                Initial VFs: 32, Total VFs: 32, Number of VFs: 32, Function Dependency Link: 00

                VF offset: 16, stride: 1, Device ID: 154c

                Supported Page Size: 00000553, System Page Size: 00000001

                Region 0: Memory at 0000383fff600000 (64-bit, prefetchable)

                Region 3: Memory at 0000383fff9a0000 (64-bit, prefetchable)

                VF Migration: offset: 00000000, BIR: 0

        Capabilities: [1a0 v1] Transaction Processing Hints

                Device specific mode supported

                No steering table available

        Capabilities: [1b0 v1] Access Control Services

                ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

                ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

        Capabilities: [1d0 v1] #19

 

 

 

Any help or ideas on next steps, or on data collection that might be useful, would be greatly appreciated.

Intel X540-T1 10GigE NIC works, but extremely slow, high latency on Ubuntu 14.04 LTS


I have a new Intel X540-T1 network adapter. I installed the latest ixgbe-4.1.5 driver from Intel. The card is able to get an IP address from DHCP. However, viewing websites and installing packages with apt-get is extremely slow, and BitTorrent just doesn't connect. For example, I downloaded Webmin with apt-get to see if it could be of help. It took a few tries before apt-get was finally able to resolve the domain names, then it took a few minutes to download ~20 MB. Normally a download like that would take a minute or so (I'm connected to a fast university LAN).

Any thoughts? Might some parameters for the ixgbe module help, and if so, which?
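
For reference, two things stand out in the output below: ifconfig shows the interface without a RUNNING flag and zero RX/TX packets, and ethtool -i reports driver version 3.15.1-k (the in-kernel driver) even though modinfo shows 4.1.5 on disk, so the freshly built module may not be the one actually loaded. The usual first checks would be:

$ ethtool p1p1                    # confirm "Speed: 10000Mb/s" and "Link detected: yes"
$ ethtool -S p1p1 | grep -i err   # look for growing error counters
$ sudo rmmod ixgbe && sudo modprobe ixgbe   # reload, then recheck which version ethtool -i reports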

$ uname -a

Linux localhost 3.13.0-66-generic #108-Ubuntu SMP Wed Oct 7 15:20:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

 

$ ifconfig p1p1

p1p1 Link encap:Ethernet HWaddr a0:36:9f:75:5c:ca 

  UP BROADCAST MULTICAST MTU:1500 Metric:1

  RX packets:0 errors:0 dropped:0 overruns:0 frame:0

  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

 

$ ethtool -i p1p1

driver: ixgbe

version: 3.15.1-k

firmware-version: 0x8000037c

bus-info: 0000:01:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: no

 

$ sudo lspci -vvnns 01:00.0

01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

  Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]

  Physical Slot: 2

  Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0

  Interrupt: pin A routed to IRQ 26

  Region 0: Memory at f2200000 (64-bit, prefetchable) [size=2M]

  Region 4: Memory at f2400000 (64-bit, prefetchable) [size=16K]

  Expansion ROM at fb100000 [disabled] [size=512K]

  Capabilities: [40] Power Management version 3

  Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold-)

  Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-

  Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

  Address: 0000000000000000 Data: 0000

  Masking: 00000000 Pending: 00000000

  Capabilities: [70] MSI-X: Enable+ Count=64 Masked-

  Vector table: BAR=4 offset=00000000

  PBA: BAR=4 offset=00002000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+

  RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-

  MaxPayload 256 bytes, MaxReadReq 512 bytes

  DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-

  LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <1us, L1 <8us

  ClockPM- Surprise- LLActRep- BwNot-

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

  LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-

  Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-

  Compliance De-emphasis: -6dB

  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [140 v1] Device Serial Number a0-36-9f-ff-ff-75-5c-ca

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 0

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

  IOVCap: Migration-, Interrupt Message Number: 000

  IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+

  IOVSta: Migration-

  Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00

  VF offset: 128, stride: 2, Device ID: 1515

  Supported Page Size: 00000553, System Page Size: 00000001

  Region 0: Memory at 00000000fb280000 (64-bit, non-prefetchable)

  Region 3: Memory at 00000000fb180000 (64-bit, non-prefetchable)

  VF Migration: offset: 00000000, BIR: 0

  Capabilities: [1d0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: ixgbe

 

$ lsmod | grep ixg

ixgbe 233377 0

dca 15130 2 igb,ixgbe

ptp 18933 2 igb,ixgbe

mdio 13807 1 ixgbe

 

$ modinfo ixgbe

filename: /lib/modules/3.13.0-66-generic/kernel/drivers/net/ethernet/intel/ixgbe/ixgbe.ko

version: 4.1.5

license: GPL

description: Intel(R) 10 Gigabit PCI Express Network Driver

author: Intel Corporation, <linux.nics@intel.com>

srcversion: 9781CEF8A3110F93FF9DBA8

alias: pci:v00008086d000015ADsv*sd*bc*sc*i*

alias: pci:v00008086d00001560sv*sd*bc*sc*i*

alias: pci:v00008086d00001558sv*sd*bc*sc*i*

alias: pci:v00008086d0000154Asv*sd*bc*sc*i*

alias: pci:v00008086d00001557sv*sd*bc*sc*i*

alias: pci:v00008086d0000154Fsv*sd*bc*sc*i*

alias: pci:v00008086d0000154Dsv*sd*bc*sc*i*

alias: pci:v00008086d00001528sv*sd*bc*sc*i*

alias: pci:v00008086d000010F8sv*sd*bc*sc*i*

alias: pci:v00008086d0000151Csv*sd*bc*sc*i*

alias: pci:v00008086d00001529sv*sd*bc*sc*i*

alias: pci:v00008086d0000152Asv*sd*bc*sc*i*

alias: pci:v00008086d000010F9sv*sd*bc*sc*i*

alias: pci:v00008086d00001514sv*sd*bc*sc*i*

alias: pci:v00008086d00001507sv*sd*bc*sc*i*

alias: pci:v00008086d000010FBsv*sd*bc*sc*i*

alias: pci:v00008086d00001517sv*sd*bc*sc*i*

alias: pci:v00008086d000010FCsv*sd*bc*sc*i*

alias: pci:v00008086d000010F7sv*sd*bc*sc*i*

alias: pci:v00008086d00001508sv*sd*bc*sc*i*

alias: pci:v00008086d000010DBsv*sd*bc*sc*i*

alias: pci:v00008086d000010F4sv*sd*bc*sc*i*

alias: pci:v00008086d000010E1sv*sd*bc*sc*i*

alias: pci:v00008086d000010F1sv*sd*bc*sc*i*

alias: pci:v00008086d000010ECsv*sd*bc*sc*i*

alias: pci:v00008086d000010DDsv*sd*bc*sc*i*

alias: pci:v00008086d0000150Bsv*sd*bc*sc*i*

alias: pci:v00008086d000010C8sv*sd*bc*sc*i*

alias: pci:v00008086d000010C7sv*sd*bc*sc*i*

alias: pci:v00008086d000010C6sv*sd*bc*sc*i*

alias: pci:v00008086d000010B6sv*sd*bc*sc*i*

depends: ptp,dca,vxlan

vermagic: 3.13.0-66-generic SMP mod_unload modversions

parm: InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)

parm: IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)

parm: MQ:Disable or enable Multiple Queues, default 1 (array of int)

parm: DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)

parm: RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)

parm: VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default=8) (array of int)

parm: max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)

parm: VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)

parm: InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)

parm: LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)

parm: LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)

parm: LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)

parm: LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)

parm: LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)

parm: FdirPballoc:Flow Director packet buffer allocation level:

  1 = 8k hash filters or 2k perfect filters

  2 = 16k hash filters or 4k perfect filters

  3 = 32k hash filters or 8k perfect filters (array of int)

parm: AtrSampleRate:Software ATR Tx packet sample rate (array of int)

parm: FCoE:Disable or enable FCoE Offload, default 1 (array of int)

parm: LRO:Large Receive Offload (0,1), default 1 = on (array of int)

parm: allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)

parm: dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)

parm: vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)


XL710 priority queues


I have an XL710 VSI that runs in a VF and uses RSS queuing across multiple queues on input. I want to create a separate priority queue that will receive only certain frame types, e.g. LACP. Is the best way to achieve this as follows:

1. Create a second VSI on this VF (with the same MAC address) with one receive queue.

2. Allocate an L2 filter on the PF, as part of that VSI creation, that will filter for the desired frames.

Will the desired frames then be received in the second VSI's receive queue?

Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Unix driver unwanted behavior when no SFP is attached to the interface


Hello,

 

 

We are currently running a POC to migrate our infrastructure to 10Gb/s. We use RHEL 6.7 on HP DL360p servers. The 10Gb/s cards are Intel Corporation 82599ES 10-Gigabit SFI/SFP+.

 

The ixgbe driver we use is version 4.0.1-k.

 

It seems that when we bring up a 10Gb/s interface (ifconfig ethX up) without an SFP module connected to it, the server's load average increases (by about 0.5) and we encounter many freezes while using SSH.

 

We notice a Unix events/X process in uninterruptible state, as follows:

for x in `seq 1 1 100`; do ps -eo state,pid,cmd | grep "^D"; echo "----"; sleep 0.1; done

D60 [events/9]

----

D60 [events/9]

----

D60 [events/9]

 

If we shut the interface down (ifconfig ethX down), the problem disappears.

 

We did the same tests with the 4.1.2 driver version; we don't have freezes anymore, but we still have a process in uninterruptible state (this time it's the ixgbe process):

for x in `seq 1 1 100`; do ps -eo state,pid,cmd | grep "^D"; echo "----"; sleep 0.1; done

D 61855 [ixgbe]

----

D 61855 [ixgbe]

 

 

When we bring the interface up with an SFP module connected (even without fiber attached), we don't see the problem with any driver version.

 

 

It looks like there is some kind of blocking behavior that occurs when we bring up a 10Gb/s interface without an SFP connected (maybe some protocol advertisement, or something like that).

 

Do you have any clue about this behavior? I know there is no reason to bring up an interface without an SFP connected, but still, it might reveal some side effects.
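
For reference, one way to see where such a thread is blocked is to dump its kernel stack, using the PID from the ps output above:

$ ps -eo pid,comm | grep -i -e events -e ixgbe   # find the stuck kernel thread's PID
$ sudo cat /proc/61855/stack                     # show the kernel call chain it is blocked in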

 

 

Kind regards,

 

Yann

Intel x520 Data Transfers


After speaking with an Intel rep, I learned that this NIC does not have a built-in DMA engine.

 

Does it provide some other data transfer mechanism that allows programming the controller with a source address, destination address, and buffer size (i.e., number of bytes) to selectively transfer large chunks of data using the X520's resources without impacting CPU performance?

Driver installation for Intel i219-V Ethernet Controller on ASUS H170i Plus D3 motherboard


I'm having a major spin cycle installing drivers for the i219-V controller chip with Server 2012 R2. The correct driver appears to be Intel LAN Driver V20.2.3001.0 for Windows 8.1 64-bit.


ASUS doesn't "support" 2012 R2, so the ASUS installation application errors out with an unsupported operating system message. The Intel application doesn't see the controller as an Intel product (I assume because it is new). The "Update Driver" function in Windows Server's Device Manager doesn't find the chip in the device database. Any suggestions?

x520 10GbE ethtool


I'm using an Intel® Ethernet Server Adapter X520-SR1 in Ubuntu 12.04, kernel 3.16.0-52, to receive 10GbE packets. The ixgbe driver module is version 3.19.1-k.

 

When I run the command ethtool -g eth6 I see the following:

Ring parameters for eth6:

Pre-set maximums:

RX:       4096

RX Mini:  0

RX Jumbo: 0

TX:       4096

Current hardware settings:

RX:       4096

RX Mini:  0

RX Jumbo: 0

TX:       512

 

I bumped the RX descriptors up from 512 to 4096 to try to reduce packet loss.

 

However, I'm a bit confused about why "RX Jumbo" says the pre-set maximum is 0. I'm currently sending packets that are ~4 kB, so they fall into the jumbo frame realm. I increased the MTU from 1500 to 9000 with ifconfig eth6 mtu 9000. It's working fine (but with some drops here and there). The docs seem to indicate that jumbo frames up to 9k are supported by the X520 series.

 

Should I worry about ethtool reporting 0 RX Jumbo descriptors? If so, do I just need a newer ixgbe driver or an ethtool update?
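
For what it's worth, a zero "RX Jumbo" maximum is normal here: ixgbe does not implement the separate mini/jumbo ring classes that ethtool can report for some other drivers, so jumbo frames simply use the ordinary RX rings. The ring and MTU changes themselves look like this (interface name as above):

$ sudo ethtool -G eth6 rx 4096 tx 4096   # grow both descriptor rings
$ sudo ip link set dev eth6 mtu 9000     # equivalent to ifconfig eth6 mtu 9000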
