Channel: Intel Communities : Discussion List - Wired Ethernet

Please help with receive setup of an 82574-based NIC


I am developing a miniport device driver for an 82574L-based NIC on Windows CE 5.0, using NDIS 5.1. Transmission setup works and the NIC transmits OK. I set up receive as the documentation dictates:

  1. Allocated a region of memory for the legacy receive descriptor list using NdisMAllocateSharedMemory and set up a descriptor ring.
  2. Set up receive buffers of an appropriate size, allocated pointers to these buffers, and stored them in each descriptor in the descriptor ring.
  3. Programmed the descriptor base address register with the address of the descriptor ring.
  4. Set the length register to the size of the descriptor ring.

However, when I process receive-related interrupts, the memory buffers pointed to by the allocations I got from NDIS are filled with 0s. The diagnostic registers show that data was received, the Receive Data FIFO Packet Count register indicates how many packets were received, and the RX FIFO PBM region looks good. So I must be doing something wrong when setting up my RX registers that prevents RX packets from being handled. I will gladly provide my code to anybody who can help me.


What's the difference between the in-kernel Ethernet drivers and Intel drivers like i40e, ixgbe, e1000e, igb?


We are in the process of building a new release of the Seagate Linux OS. I'm scratching my head over whether to pick the in-kernel stock drivers or go with the latest Intel drivers.

It's a CentOS based OS, so it's rpm based.

The target drivers that I'm looking at are: i40e, ixgbe, e1000e, igb

I have systems with adapters that need either e1000e and/or igb/ixgbe, whereas i40e would be needed for the HBA cards.
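For comparison, modinfo can show the version string of the in-tree module versus an out-of-tree build (a sketch; the .ko path below is illustrative, since the Intel source RPMs typically install under updates/):

$ modinfo -F version e1000e
$ modinfo -F version /lib/modules/$(uname -r)/updates/drivers/net/ethernet/intel/e1000e/e1000e.ko

Broadly, the in-kernel drivers are patched and tested together with the distro kernel, while the out-of-tree Intel releases are newer but must be rebuilt after every kernel update.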

Please advise.

 

/home/kk

Ethernet card used in the TSN demo


Hi,

 

Recently I came across a TSN demo from NI using an Intel I210 Ethernet card. Here is a link to the demo video:

 

02 NIWeek 2016 Day 2 TSN - YouTube

 

Please let me know if anyone has any information about the Ethernet card used in the demo that supports TSN features.

 

Thanks,

Praveen Bajantri

X550BT2 does not work with IXGBE 4.3.15.


Hi there.

After installing "Network Adapter Driver for PCI-E Intel® 10 Gigabit Ethernet Network Connections under Linux, Version 4.3.15 (Latest)", the X550BT2 does not work: as the console output below shows, ifconfig -a lists the same interfaces before and after loading the 4.3.15 ixgbe module, so no interface is ever created for the X550.

 

Website:

 

https://downloadcenter.intel.com/download/14687/Network-Adapter-Driver-for-PCI-E-Intel-10-Gigabit-Ethernet-Network-Connections-under-Linux-

This download is valid for the product(s) listed below.

"Intel® Ethernet Controller X550-BT2"

 

 

Status

 

Linux Kernel Version: 3.9.0

Target Device: X550BT2

 

 

Console output:

 

$ lspci -d 8086:

01:00.0 Ethernet controller: Intel Corporation Device 1562 (rev 01)

01:00.1 Ethernet controller: Intel Corporation Device 1562 (rev 01)

 

$ lspci -nd 8086:

01:00.0 0200: 8086:1562 (rev 01)

01:00.1 0200: 8086:1562 (rev 01)

 

$ ifconfig -a

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX

          inet addr:192.168.32.139  Bcast:192.168.32.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:3018 errors:0 dropped:10 overruns:0 frame:0

          TX packets:1453 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:313400 (306.0 KiB)  TX bytes:231368 (225.9 KiB)

          Interrupt:54 Base address:0xb000

 

lo        Link encap:Local Loopback

          LOOPBACK  MTU:65536  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

$ insmod ./ixgbe.ko

Intel(R) 10 Gigabit PCI Express Network Driver - version 4.3.15

Copyright (c) 1999-2015 Intel Corporation.

 

$ ifconfig -a

eth0      Link encap:Ethernet  HWaddr XX:XX:XX:XX:XX:XX

          inet addr:192.168.32.139  Bcast:192.168.32.255  Mask:255.255.255.0

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:3018 errors:0 dropped:10 overruns:0 frame:0

          TX packets:1453 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:313400 (306.0 KiB)  TX bytes:231368 (225.9 KiB)

          Interrupt:54 Base address:0xb000

 

lo        Link encap:Local Loopback

          LOOPBACK  MTU:65536  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
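If it helps diagnose this, any probe messages the ixgbe driver logged while claiming (or failing to claim) device 8086:1562 can be pulled from the kernel log:

$ dmesg | grep -i ixgbe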

Patch for spec file for out-of-tree i40e driver - 1.5.16


Hi,

 

The sed pattern used to determine the kernel version is broken: it ends up stripping too much off.

For example, /lib/modules/2.6.32-504.16.2.el6.x86_64 gets stripped back to 2.6.32.

 

This is not the intent of the sed pattern, and it causes install problems because /lib/modules/2.6.32 doesn't exist.
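To make the failure concrete, here is a quick console check (run against the example path above). In the original bracket expression the sequence \-\+ is parsed as a degenerate range between two literal backslashes rather than a literal hyphen, so the match stops at the first '-':

$ echo /lib/modules/2.6.32-504.16.2.el6.x86_64 | sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\-\+]*\).*/\1/'
2.6.32

$ echo /lib/modules/2.6.32-504.16.2.el6.x86_64 | sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\+-]*\).*/\1/'
2.6.32-504.16.2.el6.x86_64

Moving '-' to the end of the bracket expression makes it a literal, so the full kernel release string survives.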

 

Patch

--- i40e.spec.orig	2016-08-16 16:19:01.808859847 +0000
+++ i40e.spec	2016-08-16 16:15:00.031131141 +0000

@@ -69,7 +69,7 @@

if [ "%{pcitable}" != "/dev/null" ]; then

echo "original pcitable saved in /usr/local/share/%{name}";

fi

-for k in $(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\-\+]*\).*/\1/' $FL) ;

+for k in $(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\+-]*\).*/\1/' $FL) ;

do

d_drivers=/lib/modules/$k
d_usr=/usr/local/share/%{name}/$k

@@ -90,7 +90,7 @@

done

 

# Check if kernel version rpm was built on IS the same as running kernel

-BK_LIST=$(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\-\+]*\).*/\1/' $FL)

+BK_LIST=$(sed 's/\/lib\/modules\/\([0-9a-zA-Z_\.\+-]*\).*/\1/' $FL)

MATCH=no

for i in $BK_LIST

do

X710 network link is disconnected


I have been experiencing NIC drops on multiple servers across our organization for over 6 months. I have had a number of cases open with Dell and Microsoft, unsure whether our 10 Gb switches, the OS, or the NIC hardware was causing the issue. I can confidently say that the common factor in all cases is the X710 network card, as we have now switched our 10 Gb switches out for Cisco and we are still seeing this issue.

The system event log shows an event 27 for "Intel(R) Ethernet Converged Network Adapter X710 #X": Network link is disconnected.

There is no rhyme or reason as to when this happens, but I can guarantee I see this event at least once per week, often more.

I have a case open with Dell, again, and they had me update the firmware and drivers on the cards to firmware family 17.5.11 and driver 1.3.115.0 (provider Intel, driver date 3/22/2016).

These are 2 PCIe cards installed in a Dell PowerEdge R430.

During troubleshooting I disabled the following NIC features:

* Encapsulated Task Offload

* IPv4 Checksum Offload

* Large Send Offload V2 (IPv4)

* Large Send Offload V2 (IPv6)

* TCP Checksum Offload (IPv4)

* TCP Checksum Offload (IPv6)

* UDP Checksum Offload (IPv4)

* UDP Checksum Offload (IPv6)

* Virtual Machine Queues

 

 

Since making the changes to the 4 10 Gb NICs on 1 of my 2 2012 R2 Hyper-V hosts, I have gone 3 weeks without an Event 27 NIC drop.

I did NOT make the change on my other 2 hosts; 1 hasn't dropped in 3 weeks, and the other was good for 2 weeks but has now been dropping the last 4 days in a row, sometimes multiple times a day.

 

I have done hours and hours of research and just don't know whether I need all these offloads disabled, or what will happen to my virtual machines' performance in my cluster if we turn off offloading.

Is there a driver update for these cards so we can enable offloading again, or is this the only way these cards work? Does the disconnect happen because of load on the NIC, and is that why it is so random (it shuts itself down if it gets too much traffic)?

 

Thanks in advance for any insight into this problem.

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), the creation of a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC pair fails.

 

Drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever the group creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up appears telling me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some part of the configuration does get done.

Under Windows 7 SP1 x64 the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

Intel X710 (X710DA4) link Down


Hi!

 

 

 

Need help!

 

I have the X710DA4 and X710DA2 adapters, and I am using Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT LR SFP+ modules.

There is a problem connecting to a Juniper switch: the link is down in 1G mode between the devices.

 

Trying to turn off auto-negotiation and/or change the speed to 1000 gives:

@ubuntu:~$ sudo ethtool -s p2p1 autoneg off speed 1000 duplex full

Cannot get current device settings: No such device

 

  not setting speed

  not setting duplex

  not setting autoneg

 

@ubuntu:~$ sudo ethtool -i p2p1

driver information:

driver: i40e

version: 1.5.16

firmware-version: 4.42 0x80001921 0.0.0

bus-info: 0000:0a:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

 

 

 

 

This is what I get in the case of a loop between the interfaces on the X710:

@ubuntu:~$ sudo ethtool p2p1

        Supported ports: [ FIBRE ]

        Supported link modes:   1000baseT/Full

                                            10000baseT/Full

        Supported pause frame use: Symmetric

        Supports auto-negotiation: No

        Advertised link modes:  10000baseT/Full

        Advertised pause frame use: No

        Advertised auto-negotiation: No

        Speed: 10000Mb/s

        Duplex: Full

        Port: FIBRE

        PHYAD: 0

        Transceiver: external

        Auto-negotiation: off

        Supports Wake-on: d

        Wake-on: d

        Current message level: 0x0000000f (15)

                               drv probe link timer

        Link detected: yes

 

If possible, I need a solution to bring the link between the X710 and the Juniper switch up and keep it working.


Packet Loss after Windows Updates


Hi *.

 

After the recent Windows Updates, I am experiencing about 20% packet loss on all network connections through a Windows Server 2008 R2 Hyper-V VM running on a Windows Server 2016 TP5 host. Were there any changes to the Intel drivers recently?

 

/Klaus

I need drivers (Windows 10 x64) for creating VLANs!


I can't find them. Can you help me? Thanks!

XL710-QDA2 directly connect two computers - no switch, is this possible?


I have two computers, each with a XL710-QDA2, currently running Windows Server 2016 Tech Preview 5.

Is it possible to directly connect these two with the QSFP+ cable?

This does not seem to be working for me.

i350 - enable critical manageability session


Hi,

 

I recently bought an i350 Ethernet controller. I would like to enable the "critical manageability session" of the card, but I can't find any documentation that explains how to do that. The only document related to my question is https://www-ssl.intel.com/content/dam/www/public/us/en/documents/guides/maintaining-the-ethernet-link-to-the-BMC.pdf#pag… which says to refer to the i350 documentation. I did... but I was not able to find any concrete information in it. Any hint?

 

Thanks!

Intel Teaming - Adapter link down


Dear Friends,

Problem: We are facing frequent adapter link down and teaming failure/switch-over messages (information/warning) in the Windows event viewer.

Our architecture: We have a total of 14 PCs in a network with Intel teaming cards installed on them. Of the 14, 10 machines are Dell: 7 are Dell OptiPlex 3010, 2 are OptiPlex 3020, and 1 is a Dell Precision. The cards used for teaming are:

1. Intel PRO/1000 PT Server Adapter (EXPI9400PT)
2. Intel Gigabit CT Desktop Adapter (EXPI9301CT)

We have configured the teaming type as "Switch Fault Tolerant", with the PT card set as primary and the CT as secondary. We are getting teaming-failure messages for link down on these 10 Dell machines at random, and we have not been able to identify the exact root cause. Along with these errors we are also getting blue screens on the machines at random. So we would like to know how to trace the problem, and whether there is any link between these messages and the blue screens.

Example messages from the event log:

iANSMiniport 11 None Adapter link down: Intel(R) Gigabit CT Desktop Adapter
iANSMiniport 11 None Adapter link down: Intel(R) Gigabit CT Desktop Adapter
e1qexpress 27 None "Intel(R) Gigabit CT Desktop Adapter Network link is disconnected."
iANSMiniport 11 None Adapter link down: Intel(R) PRO/1000 PT Server Adapter
iANSMiniport 11 None Adapter link down: Intel(R) PRO/1000 PT Server Adapter
Service Control Manager 7036 None The Distributed Link Tracking Client service entered the stopped state.
Service Control Manager 7036 None The Distributed Link Tracking Client service entered the running state.

Please guide.

VMLB ANS Team on Hyper-V 2012 R2 with VMQ / VMDq


Hi,

 

We just bought some new Dell PowerEdge R730 servers with the Intel Ethernet 10G 4P X540/I350 network adapter, for use with Hyper-V 2012 R2.

 

According to this document, it's possible to create a VMLB team with Intel ANS teaming and enable VMDq, so Hyper-V will make use of VMQ.

 

I installed the latest version of Intel PROSet, created a VMLB team with Intel ANS, and enabled Virtual Machine Queues:

 

PS C:\Windows\system32> Get-IntelNetAdapterSetting

   Name: TEAM: Team #0 - Intel(R) Ethernet 10G 4P X540/I350 rNDC

DisplayName            DisplayValue    RegistryKeyword         RegistryValue
-----------            ------------    ---------------         -------------
Low Latency Interrupts Disabled        EnableLLI               0
Profile                Custom Settings PerformanceProfile      1
Flow Control           Rx & Tx Enabled *FlowControl            3
Header Data Split      Disabled        *HeaderDataSplit        0
Interrupt Moderation   Enabled         *InterruptModeration    1
IPv4 Checksum Offload  Rx & Tx Enabled *IPChecksumOffloadIPv4  3
Jumbo Packet           Disabled        *JumboPacket            1514
Large Send Offload ...                 LSO                     1
Large Send Offload ... Enabled         *LsoV2IPv6              1
Maximum Number of R... 16              *MaxRssProcessors       16
Preferred NUMA node    System Default  *NumaNodeId             65535
Maximum Number of R... 8 Queues        *NumRssQueues           8
NDIS QOS               Disabled        DCB                     0
Recv Segment Coales... Disabled        *RSCIPv4                0
Recv Segment Coales... Disabled        *RSCIPv6                0
Receive Side Scaling   Enabled         *RSS                    1
RSS load balancing ... NUMAScalingS... *RSSProfile             4
Speed & Duplex         Auto Negotia... *SpeedDuplex            0
SR-IOV                 Disabled        *SRIOV                  0
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv4 3
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv6 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv4 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv6 3
Virtual Machine Queues Enabled         *VMQ                    1
Wake on Magic Packet   Disabled        *WakeOnMagicPacket      0
Wake on Pattern Match  Enabled         *WakeOnPattern          1
Wake on Magic Packe... Disabled        EnablePME               0
Interrupt Moderatio... Adaptive        ITR                     65535
Log Link State Event   Enabled         LogLinkStateEvent       51
Wake on Link Settings  Disabled        WakeOnLink              0
SR-IOV VPorts          0               *NumVFs                 0
VMQ VPorts             63              VMQueues                63
Receive Buffers        512             *ReceiveBuffers         512
Starting RSS CPU       0               *RssBaseProcNumber      0
Transmit Buffers       512             *TransmitBuffers        512
Low Latency Interru... {}              LLIPorts                {}
Locally Administere... 246E96071AE0    NetworkAddress          246E96071AE0

   Name: TEAM: Team #0 - Intel(R) Ethernet 10G 4P X540/I350 rNDC #2

DisplayName            DisplayValue    RegistryKeyword         RegistryValue
-----------            ------------    ---------------         -------------
SR-IOV VPorts          0               *NumVFs                 0
VMQ VPorts             63              VMQueues                63
Receive Buffers        512             *ReceiveBuffers         512
Starting RSS CPU       0               *RssBaseProcNumber      0
Transmit Buffers       512             *TransmitBuffers        512
Low Latency Interrupts Disabled        EnableLLI               0
Profile                Custom Settings PerformanceProfile      1
Flow Control           Rx & Tx Enabled *FlowControl            3
Header Data Split      Disabled        *HeaderDataSplit        0
Interrupt Moderation   Enabled         *InterruptModeration    1
IPv4 Checksum Offload  Rx & Tx Enabled *IPChecksumOffloadIPv4  3
Jumbo Packet           Disabled        *JumboPacket            1514
Large Send Offload ...                 LSO                     1
Large Send Offload ... Enabled         *LsoV2IPv6              1
Maximum Number of R... 16              *MaxRssProcessors       16
Preferred NUMA node    System Default  *NumaNodeId             65535
Maximum Number of R... 8 Queues        *NumRssQueues           8
NDIS QOS               Disabled        DCB                     0
Recv Segment Coales... Disabled        *RSCIPv4                0
Recv Segment Coales... Disabled        *RSCIPv6                0
Receive Side Scaling   Enabled         *RSS                    1
RSS load balancing ... NUMAScalingS... *RSSProfile             4
Speed & Duplex         Auto Negotia... *SpeedDuplex            0
SR-IOV                 Disabled        *SRIOV                  0
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv4 3
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv6 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv4 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv6 3
Virtual Machine Queues Enabled         *VMQ                    1
Wake on Magic Packet   Disabled        *WakeOnMagicPacket      0
Wake on Pattern Match  Enabled         *WakeOnPattern          1
Wake on Magic Packe... Disabled        EnablePME               0
Interrupt Moderatio... Adaptive        ITR                     65535
Log Link State Event   Enabled         LogLinkStateEvent       51
Wake on Link Settings  Disabled        WakeOnLink              0
Locally Administere... 246E96071AE0    NetworkAddress          246E96071AE0
Low Latency Interru... {}              LLIPorts                {}

 

 

But even after a reboot, VMQ is not available:

 

PS C:\Windows\system32> Get-NetAdapterVMQ

Name                           InterfaceDescription              Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
                                                                                                        Queues
----                           --------------------              ------- ---------------- ------------- ---------------
NIC4                           Intel(R) Gigabit 4P X540/I35...#2 False   0:0              8             0
NIC3                           Intel(R) Gigabit 4P X540/I350 ... False   0:0              8             0
NIC2                           TEAM: Team #0 - Intel(R) Eth...#2 False   0:0              16            0
NIC1                           TEAM: Team #0 - Intel(R) Ether... False   0:0              16            0

 

I've created a virtual switch and bound it to the ANS team NIC.

Can anyone point me to what I'm missing?

X710-DA4 - SR-IOV not working correctly


Host OS: Ubuntu 16.04 LTS

Hypervisor: Xen 4.6.0

Kernel version: 4.4.0-22-generic

Driver version: i40e-1.5.18

NVM version: 5.02

Guest OS: Windows Server 2012 R2

Guest Driver version: 20.7.1

 

I'm trying to forward the ports of the card to a guest Windows VM.  An example of forwarding just one port:

 

# echo 1 > /sys/class/net/ens2f0/device/sriov_numvfs

# /sbin/ip link set ens2f0 vf 0 mac 02:00:00:00:00:00

# xl pci-assignable-add 02:02.0

Added to Xen VM .cfg:

pci = ['0000:02:02.0']

 

All of that works and the VM starts fine and sees the card.  Xen reports that it added it:

 

[26326.586765] i40e 0000:02:00.0 ens2f0: adding 68:05:ca:33:0a:59 vid=0

[26326.586774] i40e 0000:02:00.0 ens2f0: adding 68:05:ca:33:0a:59 vid=1

[26326.595995] i40e 0000:02:00.0: Allocating 1 VFs.

[26326.697778] pci 0000:02:02.0: [8086:154c] type 00 class 0x020000

[26326.697973] pci 0000:02:02.0: Max Payload Size set to 256 (was 128, max 2048)

[26329.893582] i40e 0000:02:00.0: Setting MAC 02:00:00:00:00:00 on VF 0

[26329.965339] i40e 0000:02:00.0: Reload the VF driver to make this change effective.

[26366.200750] pciback 0000:02:02.0: seizing device

[26366.200815] pciback 0000:02:02.0: enabling device (0000 -> 0002)

[26472.867101] xen_pciback: vpci: 0000:02:02.0: assign to virtual slot 0

[26472.867643] pciback 0000:02:02.0: registering for 4

 

I install the drivers and Windows 'sees' the card. The problem is that the reported state goes into a cycle:

 

Connected @ 10GB

Connected @ 40GB

Disconnected

 

When this happens, in dmesg on the host:

 

[26624.137415] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26635.075640] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26645.903762] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26656.669642] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26667.904191] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26679.013381] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26689.966331] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26701.028906] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26712.372668] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26723.154159] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26733.935320] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26744.685130] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

[26756.075844] i40e 0000:02:00.0: VF 0 failed opcode 11, retval: -10

 

This same issue happens using the built-in kernel i40e driver and also i40e-1.5.18. Am I doing something wrong?
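For reference, the complete VF re-creation cycle looks like this (a sketch built from the commands above; the initial "echo 0" is my addition and destroys the existing VFs, so the MAC gets set on a freshly created VF before pciback seizes it, which is what the "Reload the VF driver" hint seems to ask for):

# echo 0 > /sys/class/net/ens2f0/device/sriov_numvfs
# echo 1 > /sys/class/net/ens2f0/device/sriov_numvfs
# /sbin/ip link set ens2f0 vf 0 mac 02:00:00:00:00:00
# xl pci-assignable-add 02:02.0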


Adapter X710 DA2 and SFP+ LR module do not work in 1G link mode


Hi!

I have the X710 DA2 and X710 DA4 adapters, using an LR SFP+ module (Intel DUAL RATE 1G/10G SFP+ LR (bailed) FTLX1471D3BCV-IT), and the Ethernet link does not come up in 1G mode between the devices.

 

Also, I cannot dump the settings from the EEPROM of the SFP module; in response I receive:

Cannot get module EEPROM information: Operation not supported
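(That message is what "ethtool -m" reports when module EEPROM access is unsupported; for reference, the invocation would be something like:)

$ sudo ethtool -m p2p1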

 

When I try to turn off auto-negotiation or change the speed to 1000, I also get:

Cannot get current device settings: No such device

  not setting speed

  not setting duplex

  not setting autoneg

 

Below is the driver information:

driver: i40e

version: 1.5.16

firmware-version: 4.42 0x80001921 0.0.0

bus-info: 0000:0a:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

Interface status:

p2p1      Link encap:Ethernet  HWaddr 3c:fd:fe:9d:40:38

          inet addr:xxx.xxx.xxx.xxx  Bcast:xxx.xxx.xxx.xxx  Mask:255.255.255.0

          inet6 addr: fe80::3efd:feff:fe9d:4038/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:4358 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:728252 (728.2 KB)

 

Settings for p2p1:

 

        Supported ports: [ FIBRE ]

        Supported link modes:   1000baseT/Full

                                10000baseT/Full

        Supported pause frame use: Symmetric

        Supports auto-negotiation: No

        Advertised link modes:  10000baseT/Full

        Advertised pause frame use: No

        Advertised auto-negotiation: No

        Speed: 10000Mb/s

        Duplex: Full

        Port: FIBRE

        PHYAD: 0

        Transceiver: external

        Auto-negotiation: off

        Supports Wake-on: d

        Wake-on: d

        Current message level: 0x0000000f (15)

                              drv probe link timer

        Link detected: yes

 

Is it possible for these SFP+ modules to work with the X710 DA2 adapter in 1G mode?

If so, why do I have these conflicts with the modules?

STAG support on XL710?


Does the Intel XL710 4x10GE card support the processing of VLAN frames that are tagged with STAGs (TPID == 0x88a8) instead of CTAGs (TPID == 0x8100)? In my lab, I have tried a number of things to enable the processing of STAG frames on the XL710 board, including setting up the Ethernet interface with commands such as the one below, but no luck: the interface still does not see the VLAN frames with a TPID of 0x88a8. Frames tagged with CTAGs are processed just fine, however.

 

$ sudo ip link add link enp1s0f0 name enp1s0f0.4004 type vlan protocol 802.1ad id 4004
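(As a sanity check that the sub-interface really was created with S-TAG semantics, the protocol can be confirmed with "ip -d link show"; the output line below is illustrative. If it reports protocol 802.1Q instead, the sub-interface is matching C-TAG frames only.)

$ ip -d link show enp1s0f0.4004 | grep vlan
    vlan protocol 802.1ad id 4004 <REORDER_HDR>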

 

Also, I have tried setting up the board in both promiscuous mode and true promiscuous mode. Again, no joy: the board still refuses to detect/receive a VLAN-tagged frame if the TPID is 0x88a8 instead of 0x8100. By the way, for true promiscuous mode, I am using this command:

 

$ sudo ethtool --set-priv-flags enp1s0f0 vf-true-promisc-support on
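(The current state of the private flags can be listed with the corresponding query:)

$ sudo ethtool --show-priv-flags enp1s0f0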

 

I have the latest Intel firmware installed on this board and am using the latest Intel kernel module for this board.

 

$ sudo ethtool -i enp1s0f0

[sudo] password for nknuth:

driver: i40e

version: 1.5.16

firmware-version: 5.04 0x800024ca 0.0.0

expansion-rom-version:

bus-info: 0000:01:00.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

Alas, when I query the features of this Ethernet board, it seems clear that STAG frames are not supported.

 

$ sudo ethtool -k enp1s0f0

...

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

...

 

Any help is appreciated. It really is a showstopper for my company not to be able to process STAGs on the XL710 boards. We were using the 2-port 10GE boards that use the ixgbe kernel module, and STAG frames were processed on those boards without issue.

 

Thanks in advance for any clarity here.

 

Nathan

Bad Reset on I217 Linux Driver


I have a Dell Optiplex 9020 with an Intel i7-4790 processor and 32GB RAM. It is set up with dual-boot Microsoft Windows 7 Enterprise and Ubuntu 16.04 Linux.

If I start from power-down and boot into Linux, the I217-LM Ethernet chip works fine. Same if I boot Windows. But if I have booted into Windows and then reboot into Linux without powering down, the Ethernet does not work. Linux is never able to connect to the network and get an IP address using DHCP. In the upper-right corner of the screen there is an icon which spins, showing that it is trying to connect. If I give the command "sudo dhclient -v eno1", I get this output:

 

Listening on LPF/eno1/98:90:96:db:2b:49

Sending on   LPF/eno1/98:90:96:db:2b:49

Sending on   Socket/fallback

DHCPDISCOVER on eno1 to 255.255.255.255 port 67 interval 3 (xid=0xaeaac17)

DHCPDISCOVER on eno1 to 255.255.255.255 port 67 interval 4 (xid=0xaeaac17)

 

This repeats until I type ctrl-C to kill the program.

 

If I reboot into Windows, the Ethernet works again. So it appears that at boot the Linux driver does not properly reset the chip if it has already been used, while powering down does reset it. The Windows driver, by contrast, seems to set up the chip correctly.

 

This problem occurred with the version 3.3.2 e1000e driver that came with Ubuntu. I downloaded and installed 3.3.4 from the Intel website, and it has the same problem. I did some Googling, and it appears that others have seen this behavior for a while, but I don't know whether it has ever been reported to Intel.
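One workaround that may be worth trying until the reset is fixed (a sketch, assuming no other adapter depends on the module): force a re-probe of the e1000e driver after the warm reboot so the chip initialization runs again, then retry DHCP:

$ sudo modprobe -r e1000e && sudo modprobe e1000e
$ sudo dhclient -v eno1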

 

Thanks for looking into this.

