Channel: Intel Communities : Discussion List - Wired Ethernet

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), the creation of a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC fails.

 

Drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever the group-creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up tells me that group creation has failed.

However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some of the configuration does seem to get done.

Under Windows 7 SP1 x64 the exact same setup worked flawlessly for months, so Windows 10 and/or the driver are the likely culprits.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku


82599EN repeats Link Up and Down


Hi all

 

We use the ixgbe driver for the 82599EN on Linux; the driver version is 4.1.2.

The driver was downloaded from https://downloadcenter.intel.com/download/25463/Network-Adapter-Driver-for-PCI-E-Intel-10-Gigabit-Ethernet-Network-Connections-under-Linux-.

 

[test condition]:

  OS: Linux kernel 3.10.31-ltsi

  Driver: Network Adapter Driver for PCI-E* Intel® 10 Gigabit Ethernet Network Connections under Linux*, ver. 4.1.2

  Device: Intel 82599EN

  PHY I/F: SFP module (FINISAR FTLX8573D3BTL) + MMF fiber cable

  EEPROM data: 82599EN_SFI_NO_MNG_4.40.bin (modified MAC address)

[steps] (scripted below):

  1. insmod mdio.ko

  2. insmod ixgbe.ko

  3. ifconfig eth2 up

  4. ifconfig eth2 192.168.2.100

  5. Connect the MMF cable to the 10GbE tester (SPIRENT C1).
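For reference, the bring-up steps above collected into a minimal shell script; eth2, the address and the module names are taken from this post, so adjust for your system:

#!/bin/sh
# Steps 1-2: load the MDIO helper module and the ixgbe driver
insmod mdio.ko
insmod ixgbe.ko
# Steps 3-4: bring the interface up and assign the test address
ifconfig eth2 up
ifconfig eth2 192.168.2.100
# Step 5 is physical: connect the MMF cable to the tester, then watch the kernel log
dmesg | tail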

 

[result]

<step2 result>

------------------------------------------------------------------------------

root@socfpga:/usr/fsim# insmod ixgbe.ko
ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 3.13.10-k
ixgbe: Copyright (c) 1999-2013 Intel Corporation.
ixgbe_probe
PCI: enabling device 0000:01:00.0 (0140 -> 0142)
PCI resource c0000000 + 00020000
ixgbe 0000:01:00.0: Multiqueue Disabled: Rx Queue count = 1, Tx Queue count = 1
ixgbe 0000:01:00.0: (PCI Express:5.0GT/s:Width x4) 00:a0:c9:12:34:56
ixgbe 0000:01:00.0: MAC: 2, PHY: 12, SFP+: 5, PBA No: FFFFFF-0FF
ixgbe 0000:01:00.0: PCI-Express bandwidth available for this card is not sufficient for optimal performance.
ixgbe 0000:01:00.0: For optimal performance a x8 PCI-Express slot is required.
ixgbe 0000:01:00.0: Intel(R) 10 Gigabit Network Connection
platform leds.7: Driver leds-gpio requests probe deferral

---------------------------------------------------------------------------------

 

<result after step4>

-----------------------------------------------------------------------------------------

root@socfpga:/usr/fsim# ifconfig eth2
eth2      Link encap:Ethernet  HWaddr 00:a0:c9:12:34:56
          inet addr:192.168.2.100  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

------------------------------------------------------------------------------------------

 

<when MMF cable is connected to Tester>

---------------------------------------------------------------------------------------------

root@socfpga:/usr/fsim# ixgbe 0000:01:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe 0000:01:00.0 eth2: NIC Link is Down
ixgbe 0000:01:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe 0000:01:00.0 eth2: NIC Link is Down
ixgbe 0000:01:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe 0000:01:00.0 eth2: NIC Link is Down
ixgbe 0000:01:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe 0000:01:00.0 eth2: NIC Link is Down
ixgbe 0000:01:00.0 eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX
ixgbe 0000:01:00.0 eth2: NIC Link is Down

--------------------------------------------------------------------------------------------------

 

Our issue is that the link repeatedly goes up and down. How should we resolve this?

 

BR,

Taira Suzuki

Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01) supports FCOE


Hi,

 

I have an Intel X520 card, and I am trying to set up open-fcoe on a Red Hat 6.x host. I am unable to find the WWPN, and I am not sure whether my card is CNA capable (i.e. whether it supports FCoE) or whether I need to load additional drivers.

 

It is an Intel X520 10 Gigabit card.

 

Below is the lspci output

 

0c:00.0 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01)

0c:00.1 Ethernet controller: Intel Corporation 82598EB 10-Gigabit AF Dual Port Network Connection (rev 01)

0e:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)

0e:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
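In case it helps, here is a rough sketch of how FCoE capability and the WWPN are usually checked with the open-fcoe userspace tools on RHEL 6; the interface name eth2 is an assumption, and the package/service names are the standard fcoe-utils ones:

# Install the open-fcoe userspace tools
yum install -y fcoe-utils lldpad

# Create a config for the chosen interface (eth2 is a placeholder)
cp /etc/fcoe/cfg-ethx /etc/fcoe/cfg-eth2

# Start the DCB and FCoE services
service lldpad start && chkconfig lldpad on
service fcoe start && chkconfig fcoe on

# If the port comes up as an FCoE initiator, these should show it
fcoeadm -i                               # adapter/interface info
cat /sys/class/fc_host/host*/port_name   # WWPN(s), if an fc_host instance was created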

 

Thanks

Packet Loss after Windows Updates


Hi *.

 

After recent Windows updates, I have been experiencing about 20% packet loss on all network connections through a Windows Server 2008 R2 Hyper-V VM running on a Windows Server 2016 TP5 host. Were there any changes to the Intel drivers recently?

 

/Klaus

Where do I get the tool for initializing an empty NVM (Flash) for the 82574?


I've got a new design that has two 82574L controllers. The first is integrated into a Q7 1.2 module running an Atom processor. The second is connected via PCIe to the Q7 module; it has an empty 4G flash NVM sitting beside it that keeps failing the checksum on boot. I've heard there is a tool for initializing the NVM that is separate from bootutil32. Does anyone know where that program resides, or have directions on how to initialize the NVM that's already soldered on the board?

VMLB ANS Team on Hyper-V 2012 R2 with VMQ / VMDq


Hi,

 

We just bought some new Dell PowerEdge R730 servers with the Intel Ethernet 10G 4P X540/I350 network adapter for use with Hyper-V 2012 R2.

 

According to this document, it's possible to create a VMLB team with Intel ANS teaming and enable VMDq, so that Hyper-V will make use of VMQ.

 

I installed the latest version of Intel PROSet, created a VMLB team with Intel ANS, and enabled Virtual Machine Queues:

 

PS C:\Windows\system32> Get-IntelNetAdapterSetting

   Name: TEAM: Team #0 - Intel(R) Ethernet 10G 4P X540/I350 rNDC

DisplayName            DisplayValue    RegistryKeyword         RegistryValue
-----------            ------------    ---------------         -------------
Low Latency Interrupts Disabled        EnableLLI               0
Profile                Custom Settings PerformanceProfile      1
Flow Control           Rx & Tx Enabled *FlowControl            3
Header Data Split      Disabled        *HeaderDataSplit        0
Interrupt Moderation   Enabled         *InterruptModeration    1
IPv4 Checksum Offload  Rx & Tx Enabled *IPChecksumOffloadIPv4  3
Jumbo Packet           Disabled        *JumboPacket            1514
Large Send Offload ...                 LSO                     1
Large Send Offload ... Enabled         *LsoV2IPv6              1
Maximum Number of R... 16              *MaxRssProcessors       16
Preferred NUMA node    System Default  *NumaNodeId             65535
Maximum Number of R... 8 Queues        *NumRssQueues           8
NDIS QOS               Disabled        DCB                     0
Recv Segment Coales... Disabled        *RSCIPv4                0
Recv Segment Coales... Disabled        *RSCIPv6                0
Receive Side Scaling   Enabled         *RSS                    1
RSS load balancing ... NUMAScalingS... *RSSProfile             4
Speed & Duplex         Auto Negotia... *SpeedDuplex            0
SR-IOV                 Disabled        *SRIOV                  0
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv4 3
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv6 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv4 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv6 3
Virtual Machine Queues Enabled         *VMQ                    1
Wake on Magic Packet   Disabled        *WakeOnMagicPacket      0
Wake on Pattern Match  Enabled         *WakeOnPattern          1
Wake on Magic Packe... Disabled        EnablePME               0
Interrupt Moderatio... Adaptive        ITR                     65535
Log Link State Event   Enabled         LogLinkStateEvent       51
Wake on Link Settings  Disabled        WakeOnLink              0
SR-IOV VPorts          0               *NumVFs                 0
VMQ VPorts             63              VMQueues                63
Receive Buffers        512             *ReceiveBuffers         512
Starting RSS CPU       0               *RssBaseProcNumber      0
Transmit Buffers       512             *TransmitBuffers        512
Low Latency Interru... {}              LLIPorts                {}
Locally Administere... 246E96071AE0    NetworkAddress          246E96071AE0

   Name: TEAM: Team #0 - Intel(R) Ethernet 10G 4P X540/I350 rNDC #2

DisplayName            DisplayValue    RegistryKeyword         RegistryValue
-----------            ------------    ---------------         -------------
SR-IOV VPorts          0               *NumVFs                 0
VMQ VPorts             63              VMQueues                63
Receive Buffers        512             *ReceiveBuffers         512
Starting RSS CPU       0               *RssBaseProcNumber      0
Transmit Buffers       512             *TransmitBuffers        512
Low Latency Interrupts Disabled        EnableLLI               0
Profile                Custom Settings PerformanceProfile      1
Flow Control           Rx & Tx Enabled *FlowControl            3
Header Data Split      Disabled        *HeaderDataSplit        0
Interrupt Moderation   Enabled         *InterruptModeration    1
IPv4 Checksum Offload  Rx & Tx Enabled *IPChecksumOffloadIPv4  3
Jumbo Packet           Disabled        *JumboPacket            1514
Large Send Offload ...                 LSO                     1
Large Send Offload ... Enabled         *LsoV2IPv6              1
Maximum Number of R... 16              *MaxRssProcessors       16
Preferred NUMA node    System Default  *NumaNodeId             65535
Maximum Number of R... 8 Queues        *NumRssQueues           8
NDIS QOS               Disabled        DCB                     0
Recv Segment Coales... Disabled        *RSCIPv4                0
Recv Segment Coales... Disabled        *RSCIPv6                0
Receive Side Scaling   Enabled         *RSS                    1
RSS load balancing ... NUMAScalingS... *RSSProfile             4
Speed & Duplex         Auto Negotia... *SpeedDuplex            0
SR-IOV                 Disabled        *SRIOV                  0
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv4 3
TCP Checksum Offloa... Rx & Tx Enabled *TCPChecksumOffloadIPv6 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv4 3
UDP Checksum Offloa... Rx & Tx Enabled *UDPChecksumOffloadIPv6 3
Virtual Machine Queues Enabled         *VMQ                    1
Wake on Magic Packet   Disabled        *WakeOnMagicPacket      0
Wake on Pattern Match  Enabled         *WakeOnPattern          1
Wake on Magic Packe... Disabled        EnablePME               0
Interrupt Moderatio... Adaptive        ITR                     65535
Log Link State Event   Enabled         LogLinkStateEvent       51
Wake on Link Settings  Disabled        WakeOnLink              0
Locally Administere... 246E96071AE0    NetworkAddress          246E96071AE0
Low Latency Interru... {}              LLIPorts                {}

 

 

But even after a reboot, VMQ is not available:

 

PS C:\Windows\system32> Get-NetAdapterVMQ

Name                           InterfaceDescription              Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
                                                                                                        Queues
----                           --------------------              ------- ---------------- ------------- ---------------
NIC4                           Intel(R) Gigabit 4P X540/I35...#2 False   0:0              8             0
NIC3                           Intel(R) Gigabit 4P X540/I350 ... False   0:0              8             0
NIC2                           TEAM: Team #0 - Intel(R) Eth...#2 False   0:0              16            0
NIC1                           TEAM: Team #0 - Intel(R) Ether... False   0:0              16            0

 

I've created a virtual switch and bound it to the ANS team NIC.

Can anyone point me to what I'm missing?

Intel NIC drivers 19.3: huge 6000+ DPC latency spike once every few seconds


Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, causes a huge 6000+ DPC latency spike once every few seconds.

 

my specs

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version, 19.1, and the problem is gone.

Is there a Windows 7 driver for PRO/1000 PL Network Connection?


I have a Gateway M285 Tablet running Windows XP. The Windows 7 Upgrade Advisor states that the Intel PRO/1000 PL Network Connection adapter will need an updated driver from the manufacturer. Is such a driver available? I will use only wireless to connect to the Internet. Can the Windows 7 driver for the 3945ABG be used on its own for this adapter? (Yes, I know I'm a decade late, but, hey, the computer is working just fine except for the out-of-date browser [IE8].)

 

Thanks for your interest in this matter.


x710 disconnects with offloading enabled


Hello,

 

I have several 2012 R2 systems with X710 NICs that blue-screen and log network disconnects in the event logs when we have offloading turned on. If we disable offloading, the systems are stable. We've tried the Dell version of the drivers as well as the Intel drivers (v20.7). It appears to be very similar to the issue in the thread x710-DA2 iSCSI Connections Keep Disconnecting.

Is anyone else having problems with offloading on these NICs?

82579LM slow to reconnect after sleep on Windows 10 Pro


Hello

I have a Gigabyte GA-Q77M-D2H motherboard with the Intel 82579LM Ethernet chip on board, and when the system comes out of sleep it takes a minute before the LAN starts up again.

 

The system is Windows 10 Pro 64-bit,

with an i7-3770, 8 GB RAM and a Samsung SSD.

 

From power off the system boots really quickly and the LAN works just fine; however, put it into sleep and the Ethernet takes a minute or more to start up again after leaving the sleep state (yellow LED blinking).

If I go into Device Manager and disable and then re-enable the 82579LM, it connects immediately.

 

I have tweaked just about every config option in the device settings and BIOS that I think could affect the slow start-up, and nothing has helped.

I have also tested with several different generations of the Intel LAN drivers.

 

Interestingly, I have the same problem when I install Windows 7 Pro 64-bit on the system.

 

However, and this is the bit that makes little sense, I also have an ASUS P8Q77-M motherboard that is essentially the same as the Gigabyte, and it works just fine and connects to the LAN immediately after sleeping.

 

I even tried the ASUS drivers on the Gigabyte board, and the problem remained the same.

 

Any ideas, please? I have tried everything I can think of :-)

 

Ian K.

I am trying to install Windows 10 on an iSCSI disk, but during the second phase of installation the iSCSI disk becomes inaccessible during boot.


I am trying to install Windows 10 on an iSCSI disk, but during the second phase of installation the iSCSI disk becomes inaccessible during boot.

I use the latest NDIS 4 driver, version 21.0. The network card is an X540-T2.

All other OS versions work fine.

Random BSOD in Windows Server 2008 R2 due to e1r62x64.sys


Hi,

 

 

While running our application on a Windows Server 2008 R2 machine, some random BSODs are happening.

 

 

On analyzing the dump, it points to a call from e1r62x64.sys which then fails in tcpip.sys. Since tcpip.sys is a system file, we would like to understand whether there is any known bug in the NIC driver.

The NIC details:

 

 

Intel(R) Gigabit 2P I350-t Adapter.

 

 

The current driver version details on the server machine where the BSODs surfaced:

1) e1r62x64.sys - 12.13.27.0 (Date: 6/5/2015)

2) tcpip.sys - 6.1.7601.22950 (Date: 2/5/2015)

3) NDIS.sys - 6.1.7601.23235 (Date: 10/13/2015)

4) afd.sys - 6.1.7601.23237 (Date: 10/14/2015)

 

 

Also, the call stack obtained from the dump file is as follows:

STACK_TEXT: 

fffff800`03650158 fffff800`01e80e69 : 00000000`0000000a 00000000`00000000 00000000`00000002 00000000`00000000 : nt!KeBugCheckEx

fffff800`03650160 fffff800`01e7fae0 : 00000000`00000000 00000000`00000000 fffffa80`481a2000 fffff800`03650640 : nt!KiBugCheckDispatch+0x69

fffff800`036502a0 fffff880`01a6606b : 00000000`00000716 fffff800`03650640 fffffa80`2138dcf0 fffff800`03650640 : nt!KiPageFault+0x260

fffff800`03650430 fffff880`01a643c6 : 00000000`71f8da89 fffffa80`2aaced70 00000000`00000000 fffffa80`2aacee38 : tcpip!TcpSegmentTcbSend+0x17b

fffff800`03650530 fffff880`01a685d9 : fffffa80`2a9d5360 fffff880`01a8c89b fffffa80`1d81ad30 00000000`000025bc : tcpip!TcpBeginTcbSend+0xa66

fffff800`036507b0 fffff880`01a69450 : fffffa80`000000ba 00060000`00000006 00000000`00000000 00000000`00004800 : tcpip!TcpTcbSend+0x1d9

fffff800`03650a30 fffff880`01a681a8 : 00000000`00000000 00000000`00000000 fffff800`03650c50 fffff800`03650d30 : tcpip!TcpEnqueueTcbSendOlmNotifySendComplete+0xa0

fffff800`03650a60 fffff880`01a6836b : fffffa80`00000004 fffffa80`1cc0a924 00000000`00004800 fffff800`03650ba0 : tcpip!TcpEnqueueTcbSend+0x258

fffff800`03650b10 fffff800`01e8e1f8 : fffff800`03650bd8 00000000`00000000 fffffa80`1d1ed7b8 fffff800`03650bd8 : tcpip!TcpTlConnectionSendCalloutRoutine+0x1b

fffff800`03650b40 fffff880`01a6922a : fffff880`01a68350 fffffa80`1d81af20 00000000`00000502 fffff880`01a68e00 : nt!KeExpandKernelStackAndCalloutEx+0xd8

fffff800`03650c20 fffff880`0436b7be : fffffa80`3934f3e0 fffffa80`1fd537e0 fffffa80`1fd537e0 fffffa80`1fd537e0 : tcpip!TcpTlConnectionSend+0x7a

fffff800`03650c90 fffff880`0436f7b4 : fffffa80`1ffa4b50 fffff880`01a70000 fffffa80`389cc280 fffff800`03650f00 : afd!AfdTLTPacketsSend+0x47e

fffff800`03650e20 fffff880`0436fe83 : fffffa80`48c0b110 00000000`00000001 fffffa80`1c78a380 fffffa80`1ffa4cf8 : afd!AfdTPacketsSend+0x64

fffff800`03650e90 fffff880`01a4e9c0 : 00000000`00010000 fffffa80`1b4930a0 fffffa80`1f705e90 fffff800`03651158 : afd!AfdCommonRestartTPacketsSend+0x173

fffff800`03650ec0 fffff880`01a64fa8 : 00000000`00000000 fffff800`03651158 fffffa80`1b4930a0 fffff880`00000000 : tcpip!TcpCompleteTcbSend+0x40

fffff800`03650ef0 fffff880`01a6299a : fffffa80`2138dcf0 fffffa80`38854000 fffffa80`1b0a3400 fffffa80`1b6202a0 : tcpip!TcpTcbReceive+0x3ec

fffff800`03651000 fffff880`01a637fc : fffffa80`1cc457ac fffffa80`1b4930a0 fffffa80`1b4930a0 00000000`00000000 : tcpip!TcpMatchReceive+0x1fa

fffff800`03651150 fffff880`01a4667c : fffffa80`1b4946f0 00000000`00000000 fffffa80`00000004 fffffa80`1c745740 : tcpip!TcpPreValidatedReceive+0x49c

fffff800`03651220 fffff880`01a57712 : fffffa80`1c7c3010 fffffa80`1c783ba0 fffffa80`1c780006 00000000`00000006 : tcpip!IpFlcReceivePreValidatedPackets+0x5bc

fffff800`03651380 fffff800`01e8e1f8 : 00000000`00000000 00000000`00004800 fffff800`0200ecc0 00000000`00000000 : tcpip!FlReceiveNetBufferListChainCalloutRoutine+0xa2

fffff800`036513d0 fffff880`01a57e42 : fffff880`01a57670 fffff880`01a6959a 00000000`00000002 fffffa80`1b0c0d00 : nt!KeExpandKernelStackAndCalloutEx+0xd8

fffff800`036514b0 fffff880`00ee20eb : fffffa80`1c773010 00000000`00000000 fffffa80`1c6bd1a0 00000000`0000f800 : tcpip!FlReceiveNetBufferListChain+0xb2

fffff800`03651520 fffff880`00eabad6 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : NDIS!ndisMIndicateNetBufferListsToOpen+0xdb

fffff800`03651590 fffff880`00e25ac1 : fffffa80`1c6bd1a0 00000000`00000002 00000000`00000004 00000000`00000000 : NDIS!ndisMDispatchReceiveNetBufferLists+0x1d6

fffff800`03651a10 fffff880`05382f33 : fffffa80`481a2000 00000000`00000004 fffffa80`1a00caa0 fffffa80`481a2dc0 : NDIS!NdisMIndicateReceiveNetBufferLists+0xc1

fffff800`03651a60 fffff880`05383101 : 00000000`00000001 fffffa80`1a00caa0 fffffa80`1c6c3e20 fffffa80`481a2000 : e1r62x64+0x23f33

fffff800`03651aa0 fffff880`053746e6 : 00000000`00000000 fffffa80`481a2040 00000000`00008000 ffff821b`00000000 : e1r62x64+0x24101

fffff800`03651b20 fffff880`053744a0 : fffffa80`1b8b7730 ffff0001`00000000 ffff0001`00000000 fffff880`00eab488 : e1r62x64+0x156e6

fffff800`03651b90 fffff880`05376628 : 00000000`00000000 ffff0001`00000000 00000000`00000000 00000000`00000000 : e1r62x64+0x154a0

fffff800`03651c00 fffff880`00e25951 : 00000000`001bbf1e 00000000`00000000 00000000`00000000 fffff800`0364b080 : e1r62x64+0x17628

fffff800`03651c40 fffff800`01e8d1dc : fffffa80`1cbb9918 00000023`00000000 00000000`00000000 fffff800`02000e80 : NDIS!ndisInterruptDpc+0x151

fffff800`03651cd0 fffff800`01e795ca : fffff800`02000e80 fffff800`0200ecc0 00000000`00000000 fffff880`00e25800 : nt!KiRetireDpcList+0x1bc

fffff800`03651d80 00000000`00000000 : fffff800`03652000 fffff800`0364c000 fffff800`03651d40 00000000`00000000 : nt!KiIdleLoop+0x5a

 

 

Please let us know if we are using an older version of the driver that leads to this error. If so, please specify where to get the latest.

 

 

Also, from the Windows Event Viewer we see Microsoft-Windows-Kernel-General events, sometimes with event ID 12, sometimes 41, and sometimes even 1.

 

 

thanks.

Muralidhar

Intel I219 Ethernet does not work after waking from sleep


I am using a Gigabyte H170-H3DP motherboard with an i5-6500 CPU. When the computer wakes up from sleep, the Intel I219-V(2) Ethernet card doesn't work: it drops to 10 Mbps and cannot receive any packets until the next restart of the OS.

I am running Windows 10 (version 1511) x64 with the latest updates and Intel Network Adapter Driver v21.0 installed.

I tried disabling the power-saving options of the Ethernet card; however, that didn't help.

(Attached screenshots: LAN2.PNG, LAN3.PNG)


I217 support removed from the UEFI Ethernet Driver


I tried using the 20.7 UEFI Ethernet driver, E7006X3.EFI, but I see that I217 support has been removed. Why?

 

I need I217-LM support in this driver.

 

Going back, I see the last time it was included in the driver was version 20.0, E6604X3.EFI.

 

Can I217 support please be added back into future builds?

 

Thank You


Cannot control XL710 VF interrupt rate



Hi,

 

I'm tuning an XL710 VF on a FreeBSD 10 VM (driver is ixlv-1.2.11) hosted on a KVM hypervisor server (PF driver is i40e-1.4.25).

I found that for each queue of the VF the interrupt rate is very high (>10000/sec).

As described in the datasheet, I tried setting I40E_VF_ITRN from the VF driver and I40E_VPINT_RATEN from the PF driver, but this didn't work.
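For comparison only: on the Linux host side, interrupt moderation for an i40e interface is normally adjusted through ethtool rather than by writing the ITR registers directly. A minimal sketch, assuming a PF interface name of enp3s0f0 (the name and the values are placeholders, and whether each field is honoured depends on the driver version):

# Show the current interrupt-coalescing settings
ethtool -c enp3s0f0

# Turn off adaptive moderation and set a fixed coalescing interval (in microseconds)
ethtool -C enp3s0f0 adaptive-rx off adaptive-tx off rx-usecs 100 tx-usecs 100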

I have upgraded the NVM to 5.02, the latest version.

 

Could anybody help me resolve the issue?

Intel X550-T2 Card not compatible to HP 1950-24G-2SFP+-2XGT-PoE+ Switch


Hi, I have the following issue. I am using a workstation equipped with an Intel X550-T2 card and have connected both 10GBASE-T ports with Cat 7 cables to switch ports 27/28, which are also 10GBASE-T capable.

Only one port on the Intel card will connect at full speed (10 Gbps); the second port runs at 1 Gbps. I tried these things:

- building a LAN team on the workstation side in Windows Server 2012

- forcing the card and switch ports to 10GBASE-T

- building an aggregation with both switch ports

Nothing really helped. Is there a compatibility problem between the card and the switch?

I also tried activating and deactivating STP. I don't understand why only one port runs at full speed.

I also have a Supermicro-mainboard-based server in my office with two onboard Intel X550 ports, and the same problem occurs: one port is at full speed, the second is at 100 Mbps.

XL710-QDA2: TCP rx_crc_errors


We already have a working card with the same chip (Selecom) and decided to purchase another one, this time an Intel-branded XL710-QDA2.

After deploying it and upgrading the firmware and driver to the latest versions, we found that the 40 Gbit/s link is up and ICMP (ping) packets go through perfectly, but no TCP connection can be established.

Ethtool shows a lot of rx_crc_errors on this interface.
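For reference, a minimal sketch of the checks this involves; the interface name eth4 is an assumption:

# CRC-related counters reported by the driver
ethtool -S eth4 | grep -i crc

# Information about the plugged module/cable, as the driver sees it
ethtool -m eth4

# Driver, firmware and NVM versions in use
ethtool -i eth4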

We've tested it with two brand new Intel cables and got the same result.

 

What can be the reason for this?

 

i40e version 1.5.18

NVM version 5.0.4

TX packet loss when frame size is a multiple of 4 bytes under full load on 82599


Hello,

 

When running an RFC 2544 throughput test under full load (a total of 4 x 10GbE ports on 2 x 82599ES NICs in an E5-2697 v3 server) with Spirent TestCenter, we increased the frame size in 1-byte steps from 64 bytes to 1518 bytes.

There is an issue as titled: TX packet loss occurs when the frame size is a multiple of 4 bytes. Attached is a test report (from page 11).

Any idea what is causing this issue?

 

Thanks,

Michael

Please help with receive setup of an 82574-based NIC


I am developing a miniport device driver for an 82574L-based NIC on Windows CE 5.0, using NDIS 5.1. Transmit setup works and the NIC transmits OK. I set up receive as the documentation dictates:

  1. Allocated a region of memory for the legacy receive descriptor list using NdisMAllocateSharedMemory and set up a descriptor ring.
  2. Set up receive buffers of the appropriate size, allocated pointers to these buffers, and stored them in each descriptor in the descriptor ring.
  3. Programmed the descriptor base address register with the address of the descriptor ring.
  4. Set the length register to the size of the descriptor ring.

However, when I process receive-related interrupts, the memory buffers pointed to by the allocations I got from NDIS are still filled with 0s. The diagnostic registers show that data was received, the Receive Data FIFO Packet Count register indicates how many packets arrived, and the RX FIFO PBM region looks good on analysis. So I must be doing something wrong when setting up my RX registers, such that received packets are never delivered to my buffers. I will gladly provide my code to anybody who can help me.
