Channel: Intel Communities : Discussion List - Wired Ethernet

82579LM Inconsistent TCP Performance while receiving on Windows 7 Enterprise


While using iperf to test TCP performance, we found that Windows 7 machines with the 82579LM NIC show inconsistent TCP throughput: sometimes 400 Mb/s, sometimes 900 Mb/s, and nothing in between.

On some machines, we fixed this by changing the "Interrupt Moderation Rate" setting to "High"; after that, the TCP performance results were consistent.

On other machines, none of our efforts had any effect (disabling TCP offloads, etc.). A scripted way to flip these settings is sketched below.
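For anyone who wants to script these changes rather than click through Device Manager: on Windows 8 / Server 2012 and later, the NetAdapter PowerShell cmdlets can set the same properties (Windows 7 itself lacks these cmdlets, so there it has to be the adapter's Advanced tab). A minimal sketch, assuming an adapter alias of "Local Area Connection" and the display names our Intel driver exposes (both vary by driver version):

PS> # List the advanced properties the driver actually exposes
PS> Get-NetAdapterAdvancedProperty -Name "Local Area Connection"

PS> # Pin the interrupt moderation rate instead of leaving it adaptive
PS> Set-NetAdapterAdvancedProperty -Name "Local Area Connection" -DisplayName "Interrupt Moderation Rate" -DisplayValue "High"

PS> # Or switch interrupt moderation off entirely
PS> Set-NetAdapterAdvancedProperty -Name "Local Area Connection" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"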

 

Both the OS and the NIC driver have been updated to the latest versions.

 

Has anyone seen this issue?


Unable to establish link with Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection


Hi,

 

I'm looking for help establishing (and troubleshooting) a link to a SPAN in promiscuous mode on Ubuntu Server 14.04.

 

This link was up and passing packets for a few weeks. To help with an application issue, we installed the PF_RING kernel module, and since a server reboot we have not been able to establish a link. We've rolled back our changes and removed PF_RING, but we still cannot establish a link.

 

We've looked at our Gigamon SPAN and it appears to be sending data to our device, but we see no traffic on the host ('tcpdump -i em1 -vv' shows nothing).
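For reference, the basic link-reset steps we can still run from the shell (standard iproute2/ethtool commands; nothing here is specific to our setup):

$ sudo ip link set em1 down && sudo ip link set em1 up   # bounce the interface
$ sudo ethtool -r em1                                    # restart auto-negotiation
$ sudo rmmod ixgbe && sudo modprobe ixgbe                # reload the driver (takes down every ixgbe port)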

 

Any help would be greatly appreciated.

 

Jeff

 

$ ifconfig em1

em1      Link encap:Ethernet  HWaddr ec:f4:bb:**:**:**

          UP BROADCAST NOARP PROMISC MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:7180 overruns:0 frame:0

          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

 

$ sudo ethtool em1

Settings for em1:

  Supported ports: [ FIBRE ]

  Supported link modes:   1000baseT/Full

                          10000baseT/Full

  Supported pause frame use: No

  Supports auto-negotiation: Yes

  Advertised link modes:  1000baseT/Full

                          10000baseT/Full

  Advertised pause frame use: No

  Advertised auto-negotiation: Yes

  Speed: Unknown!

  Duplex: Unknown! (255)

  Port: FIBRE

  PHYAD: 0

  Transceiver: external

  Auto-negotiation: on

  Supports Wake-on: umbg

  Wake-on: g

  Current message level: 0x00000007 (7)

        drv probe link

  Link detected: no

 

$ cat /var/log/syslog | grep ixgbe

Feb 17 15:16:46 servername kernel: [    7.371724] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 3.15.1-k

Feb 17 15:16:46 servername kernel: [    7.371727] ixgbe: Copyright (c) 1999-2013 Intel Corporation.

 

$ cat /var/log/syslog | grep em1

Feb 17 15:16:46 servername kernel: [  11.789614] systemd-udevd[572]: renamed network interface eth0 to em1

Feb 17 15:16:46 servername kernel: [  14.071614] IPv6: ADDRCONF(NETDEV_UP): em1: link is not ready

Feb 17 15:16:46 servername kernel: [  15.404429] ixgbe 0000:01:00.0: registered PHC device on em1

Feb 17 15:16:46 servername kernel: [  15.509280] IPv6: ADDRCONF(NETDEV_UP): em1: link is not ready

Feb 17 15:16:47 servername kernel: [  15.580413] ixgbe 0000:01:00.0 em1: detected SFP+: 6

Feb 17 15:16:48 servername kernel: [  16.713425] device em1 entered promiscuous mode

Feb 17 15:16:48 servername kernel: [  17.498677] ixgbe 0000:01:00.0 em1: detected SFP+: 6

$ sudo ethtool -m em1

  Identifier                                : 0x03 (SFP)

  Extended identifier                      : 0x04 (GBIC/SFP defined by 2-wire interface ID)

  Connector                                : 0x07 (LC)

  Transceiver codes                        : 0x10 0x00 0x00 0x01 0x00 0x00 0x00 0x00

  Transceiver type                          : 10G Ethernet: 10G Base-SR

  Transceiver type                          : Ethernet: 1000BASE-SX

  Encoding                                  : 0x06 (64B/66B)

  BR, Nominal                              : 10300MBd

  Rate identifier                          : 0x02 (8/4/2G Rx Rate_Select only)

  Length (SMF,km)                          : 0km

  Length (SMF)                              : 0m

  Length (50um)                            : 80m

  Length (62.5um)                          : 30m

  Length (Copper)                          : 0m

  Length (OM3)                              : 300m

  Laser wavelength                          : 850nm

  Vendor name                              : Intel Corp     

  Vendor OUI                                : 00:1b:21

  Vendor PN                                : FTLX8571D3BCV-IT

  Vendor rev                                : A 

  Optical diagnostics support              : Yes

  Laser bias current                        : 26.446 mA

  Laser output power                        : 0.6639 mW / -1.78 dBm

  Receiver signal average optical power    : 0.0014 mW / -28.54 dBm

  Module temperature                        : 51.65 degrees C / 124.97 degrees F

  Module voltage                            : 3.3214 V

  Alarm/warning flags implemented          : Yes

  Laser bias current high alarm            : Off

  Laser bias current low alarm              : Off

  Laser bias current high warning          : Off

  Laser bias current low warning            : Off

  Laser output power high alarm            : Off

  Laser output power low alarm              : Off

  Laser output power high warning          : Off

  Laser output power low warning            : Off

  Module temperature high alarm            : Off

  Module temperature low alarm              : Off

  Module temperature high warning          : Off

  Module temperature low warning            : Off

  Module voltage high alarm                : Off

  Module voltage low alarm                  : Off

  Module voltage high warning              : Off

  Module voltage low warning                : Off

  Laser rx power high alarm                : Off

  Laser rx power low alarm                  : On

  Laser rx power high warning              : Off

  Laser rx power low warning                : On

  Laser bias current high alarm threshold  : 11.800 mA

  Laser bias current low alarm threshold    : 2.000 mA

  Laser bias current high warning threshold : 10.800 mA

  Laser bias current low warning threshold  : 3.000 mA

  Laser output power high alarm threshold  : 0.8318 mW / -0.80 dBm

  Laser output power low alarm threshold    : 0.1585 mW / -8.00 dBm

  Laser output power high warning threshold : 0.6607 mW / -1.80 dBm

  Laser output power low warning threshold  : 0.1995 mW / -7.00 dBm

  Module temperature high alarm threshold  : 78.00 degrees C / 172.40 degrees F

  Module temperature low alarm threshold    : -13.00 degrees C / 8.60 degrees F

  Module temperature high warning threshold : 73.00 degrees C / 163.40 degrees F

  Module temperature low warning threshold  : -8.00 degrees C / 17.60 degrees F

  Module voltage high alarm threshold      : 3.7000 V

  Module voltage low alarm threshold        : 2.9000 V

  Module voltage high warning threshold    : 3.6000 V

  Module voltage low warning threshold      : 3.0000 V

  Laser rx power high alarm threshold      : 1.0000 mW / 0.00 dBm

  Laser rx power low alarm threshold        : 0.0100 mW / -20.00 dBm

  Laser rx power high warning threshold    : 0.7943 mW / -1.00 dBm

  Laser rx power low warning threshold      : 0.0158 mW / -18.01 dBm

WGi218LM


Hello,

I would like to get the schematic or the board design files (e.g., OrCAD) for the WGi218LM.

Best regards, and thank you.

Keeping network settings after driver update?


Afternoon

 

Using PROWin32.exe downloaded from the Intel site, I can update the network drivers on my machines.

However, when I do this, both the network settings and the VLAN configuration (I use VLAN tagging on the local machine) are lost. This makes updating the drivers remotely impossible: as soon as the drivers are updated, the machine drops off the network.

 

Does anyone know of a way to persist the network config post-update, without having to visit each machine in turn...?
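In case it helps others searching: some Intel PROSet driver packages have shipped a save/restore script, SavResDX.vbs, meant for exactly this scenario. Whether your package includes it, and whether it preserves VLAN tagging across this particular update, is worth testing on one machine first; the file path below is my choice, and the exact syntax may differ by release (check the script's header).

C:\> rem Save adapter, team and VLAN settings before the update
C:\> cscript SavResDX.vbs save C:\temp\nicsettings.txt

C:\> rem ...run the driver update, then restore
C:\> cscript SavResDX.vbs restore C:\temp\nicsettings.txt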

Intel® 82579LM UDP multicast packetloss


I currently have a high-speed trading network set up, and the machines with the integrated Intel® 82579LM are experiencing severe packet loss; it started at around 2,000 packets lost per 10 minutes. I have tried all the solutions suggested in the previous threads about this issue. I have updated the BIOS (the machines we use are HP Z220/Z230). Machines that use a different model of Intel NIC don't experience this problem. I have installed the latest drivers and turned the power-saving options on the NICs down to nothing. I also increased the input and output buffer sizes to the maximum setting, which does give me a smaller packet loss rate (70 packets per 10 minutes), but that is still not good enough: this is a trading network, and we cannot afford packet loss. I have done extensive troubleshooting on our network and on the Cisco switches we use; they are all in top shape. The issue is in the Intel® 82579LM NIC itself and how it reacts to UDP multicast traffic.

 

Any suggestions I can implement to lessen the packet loss on the network? With so many of these NICs on the floor, I'm dropping roughly 900,000 packets a day, which is far too much.
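For reference, the buffer change I applied can be scripted on Windows 8 / Server 2012 and later with the NetAdapter cmdlets (on Windows 7 it is the adapter's Advanced tab; the adapter alias, the display names, and the 2048 maximum are all driver-dependent):

PS> Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers"  -DisplayValue 2048
PS> Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue 2048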

 

P.S. As a comparison: in the last 24 hours an Intel I217-LM dropped 15 packets, versus roughly 9-10k packets per 24 hours for the Intel® 82579LM.

XL710 firmware does not work?


Hi, All,

 

I assign an XL710 VF to a QEMU guest and load a modified i40evf driver in the guest, and then I encounter the following error in QEMU:

 

               "Admin queue command never completed"

 

and get the following errors logged on the host OS:

 

[ 7823.295061] pci 0000:03:02.2: enabling device (0000 -> 0002)

[ 7823.295959] i40e 0000:03:00.0: VF 2 assigned LAN VSI index 5, VSI id 8

[ 7823.446495] pci 0000:03:02.2: kvm assign device

[ 7826.579576] i40e 0000:03:00.0: VF 2 assigned LAN VSI index 5, VSI id 8

[ 7827.024165] i40e 0000:03:00.0: VF 2 assigned LAN VSI index 5, VSI id 8

[ 7842.719176] dmar: DRHD: handling fault status reg 102

[ 7842.719188] dmar: DMAR:[DMA Read] Request device [03:02.2] fault addr 6551000

DMAR:[fault reason 06] PTE Read access is not set

[ 7843.628420] i40e 0000:03:00.0: ARQ VF Error detected

[ 7843.628426] i40e 0000:03:00.0: ASQ VF Error detected

[ 7847.424745] i40e 0000:03:00.0: VF 2 assigned LAN VSI index 5, VSI id 8

[ 7847.525135] i40e 0000:03:00.0: VF 2 assigned LAN VSI index 5, VSI id 8

 

Then I unload and reload the i40e driver on the host OS and encounter the following errors:

 

[ 7856.803922] i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 0.4.21-k

[ 7856.803925] i40e: Copyright (c) 2013 - 2014 Intel Corporation.

[ 7856.812193] i40e 0000:03:00.0: f0.0 a0.0 n04.24 e800013fd

[ 7856.812196] i40e 0000:03:00.0: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.

[ 7857.034526] i40e 0000:03:00.0: FCoE capability is disabled

[ 7857.034533] i40e 0000:03:00.0: configure_lan_hmc failed: -49

[ 7857.034679] i40e: probe of 0000:03:00.0 failed with error -2

 

Debugging the i40e driver code, I found that the driver calls i40evf_asq_send_command to get the firmware and API versions, but the firmware does not write back any information (so the firmware and API versions in the message above are both 0). After rebooting the host system, it works again. This problem happens from time to time, and I can't reproduce it at will.

Before unloading and reloading the i40e driver, I issued an EMP reset by executing "echo empr > /sys/kernel/debug/i40e/0000\:03\:00.0/command", but it did not fix the problem.
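For reference, the NVM/firmware version the driver sees can be double-checked with plain ethtool (the interface name here is an assumption; substitute the XL710 port):

$ ethtool -i p1p1   # the 'firmware-version' line should match what the i40e probe message prints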

Does anyone know why this happens and how to recover from the problem? Thanks in advance!

 

I run Ubuntu 14.10 x86-64 with an updated kernel, 3.17.6-031706-generic.

I210-T1 / I350-T2 power consumption?


Hi,

 

I want to know the power consumption of the I210-T1 and I350-T2 adapters, because I'm not sure the datasheet figures are right.

 

Intel lists 0.81 W for the "Intel Ethernet Server Adapter I210-T1", but the "Intel Ethernet Server Adapter I350-T2" supposedly needs 4.40 W?

And then there is HP's "E0X95AA Intel Ethernet I210-T1 GbE", for which HP lists 3.00 W.

I believe the Intel and HP cards are nearly the same, so why the big difference?

Also, could I really run five "Intel Ethernet Server Adapter I210-T1" cards (5 × 0.81 W = 4.05 W) and still draw less power than one I350-T2?

Intel i350 NIC Teaming Question


Hi

 

I have a question surrounding the native NIC Teaming in Windows Server 2012 R2 with regards to the i350 Quad Port adapters.

 

In our Hyper-V implementation, we have 3 x quad port NICs per Hyper-V Cluster node.

 

With two of these adapters, we have balanced a Switch Independent / Dynamic (Sum of Queues mode) team across 4 of these ports for our VM switch. Here's the thing: two of these ports are on one physical adapter, and the other two are on the second physical adapter. Each port has VMQ enabled, with suitable processor core allocations for a Sum of Queues team. VMQ is required, as the team is connected to a VM switch that in turn supports a reasonable number of virtual machines. (How we inspect this allocation is sketched just below.)
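For context, this is roughly how we inspect and pin the VMQ processor allocation (standard Server 2012 R2 NetAdapter cmdlets; the port name and core numbers are examples from our layout, not a recommendation):

PS> Get-NetAdapterVmq

PS> # Example: start one team member at core 2 with up to 4 cores for its queues
PS> Set-NetAdapterVmq -Name "NIC1-Port1" -BaseProcessorNumber 2 -MaxProcessors 4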

 

The remaining 4 ports in these two adapters (2 in one card and 2 in the other) are then allocated to iSCSI MPIO use for access to our SAN. These remaining ports have RSS enabled.

 

The questions I have are as follows:

 

A) What is Intel's stance on mixing RSS and VMQ modes on different ports of the same physical adapter?

 

B) Is splitting a VMQ-enabled NIC team across ports from different physical NICs supported by Intel when using the native Windows NIC Teaming in Server 2012 R2?

 

We are seeing intermittent latency issues and packet loss within virtual machines when VMQ is enabled. Disabling VMQ does work around the issue; however, this isn't really a solution.

 

Are there any known issues with these adapters when using Virtual Machine Queues?

 

Kind Regards

 

Matt


Linux ixgbe and multiq/mqprio


We have been using the Intel X520 series for a couple of years now, and have seen an evolution in the way PFC is handled by the driver and Linux. But with our cards, we are unable to get this working in RHEL 6.4/6.5.

We want to give traffic to different destination IPs different PCP values (from the same application). The oldest implementation we used was based on a multiq qdisc, steering traffic to different queues with tc filters using the skbedit queue_mapping action; a sketch of that setup is shown below. This only worked in the beginning (2.6.18 kernels); later we had to adjust 'action queue_mapping' to 'action skbedit'. We still enable PFC (DCB) using dcbtool like we always did.
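(A sketch of that older setup, with a placeholder interface and address; the skbedit form is the post-2.6.18 syntax:)

$ # Attach a multiq qdisc, then steer one destination IP to a specific TX queue
$ tc qdisc add dev eth2 root handle 1: multiq
$ tc filter add dev eth2 parent 1: protocol ip prio 1 u32 \
      match ip dst 192.0.2.10/32 \
      action skbedit queue_mapping 2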

A big difference that I see is the number of enabled queues: at first only 8 queues were activated, but in RHEL 6.4/6.5, 71 queues are activated. In some configurations the switch even sends PFC pause frames, but these are not honored by the card, resulting in packet loss.

I have also installed RHEL 7, which has the same issues with multiq. On RHEL 7, though, I can enable the mqprio qdisc (it exists in RHEL 6.5, but I can't enable it with hardware support there, so it's useless). With mqprio, the tc filtering doesn't work anymore, and the only option I currently have is to set SO_PRIORITY on the socket in the application (which is not really an option). When that is set, PFC starts to work again and I see no packet loss.

Is there any way to have this working again in RHEL 6.4/6.5/(6.6) and configurable in RHEL 7?

Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds


Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, causes a huge 6000+ DPC latency spike once every few seconds.

 

my specs

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version, 19.1, and the problem is gone.

i218-LM code 10 after restart without cable


I have a system using an Intel i218-LM ethernet controller:

  • Fully patched (as of 16th Feb 2015) Windows 8.1 64-bit
  • Intel driver PROWinx64.exe, Version:19.5, Date:28th Oct 2014.  Appears in Windows as driver v12.12.80.19 dated 29th Sept 2014.

 

Under normal operation the LAN works correctly.  I can disconnect and re-connect the LAN cable, and each time the LAN connection is re-established.

 

If I perform a system restart with the LAN cable connected, the LAN is established correctly once Windows boots.

 

However, if I perform a system restart without a LAN cable connected, I get a Code 10 error for the i218-LM controller in Device Manager. When I then re-connect the LAN cable, the connection obviously isn't re-established, due to the Code 10 error. If, within Device Manager, I disable and then re-enable the i218-LM, the Code 10 error goes away and the connection is correctly established.
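As a stop-gap, I'm considering scripting the disable/enable cycle with Microsoft's devcon utility (ships with the WDK; the hardware ID below is a placeholder — the real one can be copied from the device's Details tab in Device Manager):

C:\> rem Restart the NIC without clicking through Device Manager
C:\> devcon restart "*VEN_8086&DEV_XXXX*"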

 

Any ideas?

Intel Ethernet Connection I217-LM (WinPE drivers)


Hi everyone,

 

We purchased a few DELL Optiplex 9020s that came with an Intel Ethernet Connection I217-LM ethernet controller, and I've been unable to find the correct WinPE x64 drivers for this adapter.

From the Download Center I found what I thought were the correct drivers. Inside the EXE I found some WinPE drivers, but apparently they aren't the right ones, as I'm not able to connect to my network when I boot into WinPE.
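For reference, this is how I'm injecting the candidate drivers into the boot image (standard DISM; the paths and image index are from my environment):

C:\> Dism /Mount-Image /ImageFile:C:\winpe\media\sources\boot.wim /Index:1 /MountDir:C:\mount
C:\> Dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\I217 /Recurse
C:\> Dism /Unmount-Image /MountDir:C:\mount /Commit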

 

Can anyone point me in the right direction to get the correct drivers please?

 

Thanks!

Regarding hardware time stamping in Intel NICs


Hi,

 

I am Kuralamudhan Ramakrishnan, working at NetApp in Munich. I am currently working on a project to do hardware timestamping between a source and a destination server using the Linux timestamping options. I am using the i40e driver (Intel(R) Ethernet Connection XL710 Network Driver - version 1.0.11-k) with an Intel network interface controller. Linux 3.18 supports bytestream timestamping for transmit (hardware TX timestamps), but it does not support receive hardware timestamping for TCP sockets. I would like to know of vendor-specific APIs that could support hardware timestamping for TCP packets (like i40e_ptp_gettime).
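For anyone reproducing this: the capability listing below is what 'ethtool -T' reports for the port, and hardware timestamps can be switched on for a quick test with linuxptp's hwstamp_ctl (the interface name is an assumption; -t 1 enables hardware TX timestamps, and -r 12 requests the PTPv2 event RX filter, one of the modes this NIC advertises):

$ ethtool -T p1p1
$ hwstamp_ctl -i p1p1 -t 1 -r 12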

 

My NIC supports the following timestamping capabilities:

Capabilities:

hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)

software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)

hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)

software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)

software-system-clock (SOF_TIMESTAMPING_SOFTWARE)

hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)

PTP Hardware Clock: 2

Hardware Transmit Timestamp Modes:

off                   (HWTSTAMP_TX_OFF)

on                    (HWTSTAMP_TX_ON)

Hardware Receive Filter Modes:

none                  (HWTSTAMP_FILTER_NONE)

ptpv1-l4-sync         (HWTSTAMP_FILTER_PTP_V1_L4_SYNC)

ptpv1-l4-delay-req    (HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ)

ptpv2-l4-event        (HWTSTAMP_FILTER_PTP_V2_L4_EVENT)

ptpv2-l4-sync         (HWTSTAMP_FILTER_PTP_V2_L4_SYNC)

ptpv2-l4-delay-req    (HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ)

ptpv2-l2-event        (HWTSTAMP_FILTER_PTP_V2_L2_EVENT)

ptpv2-l2-sync         (HWTSTAMP_FILTER_PTP_V2_L2_SYNC)

ptpv2-l2-delay-req    (HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ)

ptpv2-event           (HWTSTAMP_FILTER_PTP_V2_EVENT)

ptpv2-sync            (HWTSTAMP_FILTER_PTP_V2_SYNC)

ptpv2-delay-req       (HWTSTAMP_FILTER_PTP_V2_DELAY_REQ)

 

I would like to know whether there is any other Intel NIC that supports SOF_TIMESTAMPING_TX_ACK; I need this timestamp to calculate precise RTT measurements between source and destination.

Intel® Ethernet Converged Network Adapter X710-DA4


Hello,

 

We have just purchased 8 Intel® Ethernet Converged Network Adapter X710-DA4 cards and installed them in our Dell R610 VMware 5.5 hosts. At first the cards seemed to work and perform well; however, we now have an issue where the links on all of our hosts flap. The NIC resets one port at random time intervals and then comes right back into service. We see the same behavior on all of our ESXi hosts. These cards are connected to a Cisco Nexus 5700 switch using Twinax copper cables. We talked to VMware, and they think it's a firmware issue or something on the card. Both Dell and Cisco have looked at the server and network hardware and do not see any issues. Does anyone have any suggestions?
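In case it is useful, this is how we are checking driver/firmware and catching the flaps on each host (standard ESXi 5.5 tooling; the vmnic number varies per host):

~ # esxcli network nic list
~ # esxcli network nic get -n vmnic4                 # driver and firmware version for the flapping port
~ # grep -i vmnic4 /var/log/vmkernel.log | tail      # link up/down events around each reset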

 

Thanks

 

 

Intel x710

PCI passthrough of Ethernet Controller XL710 for 40GbE QSFP+


Has anyone successfully used PCI passthrough for the Intel 40G interface?

I am trying this on Openstack/KVM. The device is passed through but data transfer fails.

In the same setup, PCI passthrough of Intel 10G Ethernet interfaces (82599ES 10-Gigabit SFI/SFP+ Network Connection) works just fine.
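For anyone comparing setups, these are the checks I run on the compute node (the PCI address is from my host and will differ):

$ lspci -nnk -s 03:00.0              # confirm which driver is bound to the XL710 before passthrough
$ dmesg | grep -i -e dmar -e iommu   # look for DMA remapping faults while the guest pushes traffic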


My internet keeps cutting out; I've tried just about every fix going for the last week (new cable, green Ethernet disabled, Windows and drivers reinstalled). Any thoughts would be appreciated. Thanks.


Hi, I have been having problems with my internet cutting out. I've tried just about every fix in the book for the last week with no success. Attempted fixes: renewed the cable, reinstalled drivers and Windows, disabled green Ethernet, among a few others. Any thoughts greatly appreciated, thanks.

i350-T4 Windows Server 2012 R2 VMQ blue screens during live migration


i350-T4 NIC

Windows Server 2012 R2

All 4 i350 ports configured as a Windows LBFO Team (switch independent / dynamic load balancing)

Converged networking (Hyper-V vSwitch bound to the LBFO team, with vNICs configured on the vSwitch for host OS operations: Management, Cluster/CSV, and Live Migration)

VLAN tagging in use on VMs and vNICs, except the vNIC used for management, which is untagged ('native')

VMQ enabled on all i350 ports

SR-IOV disabled on all i350 ports

Server 2012 R2 HyperV cluster

Fully patched with update rollups and hotfixes currently available

Drivers 19.3 (latest from intel website)

 

In the above configuration the destination server blue screens during live migration. I can sometimes get 1 live migration to work, but a second attempt to live migrate a different VM to the same destination host will cause the host to blue screen.

 

I can reproduce this issue very easily on any host in the cluster. They all show the same behaviour.

 

If I disable VMQ, then the issue stops.
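(For reference, the toggle we use while testing — standard Server 2012 R2 cmdlets; the port name is an example:)

PS> Get-NetAdapterVmq                    # confirm which team members have VMQ enabled
PS> Disable-NetAdapterVmq -Name "NIC1"   # workaround only; this kills VMQ offload on that port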

 

Also, we don't see this issue with the same hardware and the same configuration on Server 2012 (non-R2), though I note that the NIC driver is different there (e1r63x64.sys on 2012, as opposed to e1r64x64.sys on 2012 R2).

 

Crash dump analysis always shows the faulting driver as e1r64x64.sys:

 

BugCheck 1E, {ffffffffc0000005, fffff802be6a2550, ffffd000575b3b58, ffffd000575b3360}

*** ERROR: Module load completed but symbols could not be loaded for e1r64x64.sys
Probably caused by : e1r64x64.sys ( e1r64x64+280e7 )

Followup: MachineOwner
---------

18: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

KMODE_EXCEPTION_NOT_HANDLED (1e)
This is a very common bugcheck.  Usually the exception address pinpoints
the driver/function that caused the problem.  Always note this address
as well as the link date of the driver/image that contains this address.
Arguments:
Arg1: ffffffffc0000005, The exception code that was not handled
Arg2: fffff802be6a2550, The address that the exception occurred at
Arg3: ffffd000575b3b58, Parameter 0 of the exception
Arg4: ffffd000575b3360, Parameter 1 of the exception

Debugging Details:
------------------


WRITE_ADDRESS: unable to get nt!MmNonPagedPoolStart
unable to get nt!MmSizeOfNonPagedPoolInBytes
ffffd000575b3360

EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

FAULTING_IP:
nt!ExQueryDepthSList+0
fffff802`be6a2550 8b01            mov     eax,dword ptr [rcx]

EXCEPTION_PARAMETER1:  ffffd000575b3b58

EXCEPTION_PARAMETER2:  ffffd000575b3360

BUGCHECK_STR:  0x1E_c0000005

DEFAULT_BUCKET_ID:  WIN8_DRIVER_FAULT

PROCESS_NAME:  System

CURRENT_IRQL:  0

ANALYSIS_VERSION: 6.3.9600.17237 (debuggers(dbg).140716-0327) amd64fre

EXCEPTION_RECORD:  0000000000000001 -- (.exr 0x1)
Cannot read Exception record @ 0000000000000001

TRAP_FRAME:  ffffe800b6200000 -- (.trap 0xffffe800b6200000)
Unable to read trap frame at ffffe800`b6200000

LAST_CONTROL_TRANSFER:  from fffff802be7efefb to fffff802be768ca0

STACK_TEXT: 
ffffd000`575b2b38 fffff802`be7efefb : 00000000`0000001e ffffffff`c0000005 fffff802`be6a2550 ffffd000`575b3b58 : nt!KeBugCheckEx
ffffd000`575b2b40 fffff802`be779846 : 00000000`00000000 fffff800`35d0c991 ffffe800`b1172d02 ffffd000`575b2e29 : nt!KiFatalFilter+0x1f
ffffd000`575b2b80 fffff802`be757d56 : 00000000`00000000 fffff802`be6e19a6 ffffe000`516d3f90 00000000`00000000 : nt! ?? ::FNODOBFM::`string'+0x696
ffffd000`575b2bc0 fffff802`be7701ed : 00000000`00000000 ffffd000`575b2d60 ffffd000`575b3b58 ffffd000`575b2d60 : nt!_C_specific_handler+0x86
ffffd000`575b2c30 fffff802`be6fd3a5 : 00000000`00000001 fffff802`be615000 ffffd000`575b3b00 fffff800`00000000 : nt!RtlpExecuteHandlerForException+0xd
ffffd000`575b2c60 fffff802`be6fc25f : ffffd000`575b3b58 ffffd000`575b3860 ffffd000`575b3b58 ffffe800`b12ee480 : nt!RtlDispatchException+0x1a5
ffffd000`575b3330 fffff802`be7748c2 : 00000000`00000001 fffffa80`1b6de000 ffffe800`b6200000 00000000`00000000 : nt!KiDispatchException+0x61f
ffffd000`575b3a20 fffff802`be772dfe : 00000000`00000011 00000000`00000002 00000000`00000001 fffff802`be8a929a : nt!KiExceptionDispatch+0xc2
ffffd000`575b3c00 fffff802`be6a2550 : fffff800`35d04875 ffffe800`b0f3c870 ffffd000`575b3e00 ffffe000`517cd000 : nt!KiGeneralProtectionFault+0xfe
ffffd000`575b3d98 fffff800`35d04875 : ffffe800`b0f3c870 ffffd000`575b3e00 ffffe000`517cd000 00000000`00000000 : nt!ExQueryDepthSList
ffffd000`575b3da0 fffff800`372520e7 : ffffe000`517ce540 ffffe000`517cd000 ffffe800`b1496c60 00000000`00000000 : NDIS!NdisFreeNetBufferList+0xb5
ffffd000`575b3e20 fffff800`372528a9 : ffffe000`517ce540 ffffe000`517cd000 00000000`00000001 00000000`00000000 : e1r64x64+0x280e7
ffffd000`575b3e50 fffff800`37252c00 : ffffe000`517ce540 00000000`00000001 00000000`00000000 ffffe000`517cd000 : e1r64x64+0x288a9
ffffd000`575b3e90 fffff800`37264a9d : ffffe000`517cd000 ffffe000`00000001 ffffe000`00000001 ffff0001`00000001 : e1r64x64+0x28c00
ffffd000`575b3ec0 fffff800`37261c7b : 00000000`00000000 ffffd000`575469a0 ffffe000`517cd000 00000000`00000000 : e1r64x64+0x3aa9d
ffffd000`575b3f00 fffff800`3725a909 : 00000000`00000002 00000000`00000000 ffffe000`517cd000 ffffd000`575469a0 : e1r64x64+0x37c7b
ffffd000`575b3f50 fffff800`3725b02b : ffffe800`b528cde0 fffff800`35d04671 ffffd000`575b40f0 ffffe000`51105ad0 : e1r64x64+0x30909
ffffd000`575b3fc0 fffff800`35d8f0fa : ffffe800`b5b87868 ffffe800`b5b87858 ffffe800`b5b87854 ffffe800`b0d501a0 : e1r64x64+0x3102b
ffffd000`575b4030 fffff800`35d033a3 : ffffe800`b0d501a0 ffffd000`575b40e9 ffffe800`b5b87820 00000000`00000011 : NDIS!ndisMInvokeOidRequest+0x4e
ffffd000`575b4070 fffff800`35d04324 : 00000000`00000000 ffffe800`b0d501a0 ffffe800`b5b87868 00000000`00000000 : NDIS!ndisMDoOidRequest+0x39b
ffffd000`575b4150 fffff800`35d0475e : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : NDIS!ndisQueueOidRequest+0x4c4
ffffd000`575b42f0 fffff800`3679719e : ffffe800`b147b8c0 00000000`00010224 ffffe800`b147b8c0 ffffe000`52bf4010 : NDIS!NdisFOidRequest+0xc2
ffffd000`575b43b0 fffff800`35d038de : ffffe800`b5b87820 ffffe000`51105ad0 00000000`00000000 ffffe000`52bea010 : wfplwfs!LwfLowerOidRequest+0x6e
ffffd000`575b43e0 fffff802`be6e19a6 : ffffd000`575b46d0 ffffd000`575af000 00000000`00000000 00000000`00000000 : NDIS!ndisFDoOidRequestInternal+0x2ee
ffffd000`575b44e0 fffff800`35d04131 : fffff800`35d035f0 ffffe000`52bea010 ffffe800`b1a0b400 00000000`00000000 : nt!KeExpandKernelStackAndCalloutInternal+0xe6
ffffd000`575b45d0 fffff800`35d03d27 : 00000000`00000102 ffffd000`53203200 00000000`00000000 ffffd000`575467d0 : NDIS!ndisQueueOidRequest+0x2d1
ffffd000`575b4770 fffff800`372ea204 : 00000000`00000120 ffffe000`516d4000 00000000`00000120 ffffe000`52bf5000 : NDIS!ndisMOidRequest+0x193
ffffd000`575b4880 fffff800`372e858d : ffffe000`5200ff00 ffffd000`00000001 ffffe000`52bf5020 ffffe800`b5b87820 : NdisImPlatform!implatDoOidRequestOnAdapter+0x22c
ffffd000`575b4900 fffff800`372ea32c : ffffe800`b1ae3880 fffff802`be6546c9 ffffe000`52bf5000 00000000`00000000 : NdisImPlatform!implatOidRequestInternal+0x1fd
ffffd000`575b4ac0 fffff802`be650f4a : ffffe800`b1b54ca0 ffffe000`52c10050 ffffe000`52c10050 fffff800`6977444e : NdisImPlatform!implatOidRequestWorkItem+0x24
ffffd000`575b4af0 fffff802`be651a2b : fffff800`362ed330 fffff802`be650ed4 ffffd000`575b4bd0 ffffe800`b1b54ca0 : nt!IopProcessWorkItem+0x76
ffffd000`575b4b50 fffff802`be6ee514 : 00000000`00000000 ffffe800`b1ae3880 ffffe800`b1ae3880 ffffe000`50832900 : nt!ExpWorkerThread+0x293
ffffd000`575b4c00 fffff802`be76f2c6 : ffffd000`55503180 ffffe800`b1ae3880 ffffd000`5550f7c0 00000014`00000006 : nt!PspSystemThreadStartup+0x58
ffffd000`575b4c60 00000000`00000000 : ffffd000`575b5000 ffffd000`575af000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16


STACK_COMMAND:  kb

FOLLOWUP_IP:
e1r64x64+280e7
fffff800`372520e7 813ddfd0030001000500 cmp dword ptr [e1r64x64+0x651d0 (fffff800`3728f1d0)],50001h

SYMBOL_STACK_INDEX:  b

SYMBOL_NAME:  e1r64x64+280e7

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: e1r64x64

IMAGE_NAME:  e1r64x64.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  531f9173

FAILURE_BUCKET_ID:  0x1E_c0000005_e1r64x64+280e7

BUCKET_ID:  0x1E_c0000005_e1r64x64+280e7

ANALYSIS_SOURCE:  KM

FAILURE_ID_HASH_STRING:  km:0x1e_c0000005_e1r64x64+280e7

FAILURE_ID_HASH:  {6d380028-1764-7d25-d8c5-05559a475808}

 

So it seems that this Intel driver has issues with VMQ.

VMQ is quite notorious for buggy NIC vendor drivers on Server 2012 R2.

 

Disabling VMQ is not an option for us in production. We need it to work.

Can anyone please confirm this issue exists on the latest 19.3 driver in Server 2012 R2?

Any idea when it will get fixed?

 

I'm shocked that such an awful bug would exist 12 months after the launch of Server 2012 R2, on the latest Intel drivers, for a technology that MS and Intel co-developed.

I would expect this kind of thing from Broadcom; I wouldn't expect it from Intel. That's why we buy Intel.

Perhaps we made a mistake there...

 

Help and comments appreciated

Intel(R) 82562V-2 10/100 Network Connection: my driver is from 2007, please help


I'm a gamer, and this adapter seems to be giving me some issues. Please help me find the right update.

Intel(R) 82579V Gigabit Network device issues


Dear all,

 

I have recently bought a new Sandy Bridge Core i5 machine and have been trying to install Windows SBS 2008, but during the process it asks for a driver for the Ethernet adapter. I cannot find one anywhere, online or on the driver CD. Can anyone help me locate an Intel(R) 82579V Gigabit Network driver for Windows SBS 2008, please?

 

Thanks a lot

Larry

opcode:IpLeaseRenewalFailed 0x79 issue.


We have a Windows machine that requests a DHCP lease very often. Can someone please suggest what the issue could be and how it can be resolved? The Intel card in question is, for instance, an Intel(R) Ethernet Connection I218M. Any help would be appreciated, as there are a couple of other machines on the network causing the same issue.
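If it helps others diagnose, this is what we are watching on the affected machine (standard Windows tooling; "Microsoft-Windows-Dhcp-Client/Admin" is the built-in DHCP client event channel):

PS> ipconfig /all   # check 'Lease Obtained' / 'Lease Expires' for abnormally short leases
PS> Get-WinEvent -LogName "Microsoft-Windows-Dhcp-Client/Admin" -MaxEvents 20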

 

Thanks


