Channel: Intel Communities : Discussion List - Wired Ethernet

Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds


Hi, I would like to report that the new Intel NIC driver version 19.3, which was just released, causes huge 6000+ DPC latency spikes every few seconds.

 

My specs:

 

Intel(R) 82579LM Gigabit Network Connection

 

Windows 7 SP1 32-bit + latest Windows updates

 

I downgraded to the previous Intel NIC driver version 19.1 and the problem is gone.


I211 Gigabit Network adapter shows as Removable Device


I'm using a Gigabyte AX370A motherboard with an integrated Intel GbE LAN with Windows 10 64 bit.

 

The adapter works, but it appears in the system tray as a removable device. It's also listed under 'Unspecified' in the Devices and Printers page. I installed the drivers for it from Gigabyte's site first, then tried the latest release directly from Intel - no change.

X710 L2 filter packet steering


Hello,

I am trying to use the X710 MAC/VLAN filtering option to steer packets to receive queues based on a strict MAC address match.

I need to filter and steer around 100 MAC addresses to 8 receive queues. Because of development environment constraints I cannot use SR-IOV, IOV, VEB or flex partitioning.

 

My configuration is as follows:

Default simple switch configuration

One PF and one VSI with 8 queues

RSS, Ethertype and Flow Director filters are disabled, and the MAC/VLAN filter is enabled.

I am setting the toQueue flag to steer packets to the appropriate queue.
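
To illustrate the intent (I am not using DPDK here; the port number, MAC address and queue index below are made-up examples, and I have not verified that the i40e PMD would accept this exact pattern on the X710), this is how DPDK's testpmd would express the same exact-DMAC-to-queue steering:

testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:01 / end actions queue index 3 / end
testpmd> flow list 0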

 

However, I don't see packets getting steered to the desired queues; they always land on the same queue.

Do I need to configure the switch with the "Set Switch" command to get filtering to work as described above?

How do I initialize the VSI to support the above flow?

Are there any other configuration settings that need to be done to get the packets steered as described above?

 

Thanks,

Sony

Empty TCP payload caused by tx-scatter-gather in ixgbevf


We noticed that the TCP payload of some packets was empty (all zero) when transmitted by a virtual machine using SR-IOV and the ixgbevf driver.

This happens rarely, but when it happens, all retransmitted packets suffer from the same problem.

When we disable tx-scatter-gather on the sender, the problem never occurs.

 

ethtool -K eth0 sg off
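
For completeness, the resulting offload state can be confirmed with ethtool's read-only query (same interface name as above):

ethtool -k eth0 | grep -i scatter-gather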

 

In the following captures, the transmitter is on the right and the receiver on the left. The receiving side was actually captured via a port mirror on the switch and an intermediate host used to make the capture.

The TCP checksum at the transmitter is incorrect, but that's because of TX checksum offloading. The TCP checksum at the left (receiver) is what you would expect if the TCP payload had been correct.

The TCP payload at the left only contains zeros.

[attachment: corrupt tcp packet.png]

 

In another capture, we noticed that the corrupt TCP payload appeared to contain references to kernel objects. This made us believe the issue was caused by pointing to the wrong location in memory.

[attachment: packet with kernel object.png]

 

System information:

Server HP ProLiant BL460c Gen8 with 32 GB RAM

Dual Intel(R) Xeon(R) CPU E5-2658 0 @ 2.10GHz

NIC: HP 560FLB based on Intel 82599

OS: CentOS release 6.8

Drivers: ixgbe 4.2.1-k and ixgbevf 2.12.1-k

 

The same issue was noticed on several servers including a ProLiant DL360p Gen8 with HP 2-port 561FLR-T based on Intel X540.

I211/I217-V Windows 10 LACP teaming fails


Hello,

 

After the update to Windows 10 (x64, Build 10240), the creation of a teaming group (static or IEEE 802.3ad) with the I211 + I217-V NICs fails.

 

Drivers have been upgraded to the latest version available, and multiple reinstallations with reboots didn't help either. Whenever the group creation wizard is used and a group name (several tried), the adapters, and LACP have been selected, a Windows pop-up tells me that group creation has failed.

However the Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some kind of configuration seems to get done.

With Windows 7 SP1 x64 the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.

 

Is anyone experiencing similar problems and/or is this a known bug? Feedback on this issue is greatly appreciated.

 

Thanks in advance!

 

Kind regards,

Famaku

Issue with setting smp_affinity on ixgbe cards


Hi,

I am using a Dell PowerEdge R730 with dual Xeons, each with 22 cores, and 6 ixgbe-compatible cards, on which I am running Linux with ixgbe driver version 4.4.0-k, using kernel versions 4.7.10 and 4.9.6.
I am loading the ixgbe modules at boot time, bringing up the interfaces and setting smp_affinity for the cards using the set_irq_affinity script, so all the possible RxTx IRQs are distributed across all the available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails and I have to manually re-run the script one or more times for the desired settings to be applied. There were also several occasions when the settings were not applied at all, and it took several reboots for the script to start working again.
The problem appears not only at random times but also on random NIC controllers, so I am excluding failed hardware, especially since I have also swapped NICs.

I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the affinity-setting function irq_do_set_affinity returns EBUSY, but sometimes it returns ENOSPC.

Further investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script; but if the error was ENOSPC, it took several reboots for the problem to disappear.
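
As a stop-gap for the EBUSY case, the manual re-run can be reduced to a small retry loop per vector; the IRQ number and CPU mask below are example values, and this does not help with the ENOSPC case:

IRQ=123            # example RxTx vector number, taken from /proc/interrupts
MASK=00400000      # example CPU mask for that vector
# the redirection fails (echo returns non-zero) when the kernel rejects the write,
# e.g. with EBUSY, so simply retry until it sticks
until echo $MASK > /proc/irq/$IRQ/smp_affinity; do
    sleep 1
done
cat /proc/irq/$IRQ/smp_affinity    # read back to verify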

To provide some more details on the system, I am attaching two text files with the output of modinfo for ixgbe and of lspci on the machine.

X710 SR-IOV problems


Hi all,

 

I have the following baseline:

 

Dell R630 (2x14 core Xeon, 128GB RAM, 800GB SSD)

x710 4-port NIC, in 10Gbit mode

SUSE12SP1

Latest NIC firmware but default PF/VF drivers (came with the OS, v1.3.4)

VF driver blacklisted on hypervisor

Setup according to official Intel and Suse documentation, KVM hypervisor

 

With the test setup, a single VM with a single VF and untagged traffic, I could achieve basically line-rate numbers: with MTU 1500, about 770 Kpps and 9.4 Gbps of bandwidth for both UDP and TCP traffic, with no packet drops. There is plenty of processing power, the setup is nice and tidy, and everything works as it should.

 

The production setup is a bit different: the VM uses 3 VFs, one for each PF (the 4th PF is not used). All VFs except the first carry untagged traffic. The first VF passes two types of traffic: one untagged (VLAN 119) and one tagged (VLAN 1108); tagging is done inside the VM. The setup worked fine for some time, confirming the test numbers. However, after a while the following errors started to appear in the hypervisor logs:

 

Mar  11 14:32:52 test_machine1 kernel: [10423.889924] i40e 0000:01:00.1: TX driver issue detected on VF 0

Mar  11 14:32:52 test_machine1 kernel: [10423.889925] i40e 0000:01:00.1: Too many MDD events on VF 0, disabled

 

And performance numbers became erratic: sometimes it worked perfectly, sometimes it did not. But most importantly, packet drops occurred.

 

So, I've reinstalled everything (hypervisor and VMs), configured exactly as before using automated tools, but upgraded the PF and VF drivers to the latest ones (v2.0.19/v2.0.16). The errors in the logs disappeared, but the issue persists. Now I see this in the logs:

 

2017-03-12T11:33:43.356014+01:00 test_machine1 kernel: [  420.439112] i40e 0000:01:00.1: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:43.376009+01:00 test_machine1 kernel: [  420.459168] i40e 0000:01:00.0: Unable to add VLAN filter 0 for VF 0, error -22

2017-03-12T11:33:44.352009+01:00 test_machine1 kernel: [  421.435124] i40e 0000:01:00.2: Unable to add VLAN filter 0 for VF 0, error -22

 

I've increased the VM CPU count, the VF ring sizes, the VM Linux software buffers, and the VM netdev.budget kernel parameter (the amount of CPU time assigned to NIC processing), and turned off VF spoof checking in the hypervisor, etc., but the situation remains the same: sometimes it works perfectly, other times it does not.
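
For reference, spoof checking was turned off on the host with iproute2 along the lines below (the PF interface name is a placeholder); whether the VF also needs to be marked trusted for guest-side VLAN tagging on i40e is something I have not been able to confirm, so that line is only a candidate:

ip link show dev enp1s0f1                    # lists each VF with its MAC, VLAN, spoof checking and trust state
ip link set dev enp1s0f1 vf 0 spoofchk off   # what was already tried
ip link set dev enp1s0f1 vf 0 trust on       # untested candidate: i40e limits some VF filter requests to trusted VFs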

 

Can you please provide some insight? Since the rx_dropped counter is increasing in the VM, I suspect a driver/VF issue.

Is there a way to handle this problem, without switching to untagged traffic?

 

 

 

Thank you in advance,

Ante

igb SR-IOV vf driver on FreeBSD strips VLAN tags


When I run FreeBSD as a KVM guest and assign it a VF from my 82576 card, the guest igb VF driver seems to strip VLAN tags on incoming packets. If no packets are VLAN tagged, they pass fine. Outgoing packets keep their VLAN tags, but incoming packets have their VLAN tags stripped. More information: https://forum.pfsense.org/index.php?topic=126742

 

The ixgbe driver had this bug as well, and it was apparently fixed (https://lists.freebsd.org/pipermail/freebsd-bugs/2016-May/067788.html), but the fix was never applied to the igb driver.

 

It seems the only workaround would be to use a separate VF for each VLAN I want the guest to see and have the host handle VLAN management, but that limits the number of VLANs I can have and how many guests can use the NIC.

 

Is there any chance of having the ixgbe patch ported to igb?
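
For what it's worth, the hardware tagging capability can be toggled per interface in the guest with ifconfig (igb0 is a placeholder for the VF interface name); I have not confirmed whether the igb VF driver honours it, so treat this as a way to check the behaviour rather than a fix:

ifconfig igb0 -vlanhwtag        # ask the driver not to strip VLAN tags in hardware
ifconfig igb0 | grep options    # check whether VLAN_HWTAGGING is still listed among the capabilities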


When I try to flash EFI firmware on Intel Pro/1000 (both single and dual port) I get Error: Flash too small for the image


Hello,

I have downloaded both the 20.7.1 and 22.0.1 versions of the firmware update and tried to update the ROM from PXE to the EFI ROM. Under both Linux and a UEFI shell, it fails every time. This occurs on both a dual-port Pro/1000 and a single-port one. The Linux output is:

 

[root@testsystem BootUtil]# Linux_x64/bootutil64e -nic=2 -up=efi

 

 

Intel(R) Ethernet Flash Firmware Utility

BootUtil version 1.5.94.2

Copyright (C) 2003-2015 Intel Corporation

ERROR: Flash too small for the image

 

 

Port Network Address Location Series  WOL Flash Firmware                Version

==== =============== ======== ======= === ============================= =======

  1   001B21009F04     1:00.0 Gigabit YES PXE                           1.3.35

  2   0015172EE90F    49:00.0 Gigabit YES PXE                           1.3.35

 

 

 

How do I flash an EFI ROM on these cards? I can flash the PXE ROM on them, but not the EFI ROM.

 

Thanks in advance,

Marc

X550 Driver for Windows 10 (10.0.14393) WOL (Wake on LAN) not working


When I install driver version 21.1 ("Intel® Network Adapter Driver for Windows® 10" download), I get the message:

 

"Important Note:  Creating Intel® ANS teams and VLANs on Microsoft Windows® 10 is currently not supported.  As a result, when created, teams and VLANs do not pass traffic.  We expect that ANS will be supported on Microsoft Windows 10 client in a future release"

 

I don't need teaming etc., but once the driver is installed, I don't see WOL in the Advanced options.

 

Where is the WOL feature?

 

Thanks,

ULP enable/disable utility: where to get it?


Same issue as seen elsewhere with the I218-V on a dozen Lenovo E550s. Rather than going to two remote locations and popping the CMOS on the lot, I'd like to give the ULP enable/disable utility a try.

I217-V problems


Intel I217-V NIC on a Gigabyte Z87MX-D3H motherboard running Windows 10. I have been fighting this thing for months now.

 

Intermittently losing connection / disabling ethernet

 

Sometimes it comes back on its own; sometimes it requires disabling and re-enabling the device from Device Manager. Most of the time it fails to enable the device and comes back with Code 10 in Device Manager.

 

Attempted fixes:

*Disabling and re-enabling device from BIOS

*Turning WOL off in BIOS

*Downloading latest Intel drivers

*Downloading drivers from Gigabyte

*Using drivers that come with windows 10

*Various tweaks on advanced tab for the device in device manager (both when working and not)

     -EEE (Energy Efficient Ethernet) off

     -Various offloading options off

     -Transmit and receive buffers to 2048

     -WOL off

     -Wait for link in all 3 positions

 

Additionally, I have noticed that sometimes detailed options for the device show up in the BIOS and sometimes they do not. Hence I believe the problem is not only OS/driver related; the device may require re-flashing or otherwise resetting.

 

The only thing I can think of that I have not been able to try is disabling the Ultra Low Power mode I have seen others mention. That requires a special utility, as far as I remember.

 

As of writing this, the problem is present; I have been trying for hours to bring the connection back, with no results. I am not able to flash the device with BootUtil because it hasn't come back to life yet, although I will try that if and when it does.

 

If anyone has any suggestions, PLEASE chime in; this is driving me crazy.

XL710 Malicious Driver Detection event occurred


Hello, I've got an abnormal event on an XL710, as below.

 


kernel: i40e 0000:03:00.0: Malicious Driver Detection event 0x02 on TX queue 12 PF number 0x00 VF number 0x00

kernel: i40e 0000:03:00.0: TX driver issue detected, PF reset issued

 

Because of the above messages, my XL710 NIC was reset, and then a link down/up occurred.

 

I want to know what "Malicious Driver Detection" means on the XL710.

And what is the condition that triggers that situation?

 

Thank you

Cannot disable 64-bit BAR on Intel X710 adapter


Hi,

 

I want (actually, I need) to disable the 64-bit BAR on a two-port 10-Gbit Intel X710 adapter, because I'm booting the server via iSCSI and it fails to boot: while autoconfiguring, the IBA receives 64-bit addresses, which it is unable to live with. When I use BootUtil (latest version available, 1.6.36.0, launched from FreeDOS since the Linux version requires some modules I'm unable to get), it says:

 

[...]

WARNING: BootUtil detected an older
version of the device (location 129:0.0) NVM image that expected.
Please update the NVM image.

WARNING: BootUtil detected an older
version of the device (location 129:0.0) NVM image that expected.
Please update the NVM image.

Disabling 64-bit BAR support on port 3...
Error: Unsupported feature.

 

What am I doing wrong? I have 4 ports listed by BootUtil: two are from the on-board adapter and two are from the discrete X710 network card, and ports 3 and 4 (device IDs 129:00.0 and 129:00.1) are exactly those. What is even funnier, when I issue 'BootUtil.exe -ALL -64d', it is able to disable the 64-bit BARs on the on-board ports. At the same time, the "Intel Boot Agent Application Notes for BIOS Engineers" clearly states:

 

c. Configure the Ethernet controller to only request 32-bit memory BAR addresses. All Intel PCIe Ethernet controllers have EEPROM settings that limit the device to advertising only 32-bit memory BAR support.

 

About the adapter: Linux reports it as (lspci -m)

 

81:00.0 "Ethernet Controller" "Intel Corporation" "Ethernet Controller X710 for 10GbE SFP+" -r01 "Intel Corporation" "Ethernet Converged Network Adapter X710-2"
81:00.1 "Ethernet Controller" "Intel Corporation" "Ethernet Controller X710 for 10GbE SFP+" -r01 "Intel Corporation" "Ethernet Converged Network Adapter X710"

 

and this means it's an Intel-manufactured one, because the on-board ones say "Super Micro Computer Inc".

 

Now about this NVM message. I downloaded the latest NVM image and utility, version 5.05, and updated my adapters (the nvmupdate64e utility said there were no updates for the on-board ones, but updated the discrete ones), but BootUtil still says it doesn't like the version.

 

So I would really appreciate it if someone could shed some light on this.

 

Thanks.

Voltage levels are not as per the specs of I210


Hi,

 

We are using Intel's Atom processor with Springville I210 controller as Ethernet PHY.

The PCIe port of the Atom processor is connected to the Springville, which converts PCIe into MDI signals; these are then connected to an Ethernet PHY switch.

 

We are using this link at a speed of 100Mbps.

 

The concern is that the voltage level of the MDI signals coming out of the Springville is around 5 V peak-to-peak, which is not in line with the standard specs for MDI signals.

 

Can anyone please suggest whether any settings have to be made to correct this?

 

Regards,

Pradyumna


A bus fatal error was detected on a component at bus 0 device 0 function 0 With X710-DA4 FH Card. No error if this is removed.


While transferring data to/from other servers, the server suddenly crashes and I receive "A bus fatal error was detected on a component at bus 0 device 0 function 0" in the hardware logs, with an X710-DA4 FH card installed. There is no error if the card is removed. Firmware and drivers are current. I tried swapping with another card of the same model, firmware, and drivers, and also tried swapping with an X520, with the same result. Dell support tried everything with regard to server hardware, firmware, and drivers; we also replaced the motherboard. The only thing left in common is that they are Intel cards, and removing the cards is the only thing that works. This is happening on 2 of my Dell R610 servers at the same time; both have the same cards and both are fine without them. Is this card simply not compatible? Has anyone else experienced this issue?

Win10 + I219-V + Intel ME + VLANs = BSOD


Yesterday I got an update for the Intel ME component for my MSI Z270 motherboard, and after I restarted the computer, it blue screened on me. I ended up having to reset the Windows install to get access to the computer again, even though that meant downloading and reinstalling all my apps (luckily I keep all my files on a different drive).

 

I started my troubleshooting session. I downloaded all the latest drivers for my setup and installed everything except the Intel ME drivers, since that was the last thing I did before it all broke. I ran Windows Update, reconfigured my VLANs, and restarted a few times at the drivers' request, with no issues. All was fine. I installed the ME drivers, restarted, BSOD. So yeah, I was on the right path.

 

I reset Windows one more time and installed the Intel ME drivers first. I restarted the computer after each driver was installed, to find out which one the ME driver was getting upset with. With all drivers installed, all was still fine; I felt confused. I ran Windows Update, still fine. I went to configure my very first VLAN, BSOD. So I had a big hint.

 

I downloaded WinDbg and ran it against the memory.dmp files generated during the last BSODs, and every time the error was memory corruption in the NDIS stack. Not much beyond that, as by default Windows only dumps about 200 MB of memory, and I had not switched to collecting a full dump and forcing another crash (a full dump is painful when you have 64 GB of RAM).

 

Intel ME driver 11.6.0.1036 caused this trouble, whereas with driver 11.6.0.1026 the VLANs were working just fine. The only reason I posted this report under Wired networks is that this is the component that is failing, not the ME, but I might either cross-post or have a mod move this thread there.

 

I am trying to get the previous ME driver 11.6.0.1026 from MSI to roll back and see if the VLANs still work; I might have them take a look at this topic too. One end user reporting issues might not attract much attention, but a VAR reporting it could take a very different route.

 

So if you happen to use a few advanced features and things broke on you, I hope this helps.

 

Other related drivers and product version:

Intel PROSet Version 22.0.18.0, with Windows driver version 12.15.23.9

Windows 10 Pro x64, Version 1607, Build 14393.

 

DUMP_CLASS: 1

DUMP_QUALIFIER: 401

BUILD_VERSION_STRING:  14393.953.amd64fre.rs1_release_inmarket.170303-1614

SYSTEM_MANUFACTURER:  MSI

SYSTEM_PRODUCT_NAME:  MS-7A69

SYSTEM_SKU:  Default string

SYSTEM_VERSION:  1.0

BIOS_VENDOR:  American Megatrends Inc.

BIOS_VERSION:  1.10

BIOS_DATE:  02/03/2017

BASEBOARD_MANUFACTURER:  MSI

BASEBOARD_PRODUCT:  Z270M MORTAR (MS-7A69)

BASEBOARD_VERSION:  1.0

DUMP_TYPE:  1

BUGCHECK_P1: 7

BUGCHECK_P2: d699

BUGCHECK_P3: b96acc4c

BUGCHECK_P4: ffffda87ad768208

POOL_ADDRESS:  ffffda87ad768208

FREED_POOL_TAG:  Filt

BUGCHECK_STR:  0xc2_7_Filt

CPU_COUNT: 4

CPU_MHZ: bb8

CPU_VENDOR:  GenuineIntel

CPU_FAMILY: 6

CPU_MODEL: 9e

CPU_STEPPING: 9

CPU_MICROCODE: 6,9e,9,0 (F,M,S,R)  SIG: 48'00000000 (cache) 48'00000000 (init)

DEFAULT_BUCKET_ID:  CODE_CORRUPTION

PROCESS_NAME:  System

CURRENT_IRQL:  2

ANALYSIS_SESSION_HOST:  HQUEST

ANALYSIS_SESSION_TIME:  03-19-2017 17:22:22.0774

ANALYSIS_VERSION: 10.0.14321.1024 amd64fre

LAST_CONTROL_TRANSFER:  from fffff803f9060cf3 to fffff803f8f657c0

STACK_TEXT: 
ffff8e80`5c558da8 fffff803`f9060cf3 : 00000000`000000c2 00000000`00000007 00000000`0000d699 00000000`b96acc4c : nt!KeBugCheckEx
ffff8e80`5c558db0 fffff80b`f261fbb5 : ffffda87`ad768208 00000000`0000009c ffff8e80`5c558f40 00000000`0000006a : nt!ExFreePool+0x1d3
ffff8e80`5c558e90 fffff80b`f24d8182 : ffff8e80`5c559240 ffff8e80`5c558f49 ffff8e80`5c558fd0 ffffda87`a8421260 : ndis!NdisFreeMemory+0x15
ffff8e80`5c558ec0 fffff80b`f24d2dc2 : ffffda87`99ce1a20 fffff80b`f262766f 00000000`00000000 ffffda87`98fb1c10 : iansw60e+0x8182
ffff8e80`5c558fb0 fffff80b`f262750d : ffffda87`98fb1c10 ffff8e80`5c559130 ffff8e80`5c559240 ffff8e80`5c559410 : iansw60e+0x2dc2
ffff8e80`5c559000 fffff80b`f26270c4 : 00000000`00000002 ffffda87`ad9481a0 ffffda87`ad9481a0 ffffda87`ad9481a0 : ndis!ndisInvokeStatus+0x39
ffff8e80`5c559030 fffff80b`f2626ab8 : ffffda87`ad9481a0 00000000`0000000c ffffda87`ad9481a0 00000000`00000000 : ndis!ndisIndicateStatusInternal+0x464
ffff8e80`5c559210 fffff80b`f26ba175 : ffffda87`ad9481a0 ffffda87`ad9481a0 00000000`00000000 ffffda87`a23fcc00 : ndis!ndisIndicateInitialStateToBinding+0x304
ffff8e80`5c559500 fffff80b`f26b3f29 : ffffb101`5d1c5d70 ffffb101`6816a278 ffffb101`6dd50740 ffffb101`6816a278 : ndis!ndisBindNdis6Protocol+0x541
ffff8e80`5c5597a0 fffff80b`f26b5c2d : ffffda87`ad9481a0 00000000`000013d0 00000000`000013e8 00000000`00000002 : ndis!ndisBindProtocol+0x55
ffff8e80`5c559880 fffff80b`f26b588c : ffff8e80`5c559ad0 ffffda87`ad9495f8 ffffda87`ad949630 ffff8e80`5c559ad0 : ndis!Ndis::BindEngine::Iterate+0x34d
ffff8e80`5c559a80 fffff80b`f26b567a : ffffda87`ad086800 ffffb101`6816a2b8 ffffda87`ad193dc0 00000000`00000000 : ndis!Ndis::BindEngine::UpdateBindings+0x44
ffff8e80`5c559ab0 fffff803`f8e7c6f9 : ffffda87`ad086800 ffffda87`ad949630 ffffda87`00000000 fffff803`00000000 : ndis!Ndis::BindEngine::UpdateBindingsWorkItem+0x3a
ffff8e80`5c559b00 fffff803`f8ecd2d5 : ffff8e80`57017180 00000000`00000080 ffffda87`98477040 ffffda87`ad086800 : nt!ExpWorkerThread+0xe9
ffff8e80`5c559b90 fffff803`f8f6ac86 : ffff8e80`57017180 ffffda87`ad086800 fffff803`f8ecd294 00310100`0d010900 : nt!PspSystemThreadStartup+0x41
ffff8e80`5c559be0 00000000`00000000 : ffff8e80`5c55a000 ffff8e80`5c553000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16

 

Problem with Installation of Intel PRO 1000 PT


Hi,

 

I have a problem with the installation of the Intel PRO/1000 PT dual-port adapter on a Windows 10 PC.

When I open SetupBD.exe with administrative rights, it says that there are no Intel NICs present on the system.

Windows already displays the two adapters in devmgmt.msc and has installed the default drivers.

Although the adapters work with the default Windows drivers, I want to use the ANS options for teaming.

 

I downloaded the latest driver for Win 10 x64.

Is this a known issue, or is there any workaround for it?

 

Many thanks

 

Chris

Intel Pro 1000 PT Dual - Windows 10 x64 - ANS NIC Teaming


I am running Windows 10 x64 with an Intel PRO/1000 PT dual-port NIC. The latest 22.0.1 drivers say they have ANS support, so NIC teaming should finally work on Windows 10 with the Anniversary Update.

 

When I go to configure the adapter, I don't get the NIC teaming options in the adapter properties dialog.

 

Are the drivers not updating on my PC, or are there no driver updates for this card? Or does the PRO/1000 PT Dual not work with teaming even if I had it installed and the teaming options were available?

 

I have another Intel NIC on my computer, an I217-LM, and it has the teaming properties in the adapter properties dialog. If I choose both PRO/1000 PT ports, I get an error when trying to set up the team.

 

Error: "Each team must include at least one Intel server device or Intel integrated connection that supports teaming."

 

Am I missing something here?

 

Thanks,

Brian
