What is VC_CRT_x64 version 1.02.0000?
What installs it, and is it related to Intel Network Connections 21.1.30.0? We are very security conscious and want to remove this software if it serves no purpose. I am wondering whether it was installed by an older version of Intel Network Connections and is no longer needed. Is it just a registry entry that can be removed?
Why do I have to disable "Receive Side Scaling"? NIC keeps disabling itself
I can't replicate the issue, but every few weeks my NIC disables itself and I have to restart my computer to get it working again, since disabling/re-enabling it in Device Manager stalls.
From searching online, "Receive Side Scaling" appears to be the cause. Why would this be the case? Is this down to the motherboard manufacturer or to Intel? I have tried Asus' drivers and also the latest Intel drivers; it makes no difference.
Intel Gigabit CT Desktop + drivers (Win10)?
Howdy,
I've been wondering since I installed Windows 10 how to install the drivers (currently 22.4.0.1; here) for the Intel Gigabit CT Desktop adapter. I've tried several other versions, but they all report: "Cannot install drivers. No Intel(R) Adapters are present in this computer." I believe this would install fine under Windows 7, but not under Windows 10. Anyway, screenshot attached below.
MTBF for the 8391GT network adapter?
What is the MTBF for the 8391GT network adapter?
Drivers can't find LAN adapter I219-V. Help please?
How to configure CIR for a PF?
Hi,
I want to configure CIR (Committed Information Rate) for the PF and EIR (Excess Information Rate) for the VF. How can I do this?
NIC: 82599
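For what it's worth: as far as I know, the in-tree ixgbe driver for the 82599 does not expose a CIR/EIR pair at all; it only offers a flat per-VF transmit rate cap through iproute2. A minimal sketch, assuming the PF interface is named eth2 and VF 0 is the target (both names are assumptions):

```shell
# Cap VF 0's transmit rate at 1000 Mbps; ixgbe exposes no separate
# committed/excess rate pair, only this single cap
ip link set dev eth2 vf 0 max_tx_rate 1000

# Older iproute2 releases use a single "rate" keyword instead:
# ip link set dev eth2 vf 0 rate 1000

# Verify the per-VF settings
ip link show dev eth2
```

Anything beyond a flat TX cap (true CIR/EIR with burst allowance) would have to be built in software, e.g. with tc shaping on the PF.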
Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds
Hi, I would like to report that the recently released Intel NIC drivers, version 19.3, produce huge 6000+ DPC latency spikes every few seconds.
My specs:
Intel(R) 82579LM Gigabit Network Connection
Windows 7 SP1 32-bit + latest Windows updates
I downgraded to the previous Intel NIC driver version, 19.1, and the problem is simply gone.
Intel(R) Ethernet Connection (2) I219-V: what's with the "(2)"?
I have an Asus Z170 Pro Gaming motherboard with the I219-V LAN port, but even on a fresh Windows 7 or Windows 10 installation the port always shows up with the "(2)" in its name, even though it's the only LAN port available and it's the first time drivers have been installed for it (for that OS installation, after formatting the HDD). The driver version I'm currently using is 12.15.25.6 from earlier this year.
I'm pretty sure that at some stage I've seen it listed simply as Intel(R) Ethernet Connection I219-V, and I'd like to get back to that, but how? FWIW the hardware is working fine; I'm just fussy about my system configuration and would like it to be as originally intended.
Asus Maximus VIII Impact Motherboard Ethernet Issue
Hi everyone, I'm trying to track down an issue with my LAN port. My motherboard uses an Intel I219-V controller, and this is a new HTPC mini-ITX build.
So far I've looked over a few things, such as downloading the latest chipset drivers and updating the BIOS on the Asus board. Since the very first POST I have been suspicious of a hardware issue, due to the lack of any port activity via the lights one normally expects to see on an Ethernet port. In my limited experience I usually see a light in or near the port whether it is active or not, but this board's port has shown no lights at all so far. However, I thought I might run this by the community here in case someone else has encountered a LAN issue with this controller, or a problem with Asus ROG products with respect to the network controller. I am open to suggestions and have a fair amount of experience with PC hardware; I sometimes do repairs for friends and family.
Any troubleshooting suggestions? I hope I might be so lucky as to have simply missed an option in the BIOS setup, but I haven't looked into that yet.
Wireless on the Asus board appears to work well; it uses a Qualcomm Atheros dual-band adapter and antenna. So far it performs well in every part of my home, but I prefer old-school wired networking if at all possible.
This was motherboard number 2, as the first board came with some bent pins and showed signs of being an open-box unit. This board, however, came sealed, and everything checked out OK; the first POST went well after breadboarding it just to confirm it wasn't DOA. Anyone who has built into a small-form-factor case knows how much trouble this can save you.
So, does the I219-V have any known issues that might cause an Ethernet port to appear dead?
CT Desktop Adapter has detected an internal error
We are using the Gigabit CT Desktop Adapter in 2 identical servers running Windows Server 2012 R2, Version 6.3 (Build 9600).
On both servers, the Gigabit CT Desktop Adapter sometimes suddenly stops working.
The event viewer shows this message:
Miniport Intel(R) Gigabit CT Desktop Adapter, {5bc59bfb-de9f-43cd-8291-5b76da5ea58c}, had event Fatal error: The miniport has detected an internal error
And Device Manager shows a yellow triangle with an exclamation point and error code 43.
We have replaced the adapter in both machines, and in one machine we have also replaced the mainboard. Nevertheless, the error occurs again and again.
If we restart the machine, the adapter works again for some time.
We are using a FUJITSU D3401-H1 mainboard:
System Manufacturer FUJITSU
System Model D3401-H1
System Type x64-based PC
System SKU S26361-Kxxx-Vyyy
Processor Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, 3401 Mhz, 4 Core(s), 8 Logical Processor(s)
BIOS Version/Date FUJITSU // American Megatrends Inc. V5.0.0.11 R1.14.0 for D3401-H1x, 6/9/2016
SMBIOS Version 3.0
Embedded Controller Version 255.255
BIOS Mode Legacy
BaseBoard Manufacturer FUJITSU
- Does the adapter have incompatibilities with this mainboard?
- Is this operating system supported for the Gigabit CT Desktop Adapter?
- What is the reason for the errors, and how can they be stopped?
I have also tried uninstalling and reinstalling the adapter drivers.
I have downloaded the Intel drivers and also the Intel Driver Update Utility, which searches for Intel hardware on the computer and updates the drivers, but the Driver Update Utility does not find this hardware.
I have downloaded the matching driver package "PROWinx64.exe" and installed it, with no effect.
The installed driver is:
C:\Windows\system32\DRIVERS\e1i63x64.sys
Provider: Intel
Version: 12.6.47.0
If I uninstall it, this driver is installed again automatically, and the adapter works for some time.
What is the reason for the error and what can we do to avoid it?
How can I get VLAN working in Windows 10?
It's Windows 10 2016 LTSB 1607, build 14393, using PROSet version 22.4.
The VLAN configuration tab tells me to update Windows 10, even though I'm using the version specified here: https://www.intel.nl/content/www/nl/nl/support/network-and-i-o/000022282.html
This is for embedded devices that won't have an internet connection, so if I need a Windows update I'd like to know which specific update(s).
SR-IOV with IXGBE - VLAN packets getting spoofed
Hi All,
I am using RHEL 7.3 with Intel 82599ES NICs to launch VMs with SR-IOV-enabled NICs, configuring only one VF per PF. I configure this VF with a VLAN, trust mode on, and spoof checking disabled.
But when I send VLAN-tagged packets from the guest VM, I can see the "Spoofed packets detected" message in dmesg for this PF.
We have also disabled RX/TX VLAN offload using the ethtool command.
Here are setup details:
Kernel version
# uname -r
3.10.0-514.el7.x86_64
PF/VF configuration:
# ip link show eth2
4: eth2: <BROADCAST,MULTICAST,ALLMULTI,PROMISC,UP,LOWER_UP> mtu 9192 qdisc mq state UP mode DEFAULT qlen 1000
link/ether 90:e2:ba:a5:98:7c brd ff:ff:ff:ff:ff:ff
vf 0 MAC fa:16:3e:73:12:6c, vlan 1500, spoof checking off, link-state auto, trust on
IXGBE version
# ethtool -i eth2
driver: ixgbe
version: 4.4.0-k-rh7.3
firmware-version: 0x61bd0001
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no
Messages from dmesg
[441100.018278] ixgbe 0000:81:00.0 eth2: 3 Spoofed packets detected
[441102.022383] ixgbe 0000:81:00.0 eth2: 2 Spoofed packets detected
[441104.026460] ixgbe 0000:81:00.0 eth2: 3 Spoofed packets detected
[441106.030516] ixgbe 0000:81:00.0 eth2: 2 Spoofed packets detected
LSPCI output
# lspci -nn | grep Ether | grep 82599
81:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
81:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
81:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
Ethtool -k output
# ethtool -k eth2 | grep vlan
rx-vlan-offload: off
tx-vlan-offload: off
rx-vlan-filter: on
vlan-challenged: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
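One thing worth checking, based on my understanding of the ixgbe port-VLAN behavior: when the PF assigns vlan 1500 to the VF (as in the `ip link show` output above), the hardware inserts and strips that tag itself, so frames leaving the guest should be untagged; VLAN-tagged frames sent from inside the guest can then trip the anti-spoof logic. A sketch of the two internally consistent configurations, using the eth2/VF 0 names from above:

```shell
# Option A: port VLAN handled by the host; the guest sends untagged frames
ip link set dev eth2 vf 0 vlan 1500

# Option B: no port VLAN; the guest tags its own traffic on its VF interface
ip link set dev eth2 vf 0 vlan 0
```

Mixing the two (a port VLAN on the PF plus guest-side tagging) is, in my experience, the combination that produces the "Spoofed packets detected" counter even with spoof checking nominally off.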
Please let me know if you need any other information.
Regards
Pratik
Need firmware for Intel 82580 in my Oracle Linux 5.9
Hi,
We have 2 Intel 82580 NICs in our Oracle Linux 5.9 system, but one NIC's firmware version is 3.0 and the other's is 3.9. We want to upgrade both to the same firmware version and then bond them.
My question is: where can I download version 3.9 of the firmware for the Intel 82580?
[root@sfsdb5 ~]# lspci |grep net
04:00.0 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.1 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.2 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
04:00.3 Ethernet controller: NetXen Incorporated NX3031 Multifunction 1/10-Gigabit Server Adapter (rev 42)
05:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
05:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
05:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
05:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
08:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
08:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
08:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
08:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
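Incidentally, ethtool reports each port's firmware revision directly, which makes it easy to confirm the 3.0 vs 3.9 mismatch before and after flashing (the interface names below are assumptions; substitute the names that map to the 82580 ports):

```shell
# Print driver and firmware revision for each 82580 port
for dev in eth0 eth1; do
    echo "== $dev =="
    ethtool -i "$dev" | grep -E '^(driver|firmware-version)'
done
```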
Difference in DPDK and Native IXGBE driver support for 82599 NIC
Hello All,
We have been trying to make unicast promiscuous mode work on RHEL 7.3 with the latest native ixgbe driver (ixgbe-5.1.3), but it seems that unicast promiscuous mode is not enabled for 82599-series NICs in the native driver.
I can see an explicit check in the ixgbe_sriov.c code: before enabling promiscuous mode, it checks whether the NIC is 82599EB or older, and if so it returns.
Adding snippet below:
case IXGBEVF_XCAST_MODE_PROMISC:
    if (hw->mac.type <= ixgbe_mac_82599EB)
        return -EOPNOTSUPP;

    fctrl = IXGBE_READ_REG(hw, IXGBE_FCTRL);
    if (!(fctrl & IXGBE_FCTRL_UPE)) {
        /* VF promisc requires PF in promisc */
        e_warn(drv, "Enabling VF promisc requires PF in promisc\n");
        return -EPERM;
    }
    disable = 0;
    enable = IXGBE_VMOLR_BAM | IXGBE_VMOLR_ROMPE |
             IXGBE_VMOLR_MPE | IXGBE_VMOLR_UPE | IXGBE_VMOLR_VPE;
    break;
But when I look at the corresponding code in the DPDK 16.11 release, I can see that support has been added for the 82599 NIC family. The feature appears to be implemented using the IXGBE_VMOLR_ROPE flag.
Relevant snippet from DPDK code:
uint32_t
ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
{
    uint32_t new_val = orig_val;

    if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
        new_val |= IXGBE_VMOLR_AUPE;
    if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
        new_val |= IXGBE_VMOLR_ROMPE;
    if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
        new_val |= IXGBE_VMOLR_ROPE;
    if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
        new_val |= IXGBE_VMOLR_BAM;
    if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
        new_val |= IXGBE_VMOLR_MPE;

    return new_val;
}
So, can you please explain why there is such a difference in supported NICs between the two drivers? And can similar functionality be ported to the native ixgbe driver?
Other setup details
Kernel version
# uname -r
3.10.0-514.el7.x86_64
LSPCI output
# lspci -nn | grep Ether | grep 82599
81:00.0 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
81:00.1 Ethernet controller [0200]: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection [8086:10fb] (rev 01)
81:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
# ethtool -i eth2
driver: ixgbe
version: 5.1.3
firmware-version: 0x61bd0001
expansion-rom-version:
bus-info: 0000:81:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
Regards
Pratik
Issue with setting smp_affinity on ixgbe cards
Hi,
I am using a Dell PowerEdge R730 with dual Xeons, 22 cores each, and 6 ixgbe-compatible cards, running Linux with ixgbe driver version 4.4.0-k on kernel versions 4.7.10 and 4.9.6.
I load the ixgbe modules at boot time, bring up the interfaces, and set smp_affinity for the cards using the set_irq_affinity script, so that all RxTx IRQs are distributed across all available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails, and I have to re-run the script manually one or more times for the desired settings to be applied. There have also been several occasions when the settings were not applied at all, and it took several reboots before the script started working again.
The problem occurs not only at random times but also on random NIC controllers, so I am excluding failed hardware as a possibility, especially since I have also swapped the NICs.
I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the error returned by the affinity-setting function irq_do_set_affinity is EBUSY, but sometimes it returns ENOSPC.
Further investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script; but if the error returned was ENOSPC, it takes several reboots for the problem to disappear.
In order to provide some more details on the system I am attaching two text files with the output of the modinfo of the ixgbe and lspci on the machine.
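For anyone trying to reproduce this: the manual equivalent of what set_irq_affinity does is a write to /proc/irq/<n>/smp_affinity, and the EBUSY/ENOSPC returns surface there as failed writes. A sketch (the interface name and IRQ number are assumptions; take real values from /proc/interrupts):

```shell
# List the RxTx vectors for one interface
grep eth2 /proc/interrupts

# Pin IRQ 85 to CPU 0 (bitmask 0x1); a failing write here reflects the
# EBUSY/ENOSPC returns from irq_do_set_affinity in the kernel
echo 1 > /proc/irq/85/smp_affinity

# Read the mask back to confirm the setting actually took effect
cat /proc/irq/85/smp_affinity
```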
i40e Ethernet Connection XL710 Network Driver version 1.5.10-k on kernel 2.6.32-696 not loading correctly
Has anyone run into a similar issue? After a yum update to kernel 2.6.32-696.3.2.el6.x86_64 (or 2.6.32-696), my bond2 interface stops working correctly.
After that I am unable to set speed settings and unable to ping anything. This causes my NFS shares to stop working, as they are mounted via that NIC.
When I roll back to 2.6.32-642.13.1.el6.x86_64, it starts working right away.
From dmesg it looks like the kernel is unable to detect that we are using 10 Gbps cards. How do I proceed with reporting this bug?
======================================================
2.6.32-696.3.2.el6.x86_64
======================================================
# modinfo i40e
filename: /lib/modules/2.6.32-696.3.2.el6.x86_64/kernel/drivers/net/i40e/i40e.ko
version: 1.5.10-k
license: GPL
description: Intel(R) Ethernet Connection XL710 Network Driver
author: Intel Corporation, <e1000-devel@lists.sourceforge.net>
srcversion: B5DC8E286FEFB9414076D56
alias: pci:v00008086d00001588sv*sd*bc*sc*i*
alias: pci:v00008086d00001587sv*sd*bc*sc*i*
alias: pci:v00008086d000037D4sv*sd*bc*sc*i*
alias: pci:v00008086d000037D3sv*sd*bc*sc*i*
alias: pci:v00008086d000037D2sv*sd*bc*sc*i*
alias: pci:v00008086d000037D1sv*sd*bc*sc*i*
alias: pci:v00008086d000037D0sv*sd*bc*sc*i*
alias: pci:v00008086d000037CFsv*sd*bc*sc*i*
alias: pci:v00008086d000037CEsv*sd*bc*sc*i*
alias: pci:v00008086d00001587sv*sd*bc*sc*i*
alias: pci:v00008086d00001589sv*sd*bc*sc*i*
alias: pci:v00008086d00001586sv*sd*bc*sc*i*
alias: pci:v00008086d00001585sv*sd*bc*sc*i*
alias: pci:v00008086d00001584sv*sd*bc*sc*i*
alias: pci:v00008086d00001583sv*sd*bc*sc*i*
alias: pci:v00008086d00001581sv*sd*bc*sc*i*
alias: pci:v00008086d00001580sv*sd*bc*sc*i*
alias: pci:v00008086d00001574sv*sd*bc*sc*i*
alias: pci:v00008086d00001572sv*sd*bc*sc*i*
depends: ptp
vermagic: 2.6.32-696.3.2.el6.x86_64 SMP mod_unload modversions
parm: debug:Debug level (0=none,...,16=all) (int)
# grep i40e /tmp/dmesg-2.6.32-696.3.2.el6.x86_64
i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.5.10-k
i40e: Copyright (c) 2013 - 2014 Intel Corporation.
i40e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
i40e 0000:0b:00.0: setting latency timer to 64
i40e 0000:0b:00.0: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0
i40e 0000:0b:00.0: MAC address: <REDACTED>
i40e 0000:0b:00.0: irq 85 for MSI/MSI-X
i40e 0000:0b:00.0: irq 86 for MSI/MSI-X
i40e 0000:0b:00.0: irq 87 for MSI/MSI-X
i40e 0000:0b:00.0: irq 88 for MSI/MSI-X
i40e 0000:0b:00.0: irq 89 for MSI/MSI-X
i40e 0000:0b:00.0: irq 90 for MSI/MSI-X
i40e 0000:0b:00.0: irq 91 for MSI/MSI-X
i40e 0000:0b:00.0: irq 92 for MSI/MSI-X
i40e 0000:0b:00.0: irq 93 for MSI/MSI-X
i40e 0000:0b:00.0: irq 94 for MSI/MSI-X
i40e 0000:0b:00.0: irq 95 for MSI/MSI-X
i40e 0000:0b:00.0: irq 96 for MSI/MSI-X
i40e 0000:0b:00.0: irq 97 for MSI/MSI-X
i40e 0000:0b:00.0: irq 98 for MSI/MSI-X
i40e 0000:0b:00.0: irq 99 for MSI/MSI-X
i40e 0000:0b:00.0: irq 100 for MSI/MSI-X
i40e 0000:0b:00.0: irq 101 for MSI/MSI-X
i40e 0000:0b:00.0: irq 102 for MSI/MSI-X
i40e 0000:0b:00.0: irq 103 for MSI/MSI-X
i40e 0000:0b:00.0: irq 104 for MSI/MSI-X
i40e 0000:0b:00.0: irq 105 for MSI/MSI-X
i40e 0000:0b:00.0: irq 106 for MSI/MSI-X
i40e 0000:0b:00.0: irq 107 for MSI/MSI-X
i40e 0000:0b:00.0: irq 108 for MSI/MSI-X
i40e 0000:0b:00.0: irq 109 for MSI/MSI-X
i40e 0000:0b:00.0: irq 110 for MSI/MSI-X
i40e 0000:0b:00.0: PCI-Express: Speed 8.0GT/s Width x8
i40e 0000:0b:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA
i40e 0000:0b:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16
i40e 0000:0b:00.1: setting latency timer to 64
i40e 0000:0b:00.1: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0
i40e 0000:0b:00.1: MAC address: <REDACTED>
i40e 0000:0b:00.1: irq 111 for MSI/MSI-X
i40e 0000:0b:00.1: irq 112 for MSI/MSI-X
i40e 0000:0b:00.1: irq 113 for MSI/MSI-X
i40e 0000:0b:00.1: irq 114 for MSI/MSI-X
i40e 0000:0b:00.1: irq 115 for MSI/MSI-X
i40e 0000:0b:00.1: irq 116 for MSI/MSI-X
i40e 0000:0b:00.1: irq 117 for MSI/MSI-X
i40e 0000:0b:00.1: irq 118 for MSI/MSI-X
i40e 0000:0b:00.1: irq 119 for MSI/MSI-X
i40e 0000:0b:00.1: irq 120 for MSI/MSI-X
i40e 0000:0b:00.1: irq 121 for MSI/MSI-X
i40e 0000:0b:00.1: irq 122 for MSI/MSI-X
i40e 0000:0b:00.1: irq 123 for MSI/MSI-X
i40e 0000:0b:00.1: irq 124 for MSI/MSI-X
i40e 0000:0b:00.1: irq 125 for MSI/MSI-X
i40e 0000:0b:00.1: irq 126 for MSI/MSI-X
i40e 0000:0b:00.1: irq 127 for MSI/MSI-X
i40e 0000:0b:00.1: irq 128 for MSI/MSI-X
i40e 0000:0b:00.1: irq 129 for MSI/MSI-X
i40e 0000:0b:00.1: irq 130 for MSI/MSI-X
i40e 0000:0b:00.1: irq 131 for MSI/MSI-X
i40e 0000:0b:00.1: irq 132 for MSI/MSI-X
i40e 0000:0b:00.1: irq 133 for MSI/MSI-X
i40e 0000:0b:00.1: irq 134 for MSI/MSI-X
i40e 0000:0b:00.1: irq 135 for MSI/MSI-X
i40e 0000:0b:00.1: irq 136 for MSI/MSI-X
i40e 0000:0b:00.1: PCI-Express: Speed 8.0GT/s Width x8
i40e 0000:0b:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA
i40e 0000:0b:00.0: eth8: already using mac address <REDACTED>
i40e 0000:0b:00.1: eth9: set new mac address <REDACTED>
# ethtool -i bond2
driver: bonding
version: 3.7.1
firmware-version: 2
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# cat /proc/net/bonding/bond2
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: None
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth9
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: <REDACTED>
Slave queue ID: 0
Slave Interface: eth8
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: <REDACTED>
Slave queue ID: 0
ethtool -s bond2 speed 10000 duplex full autoneg off
Cannot set new settings: Operation not supported
not setting speed
not setting duplex
not setting autoneg
======================================================
2.6.32-642.13.1.el6.x86_64
======================================================
# modinfo i40e
filename: /lib/modules/2.6.32-642.13.1.el6.x86_64/kernel/drivers/net/i40e/i40e.ko
version: 1.4.7-k
license: GPL
description: Intel(R) Ethernet Connection XL710 Network Driver
author: Intel Corporation, <e1000-devel@lists.sourceforge.net>
srcversion: B91F227B49241127F18771D
alias: pci:v00008086d00001588sv*sd*bc*sc*i*
alias: pci:v00008086d00001587sv*sd*bc*sc*i*
alias: pci:v00008086d000037D2sv*sd*bc*sc*i*
alias: pci:v00008086d000037D1sv*sd*bc*sc*i*
alias: pci:v00008086d000037D0sv*sd*bc*sc*i*
alias: pci:v00008086d00001587sv*sd*bc*sc*i*
alias: pci:v00008086d00001589sv*sd*bc*sc*i*
alias: pci:v00008086d00001586sv*sd*bc*sc*i*
alias: pci:v00008086d00001585sv*sd*bc*sc*i*
alias: pci:v00008086d00001584sv*sd*bc*sc*i*
alias: pci:v00008086d00001583sv*sd*bc*sc*i*
alias: pci:v00008086d00001581sv*sd*bc*sc*i*
alias: pci:v00008086d00001580sv*sd*bc*sc*i*
alias: pci:v00008086d0000157Fsv*sd*bc*sc*i*
alias: pci:v00008086d00001574sv*sd*bc*sc*i*
alias: pci:v00008086d00001572sv*sd*bc*sc*i*
depends: ptp
vermagic: 2.6.32-642.13.1.el6.x86_64 SMP mod_unload modversions
parm: debug:Debug level (0=none,...,16=all) (int)
# grep i40e /tmp/dmesg-2.6.32-642.13.1.el6.x86_64
i40e: Intel(R) Ethernet Connection XL710 Network Driver - version 1.4.7-k
i40e: Copyright (c) 2013 - 2014 Intel Corporation.
i40e 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
i40e 0000:0b:00.0: setting latency timer to 64
i40e 0000:0b:00.0: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0
i40e 0000:0b:00.0: MAC address: <REDACTED>
i40e 0000:0b:00.0: irq 85 for MSI/MSI-X
i40e 0000:0b:00.0: irq 86 for MSI/MSI-X
i40e 0000:0b:00.0: irq 87 for MSI/MSI-X
i40e 0000:0b:00.0: irq 88 for MSI/MSI-X
i40e 0000:0b:00.0: irq 89 for MSI/MSI-X
i40e 0000:0b:00.0: irq 90 for MSI/MSI-X
i40e 0000:0b:00.0: irq 91 for MSI/MSI-X
i40e 0000:0b:00.0: irq 92 for MSI/MSI-X
i40e 0000:0b:00.0: irq 93 for MSI/MSI-X
i40e 0000:0b:00.0: irq 94 for MSI/MSI-X
i40e 0000:0b:00.0: irq 95 for MSI/MSI-X
i40e 0000:0b:00.0: irq 96 for MSI/MSI-X
i40e 0000:0b:00.0: irq 97 for MSI/MSI-X
i40e 0000:0b:00.0: irq 98 for MSI/MSI-X
i40e 0000:0b:00.0: irq 99 for MSI/MSI-X
i40e 0000:0b:00.0: irq 100 for MSI/MSI-X
i40e 0000:0b:00.0: irq 101 for MSI/MSI-X
i40e 0000:0b:00.0: irq 102 for MSI/MSI-X
i40e 0000:0b:00.0: irq 103 for MSI/MSI-X
i40e 0000:0b:00.0: irq 104 for MSI/MSI-X
i40e 0000:0b:00.0: irq 105 for MSI/MSI-X
i40e 0000:0b:00.0: irq 106 for MSI/MSI-X
i40e 0000:0b:00.0: irq 107 for MSI/MSI-X
i40e 0000:0b:00.0: irq 108 for MSI/MSI-X
i40e 0000:0b:00.0: irq 109 for MSI/MSI-X
i40e 0000:0b:00.0: irq 110 for MSI/MSI-X
i40e 0000:0b:00.0: PCI-Express: Speed 8.0GT/s Width x8
i40e 0000:0b:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA
i40e 0000:0b:00.1: PCI INT A -> GSI 16 (level, low) -> IRQ 16
i40e 0000:0b:00.1: setting latency timer to 64
i40e 0000:0b:00.1: fw 4.50.37442 api 1.4 nvm 4.60 0x80001f47 1.3072.0
i40e 0000:0b:00.1: MAC address: <REDACTED>
i40e 0000:0b:00.1: irq 111 for MSI/MSI-X
i40e 0000:0b:00.1: irq 112 for MSI/MSI-X
i40e 0000:0b:00.1: irq 113 for MSI/MSI-X
i40e 0000:0b:00.1: irq 114 for MSI/MSI-X
i40e 0000:0b:00.1: irq 115 for MSI/MSI-X
i40e 0000:0b:00.1: irq 116 for MSI/MSI-X
i40e 0000:0b:00.1: irq 117 for MSI/MSI-X
i40e 0000:0b:00.1: irq 118 for MSI/MSI-X
i40e 0000:0b:00.1: irq 119 for MSI/MSI-X
i40e 0000:0b:00.1: irq 120 for MSI/MSI-X
i40e 0000:0b:00.1: irq 121 for MSI/MSI-X
i40e 0000:0b:00.1: irq 122 for MSI/MSI-X
i40e 0000:0b:00.1: irq 123 for MSI/MSI-X
i40e 0000:0b:00.1: irq 124 for MSI/MSI-X
i40e 0000:0b:00.1: irq 125 for MSI/MSI-X
i40e 0000:0b:00.1: irq 126 for MSI/MSI-X
i40e 0000:0b:00.1: irq 127 for MSI/MSI-X
i40e 0000:0b:00.1: irq 128 for MSI/MSI-X
i40e 0000:0b:00.1: irq 129 for MSI/MSI-X
i40e 0000:0b:00.1: irq 130 for MSI/MSI-X
i40e 0000:0b:00.1: irq 131 for MSI/MSI-X
i40e 0000:0b:00.1: irq 132 for MSI/MSI-X
i40e 0000:0b:00.1: irq 133 for MSI/MSI-X
i40e 0000:0b:00.1: irq 134 for MSI/MSI-X
i40e 0000:0b:00.1: irq 135 for MSI/MSI-X
i40e 0000:0b:00.1: irq 136 for MSI/MSI-X
i40e 0000:0b:00.1: PCI-Express: Speed 8.0GT/s Width x8
i40e 0000:0b:00.1: Features: PF-id[1] VFs: 64 VSIs: 66 QP: 16 RX: 1BUF RSS FD_ATR FD_SB NTUPLE VxLAN PTP VEPA
i40e 0000:0b:00.0: eth8: already using mac address <REDACTED>
i40e 0000:0b:00.0: eth8: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None
i40e 0000:0b:00.1: eth9: set new mac address <REDACTED>
i40e 0000:0b:00.1: eth9: NIC Link is Up 10 Gbps Full Duplex, Flow Control: None
# ethtool -i bond2
driver: bonding
version: 3.7.1
firmware-version: 2
bus-info:
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# cat /proc/net/bonding/bond2
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth8
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth8
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: <REDACTED>
Slave queue ID: 0
Slave Interface: eth9
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: <REDACTED>
Slave queue ID: 0
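When comparing the two kernels, it may help to check the slave interfaces directly, independent of the bonding layer; notice that on the failing kernel the dmesg above never prints the "NIC Link is Up" lines that appear on the working kernel. A sketch, using the eth8/eth9 names from the logs above:

```shell
# Does the physical port itself report link and speed?
ethtool eth8 | grep -E 'Speed|Link detected'

# Which i40e build actually loaded after the yum update?
modinfo -F version i40e

# Did the driver ever report link in the kernel log?
dmesg | grep 'NIC Link'
```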
Issue with X710 and XL710 on Dell PowerEdge server + RedHat 7.2
Hi Intel community,
We have a serious problem with Intel cards (X710 and XL710).
The Linux server (Dell PowerEdge R630) doesn't see them at all under RedHat 7.2. I don't see the interfaces in ifconfig -a.
I have installed the latest drivers (ixgbe-5.1.3 and i40e-2.0.26) but did not succeed in updating the firmware (in case that is the problem).
Here below the output of my server:
- From "modinfo" I get the following output:
[root@TBOS ~]# modinfo i40e
filename: /lib/modules/3.10.0-327.el7.x86_64/updates/drivers/net/ethernet/intel/i40e/i40e.ko
version: 2.0.26
license: GPL
description: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver
author: Intel Corporation, <e1000-devel@lists.sourceforge.net>
rhelversion: 7.2
srcversion: F49696A466EC36F89F8FE86
alias: pci:v00008086d0000158Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000158Asv*sd*bc*sc*i*
alias: pci:v00008086d000037D3sv*sd*bc*sc*i*
alias: pci:v00008086d000037D2sv*sd*bc*sc*i*
alias: pci:v00008086d000037D1sv*sd*bc*sc*i*
alias: pci:v00008086d000037D0sv*sd*bc*sc*i*
alias: pci:v00008086d000037CFsv*sd*bc*sc*i*
alias: pci:v00008086d000037CEsv*sd*bc*sc*i*
alias: pci:v00008086d0000374Csv*sd*bc*sc*i*
alias: pci:v00008086d00001588sv*sd*bc*sc*i*
alias: pci:v00008086d00001587sv*sd*bc*sc*i*
alias: pci:v00008086d00001589sv*sd*bc*sc*i*
alias: pci:v00008086d00001586sv*sd*bc*sc*i*
alias: pci:v00008086d00001585sv*sd*bc*sc*i*
alias: pci:v00008086d00001584sv*sd*bc*sc*i*
alias: pci:v00008086d00001583sv*sd*bc*sc*i*
alias: pci:v00008086d00001581sv*sd*bc*sc*i*
alias: pci:v00008086d00001580sv*sd*bc*sc*i*
alias: pci:v00008086d00001574sv*sd*bc*sc*i*
alias: pci:v00008086d00001572sv*sd*bc*sc*i*
depends: ptp,vxlan
vermagic: 3.10.0-327.el7.x86_64 SMP mod_unload modversions
parm: debug:Debug level (0=none,...,16=all) (int)
[root@TBOS ~]#
- From ifconfig -a, we don't see the ports at all.
- When I try to update the firmware, I get:
[root@TBOS Linux_x64]# ./nvmupdate64e
Intel(R) Ethernet NVM Update Tool
NVMUpdate version 1.28.19.4
Copyright (C) 2013 - 2016 Intel Corporation.
WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.
Inventory in progress. Please wait [*|........]
Num Description Ver. DevId S:B Status
=== ======================================== ===== ===== ====== ===============
01) Intel(R) Ethernet Converged Network 1572 00:004 Access error
Adapter X710
Tool execution completed with the following status: Device not found
Press any key to exit.
- Dmesg output:
[root@TBOS ~]# dmesg| grep i40
[ 3.549403] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.0.26
[ 3.549405] i40e: Copyright(c) 2013 - 2017 Intel Corporation.
[ 3.578751] i40e 0000:04:00.0: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10
[ 3.578754] i40e 0000:04:00.0: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.
[ 3.822134] i40e 0000:04:00.0: MAC address: 3c:fd:fe:0c:cb:e0
[ 3.835115] i40e 0000:04:00.0: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM
[ 3.835118] i40e 0000:04:00.0: DCB init failed -53, disabled
[ 3.835166] i40e 0000:04:00.0: irq 91 for MSI/MSI-X
…..
[ 3.836118] i40e 0000:04:00.0: irq 148 for MSI/MSI-X
[ 4.050907] i40e 0000:04:00.0: Added LAN device PF0 bus=0x04 dev=0x00 func=0x00
[ 4.050912] i40e 0000:04:00.0: PCI-Express: Speed 8.0GT/s Width x8
[ 4.080877] i40e 0000:04:00.0: Features: PF-id[0] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA
[ 4.094861] i40e 0000:04:00.1: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10
[ 4.094864] i40e 0000:04:00.1: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.
[ 4.339689] i40e 0000:04:00.1: MAC address: 3c:fd:fe:0c:cb:e2
[ 4.349612] i40e 0000:04:00.1: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM
[ 4.349615] i40e 0000:04:00.1: DCB init failed -53, disabled
[ 4.349684] i40e 0000:04:00.1: irq 150 for MSI/MSI-X
……
[ 4.350710] i40e 0000:04:00.1: irq 207 for MSI/MSI-X
[ 4.499464] i40e 0000:04:00.1: Added LAN device PF1 bus=0x04 dev=0x00 func=0x01
[ 4.499469] i40e 0000:04:00.1: PCI-Express: Speed 8.0GT/s Width x8
[ 4.529440] i40e 0000:04:00.1: Features: PF-id[1] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA
[ 4.543425] i40e 0000:04:00.2: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10
[ 4.543427] i40e 0000:04:00.2: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.
[ 4.989984] i40e 0000:04:00.2: MAC address: 3c:fd:fe:0c:cb:e4
[ 5.000011] i40e 0000:04:00.2: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM
[ 5.000014] i40e 0000:04:00.2: DCB init failed -53, disabled
[ 5.000085] i40e 0000:04:00.2: irq 208 for MSI/MSI-X
…..
[ 5.001379] i40e 0000:04:00.2: irq 265 for MSI/MSI-X
[ 5.228749] i40e 0000:04:00.2: Added LAN device PF2 bus=0x04 dev=0x00 func=0x02
[ 5.228754] i40e 0000:04:00.2: PCI-Express: Speed 8.0GT/s Width x8
[ 5.263716] i40e 0000:04:00.2: Features: PF-id[2] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA
[ 5.277703] i40e 0000:04:00.3: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10
[ 5.277705] i40e 0000:04:00.3: The driver for the device detected an older version of the NVM image than expected. Please update the NVM image.
[ 5.566417] i40e 0000:04:00.3: MAC address: 3c:fd:fe:0c:cb:e6
[ 5.576408] i40e 0000:04:00.3: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM
[ 5.576411] i40e 0000:04:00.3: DCB init failed -53, disabled
[ 5.576487] i40e 0000:04:00.3: irq 266 for MSI/MSI-X
……
[ 5.577839] i40e 0000:04:00.3: irq 323 for MSI/MSI-X
[ 5.725374] i40e 0000:04:00.3: Added LAN device PF3 bus=0x04 dev=0x00 func=0x03
[ 5.725383] i40e 0000:04:00.3: PCI-Express: Speed 8.0GT/s Width x8
[ 5.755339] i40e 0000:04:00.3: Features: PF-id[3] VFs: 32 VSIs: 34 QP: 48 RSS FD_ATR FD_SB NTUPLE CloudF VxLAN Geneve NVGRE PTP VEPA
[ 67.004064] i40e 0000:04:00.0: removed PHC from p2p1
[ 67.040316] i40e 0000:04:00.0: Deleted LAN device PF0 bus=0x04 dev=0x00 func=0x00
[ 70.039865] i40e 0000:04:00.1: removed PHC from p2p2
[ 70.070096] i40e 0000:04:00.1: Deleted LAN device PF1 bus=0x04 dev=0x00 func=0x01
[ 73.240777] i40e 0000:04:00.2: removed PHC from p2p3
[ 73.279308] i40e 0000:04:00.2: Deleted LAN device PF2 bus=0x04 dev=0x00 func=0x02
[ 74.650315] i40e 0000:04:00.3: removed PHC from p2p4
[ 74.690215] i40e 0000:04:00.3: Deleted LAN device PF3 bus=0x04 dev=0x00 func=0x03
[root@TBOS ~]#
- From lspci | egrep -i "Network|Ethernet" we see the following output:
- 04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
- 04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
- 04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
- 04:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)
Please help ...
Thanks
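For reference, the fw/api/nvm version triplet that the driver complains about can be pulled out of a captured dmesg line with a small pipeline. This is just a parsing sketch; the sample line is copied verbatim from the log above, and on a live system the same versions are reported by `ethtool -i <interface>`.

```shell
#!/bin/sh
# Extract the firmware and NVM versions from a captured i40e dmesg line.
# The sample line is copied verbatim from the log in this post.
line='[    4.543425] i40e 0000:04:00.2: fw 4.33.31377 api 1.2 nvm 4.41 0x80001869 16.5.10'
fw=$(echo "$line"  | sed -n 's/.*fw \([0-9.]*\) api.*/\1/p')
nvm=$(echo "$line" | sed -n 's/.*nvm \([0-9.]*\) .*/\1/p')
echo "driver-reported firmware=$fw nvm=$nvm"
```

The "Please update the NVM image" warning means the loaded i40e driver expects a newer NVM than the 4.41 image the adapter carries, which is consistent with the DCB init failures that follow it in the log.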
X710-4 NVM Tool Reports "Update not found"
Hi, I have several X710-DA4 adapters that I purchased at different times, and on some of them I was able to grab the latest firmware (5.05) and upgrade. nvmupdate64e and ethtool show this on the good ones:
driver: i40e
version: 1.6.42
firmware-version: 5.05 0x8000289d 1.1568.0
bus-info: 0000:85:00.2
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.
Inventory in progress. Please wait [.........*]
Num Description Ver. DevId S:B Status
=== ======================================== ===== ===== ====== ===============
01) Intel(R) Ethernet Converged Network 5.05 1572 00:004 Up to date
Adapter X710-4
02) Intel(R) I350 Gigabit Network Connection 1.99 1521 00:129 Update not
available
03) Intel(R) Ethernet Converged Network 5.05 1572 00:133 Up to date
Adapter X710-4
On the other box, it will not let me upgrade:
driver: i40e
version: 2.0.23
firmware-version: 4.10 0x800011c5 0.0.0
bus-info: 0000:01:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.
Inventory in progress. Please wait [|.........]
Num Description Ver. DevId S:B Status
=== ======================================== ===== ===== ====== ===============
01) Intel(R) Ethernet Converged Network 4.10 1572 00:001 Update not
Adapter X710-4 available
02) Intel(R) I350 Gigabit Network Connection 1.99 1521 00:129 Update not
available
03) Intel(R) Ethernet Converged Network 4.10 1572 00:130 Update not
Adapter X710-4 available
Does anyone know what's wrong?
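One visible difference between the two boxes is the third field of ethtool's firmware-version string: 1.1568.0 on the adapter that updated, versus 0.0.0 on the one that refuses. Whether that field is what nvmupdate64e keys on is an assumption on my part, but splitting the string makes the comparison easy. The field labels below (nvm/eetrack/oem) are informal names I chose, not official ethtool terminology:

```shell
#!/bin/sh
# Split ethtool's firmware-version field into its three parts.
# The version strings are copied from the two boxes above; the labels
# (nvm/eetrack/oem) are informal names, not official ethtool output.
parse() {
    set -- $1
    echo "nvm=$1 eetrack=$2 oem=$3"
}
parse '5.05 0x8000289d 1.1568.0'   # box that updated to 5.05 fine
parse '4.10 0x800011c5 0.0.0'      # box stuck at "Update not available"
```

Also note the stuck box runs a newer driver (2.0.23) against older firmware (4.10), the reverse of the healthy box; comparing these fields across adapters is a quick way to spot which units the update package is declining to touch.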
Gigabit 82567V-2 No Wake on Lan Options
I'm trying to configure Wake-on-LAN on my Windows 10 Pro 64-bit HP Pavilion Elite HPE. I've configured my "lesser" machines just fine, but this one has the 82567V-2 gigabit card, which doesn't show any wake-on-magic-packet options in its advanced properties. I've tried many different options, but the card still won't stay awake when powered down. I have unchecked the option allowing the computer to power the device down, enabled Wake-on-LAN in the BIOS, and disabled fast boot and hibernate. The driver is up to date from Windows Update. I also searched for a specific driver on Intel's website, but there isn't one for Windows 10 64-bit. Your help is appreciated. Thank you.
IES API install problem & HNI driver
Hi all,
I am setting up the test environment for the FM10000. I downloaded the IES (Intel Ethernet Switch Software) API and tried to install it on Ubuntu 16.04 LTS. I followed the guideline in some documents. Generally, what I do is cd to ies/src and run the command
"sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents".
When I go back and check my folder (/home/brayn/Documents), nothing has been installed in it, so I assume it failed? Below is the message shown on the terminal.
brayn@brayn-Ultra-27:~/Documents/ies/src$ sudo make install PLATFORM=rubyRapids REF_PLATFORM=libertyTraili INSTALL_DIRECTORY=/home/brayn/Documents
make[1]: Entering directory '/home/brayn/Documents/ies/src'
/bin/mkdir -p '/usr/local/lib'
/bin/bash ../libtool --mode=install /usr/bin/install -c libFocalpointSDK.la libLTStdPlatform.la '/usr/local/lib'
libtool: install: /usr/bin/install -c .libs/libFocalpointSDK-4.1.3_0378_00314560.so /usr/local/lib/libFocalpointSDK-4.1.3_0378_00314560.so
libtool: install: (cd /usr/local/lib && { ln -s -f libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so || { rm -f libFocalpointSDK.so && ln -s libFocalpointSDK-4.1.3_0378_00314560.so libFocalpointSDK.so; }; })
libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.lai /usr/local/lib/libFocalpointSDK.la
libtool: install: /usr/bin/install -c .libs/libLTStdPlatform-4.1.3_0378_00314560.so /usr/local/lib/libLTStdPlatform-4.1.3_0378_00314560.so
libtool: install: (cd /usr/local/lib && { ln -s -f libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so || { rm -f libLTStdPlatform.so && ln -s libLTStdPlatform-4.1.3_0378_00314560.so libLTStdPlatform.so; }; })
libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.lai /usr/local/lib/libLTStdPlatform.la
libtool: install: /usr/bin/install -c .libs/libFocalpointSDK.a /usr/local/lib/libFocalpointSDK.a
libtool: install: chmod 644 /usr/local/lib/libFocalpointSDK.a
libtool: install: ranlib /usr/local/lib/libFocalpointSDK.a
libtool: install: /usr/bin/install -c .libs/libLTStdPlatform.a /usr/local/lib/libLTStdPlatform.a
libtool: install: chmod 644 /usr/local/lib/libLTStdPlatform.a
libtool: install: ranlib /usr/local/lib/libLTStdPlatform.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/sbin" ldconfig -n /usr/local/lib
----------------------------------------------------------------------
Libraries have been installed in:
/usr/local/lib
If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'
See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
make[1]: Nothing to be done for 'install-data-am'.
make[1]: Leaving directory '/home/brayn/Documents/ies/src'
Can anyone with experience help solve this problem?
Also, I can't find the HNI (Host Network Interface) driver mentioned in the guideline. Where should I download it? Thanks!
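For what it's worth, the libtool transcript above shows the install did succeed, just into the default /usr/local/lib prefix. INSTALL_DIRECTORY is not a variable that standard automake install rules consume; if the IES tree uses a stock autotools setup (an assumption on my part), the destination is normally fixed with --prefix at configure time, or the whole built tree can be staged elsewhere at install time with DESTDIR. A sketch of how DESTDIR composes with the prefix:

```shell
#!/bin/sh
# With automake, `make install DESTDIR=...` writes each file to
# $DESTDIR$prefix/..., so staging this build with DESTDIR would land here:
prefix=/usr/local                 # the default prefix seen in the transcript
DESTDIR=/home/brayn/Documents     # the directory the poster wanted
echo "staged library path: ${DESTDIR}${prefix}/lib"
```

To put the libraries directly under the custom directory without the /usr/local suffix, the usual route would be re-running `./configure --prefix=/home/brayn/Documents` before `make && make install`, assuming the IES sources ship a standard configure script.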