The latest link points to 5.05; I am looking for 5.51, which is recommended for the XXV710.
Where can I download NVM update 5.51 for the XXV710?
Re: VLAN adapter not working after reboot - Win10, 22.0.1
Hi all,
Same problem here.
One tagged VLAN (6) and one untagged VLAN. After every reboot, the untagged VLAN doesn't work until I disable and re-enable the untagged VLAN virtual adapter. VLAN 6 doesn't have this problem and can ping other hosts in the same VLAN.
NIC is connected to a Cisco switch with the following config:
nwcore2#sh run | s 2/0/39
interface GigabitEthernet2/0/39
description AURELIO (0B39D)
switchport trunk allowed vlan 1,6
switchport mode trunk
spanning-tree portfast trunk
spanning-tree bpduguard enable
Let me know if you need any more information to help track down the issue. Like ziesemer, I have no dump, since the system doesn't crash.
Best regards
Aurelio Llorente
IEEE 1588 hardware implementation.
I am designing a system that will use the X550 and want to keep open the future possibility of using IEEE 1588 Precision Time Protocol. The SDPs (software-definable pins) of the X550 will be connected to an FPGA for this purpose. Does the X550 need an ultra-accurate clock source for 1588, or does the FPGA need that clock source instead?
igb SR-IOV vf driver on FreeBSD strips VLAN tags
When I run FreeBSD as a KVM guest and assign it a VF from my 82576 card, the guest igb VF driver seems to strip VLAN tags on incoming packets. If no packets are VLAN tagged, they pass fine. Outgoing packets keep their VLAN tags, but incoming packets have their VLAN tags stripped. More information: https://forum.pfsense.org/index.php?topic=126742
The ixgbe driver had this bug as well, and it was apparently fixed (https://lists.freebsd.org/pipermail/freebsd-bugs/2016-May/067788.html), but the fix was never applied to the igb driver.
It seems the only solution would be to use a separate VF for each VLAN I want the guest to see and have the host handle VLAN management, but this limits the number of VLANs I can have and how many guests can use the NIC.
Is there any chance of having the ixgbe patch ported to igb?
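For anyone trying to reproduce or narrow this down, a hedged diagnostic sketch (assuming the VF shows up as igb0 inside the FreeBSD guest; the interface name is a placeholder): check whether hardware VLAN tagging is advertised on the VF and try turning it off. If the VF driver ignores the capability flag this won't help, but it isolates whether the stripping is tied to the vlanhwtag option.
# list the interface options; look for VLAN_HWTAGGING / VLAN_HWFILTER
ifconfig igb0
# try disabling hardware VLAN tag stripping and filtering on the VF
ifconfig igb0 -vlanhwtag -vlanhwfilter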
An error occurred when updating NVM on X710 card
I have a total of 12 hosts running Windows Server 2012 R2, each with an X710 card installed. The current firmware version is 4.26.
When I updated to NVM version 5.05, some of the hosts failed with an error reported after the flash had started. However, no error message was written to the log.
Do you want to save a backup copy of current NIC images? [Yes | No]: Y
Update process in progress. Please wait [*-........]
Tool execution completed with the following status: An error occurred ...
nvmupdatew64e.exe -l log.txt
Config file read.
Inventory
[00:006:00:00]: Intel(R) Ethernet Converged Network Adapter X710-4
EEPROM inventory started
Alternate MAC address is not set
EEPROM inventory finished
Flash inventory started
Flash inventory finished
OROM inventory started
OROM inventory finished
[00:006:00:01]: Intel(R) Ethernet Controller X710 for 10GbE SFP+
Device already inventoried.
[00:006:00:02]: Intel(R) Ethernet Controller X710 for 10GbE SFP+
Device already inventoried.
[00:006:00:03]: Intel(R) Ethernet Controller X710 for 10GbE SFP+
Device already inventoried.
Update
[00:006:00:00]: Intel(R) Ethernet Converged Network Adapter X710-4
Creating backup images in directory: 6805CA3112D8
Backup images created.
Flash update started
After the failure, Windows Device Manager showed "This device cannot start". nvmupdate reported "Update not available" and the card's version field is blank.
Num Description                              Ver.  DevId S:B    Status
=== ======================================== ===== ===== ====== ===============
02) Intel(R) Ethernet Converged Network            1572  00:006 Update not
    Adapter X710-4                                               available
I tried booting to the EFI shell and running nvmupdate64e.efi. The adapter status there also showed "Access error". It looks like the card cannot start due to an incomplete NVM flash. Is there a way I can recover the card, such as forcing a firmware update even though the card is down?
Re: ULP enable/disable utility. Where to get?
Can I get that too please?
I211 Gigabit Network adapter shows as Removable Device
I'm using a Gigabyte AX370A motherboard with integrated Intel GbE LAN, running Windows 10 64-bit.
The adapter works, but it appears in the system tray as a removable device. It's also listed under 'Unspecified' on the Devices and Printers page. I installed the drivers for it from Gigabyte's site first, then tried the latest release directly from Intel - no change.
Intel NIC drivers 19.3: huge 6000+ DPC latency spikes every few seconds
Hi, I would like to report that the newly released Intel NIC driver version 19.3 causes huge 6000+ DPC latency spikes every few seconds.
My specs:
Intel(R) 82579LM Gigabit Network Connection
Windows 7 SP1 32-bit + latest Windows updates
I downgraded to the previous driver version, 19.1, and the problem is gone.
I211/I217-V Windows 10 LACP teaming fails
Hello,
After the update to Windows 10 (x64, Build 10240), creating a teaming group (static or IEEE 802.3ad) with an I211 + I217-V NIC fails.
The drivers have been upgraded to the latest available version, and multiple reinstallations with reboots didn't help either. Whenever I run the group creation wizard and select a group name (several tried), the adapters, and LACP, a Windows pop-up tells me that group creation has failed.
However, Windows Device Manager shows a newly created "Intel Advanced Network Services Virtual Adapter", so some part of the configuration does seem to be applied.
Using Windows 7 SP1 x64, the exact same setup worked flawlessly for months, so Windows 10 or the driver is the likely culprit.
Is anyone experiencing similar problems, and/or is this a known bug? Feedback on this issue is greatly appreciated.
Thanks in advance!
Kind regards,
Famaku
Ubuntu 16.04 and Intel XL710 SR-IOV - Packet drops
We have a server running Ubuntu 16.04 with KVM and an Intel XL710 40 Gbps NIC, with SR-IOV on top of it.
On each 40G interface, 4 SR-IOV VFs are created and assigned to VMs.
The problem is that once the traffic load increases (approx. 6 Gbps on the physical interface), we experience increased RTT and packet drops.
The VMs look OK, the switches and the rest of the network also look good, so we suspect SR-IOV.
How can I verify potential traffic drops on SR-IOV?
Has anybody had a similar experience?
Kernel: 4.4.0-66-generic
driver=i40e driverversion=1.4.25-k duplex=full firmware=5.04
BR, Mate
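Not an answer, just a hedged sketch of counters that may help locate the drops, assuming the PF is named enp3s0f0 on the host and the VF is ens4 inside the VM (both names are placeholders):
# drop/discard counters on the PF (i40e exposes per-port and per-queue statistics)
ethtool -S enp3s0f0 | grep -iE 'drop|discard|error'
# list the VFs configured on the PF; newer iproute2/kernels also show per-VF statistics
ip -s link show dev enp3s0f0
# inside the VM, check the VF's own counters
ethtool -S ens4
ip -s link show dev ens4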
XL710 can't split in FreeBSD
Hello.
I'm trying to split an XL710QDA1 in FreeBSD 12 and I get:
[root@host ~]# ./qcu64e /devices [18:42:30]
Intel(R) QSFP+ Configuration Utility
QCU version: v2.27.10.01
Copyright(C) 2016 by Intel Corporation.
Software released under Intel Proprietary License.
NIC Seg:Bus Ven-Dev Mode Adapter Name
=== ======= ========= ======= ==================================================
zsh: segmentation fault (core dumped) ./qcu64e /devices
[root@host ~]# gdb core qcu64e.core [18:43:21]
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "amd64-marcel-freebsd"...core: No such file or directory.
Core was generated by `./qcu64e /devices'.
Program terminated with signal 11, Segmentation fault.
#0 0x00000000005051b1 in ?? ()
(gdb)
I tried firmware versions 4.22.26225 and 5.0.40043. The driver we use is 1.6.6.
I can successfully split it from UEFI, though.
Bonded networking throughput issues
I bought a Zyxel GS1900-16 and a compatible two-port Intel network card for my Windows 10 machine, in order to create a 3-port LACP team from the onboard Intel NIC and the two extra ports.
Two LACP groups are set up on my switch.
I also set up bonding on an Ubuntu server that has two ports in the second LACP group.
I don't seem to be able to do more than 1 Gbps to my Ubuntu machine when doing multiple transfers. If I set the balancing to MAC address on my switch, I can transfer from an upstream location and from my server across the local switch to reach more than 1 Gbps of load on my Windows 10 machine. Using the only other switch option, MAC/IP, it only ever uses one of my three NICs for all transfers.
I tried the various teaming options in the Intel PROSet configuration on my NICs, with no additional throughput increase. Ubuntu is set to layer 3+4 mode. There is a static LAG option on my switch, but I haven't gotten that to work with the Intel configuration or Ubuntu.
I'm looking for advice on how to utilize two or three ports to my two-port Ubuntu machine at once while doing multiple transfers.
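One general point, independent of this specific hardware: LACP hashes each flow onto a single member link, so one transfer will never exceed 1 Gbps; only multiple flows that hash onto different links can add up to more. A hedged sketch for checking the Ubuntu side, assuming the bond is named bond0 (placeholder):
# bonding mode, LACP partner state and current hash policy
cat /proc/net/bonding/bond0
# the transmit hash policy in use (layer3+4 spreads flows by IP address and port)
cat /sys/class/net/bond0/bonding/xmit_hash_policy
# change it for testing; some kernels require the bond to be taken down first
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy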
Intel XL710
Hello,
I just wanted to inquire about the Intel XL710 40GbE network adapter.
I have been running some speed tests using two servers with a Cisco Nexus 3064 40GbE switch and a Cisco Nexus 3132 40GbE switch.
The Intel XL710 40GbE network adapter works with this setup, but I am experiencing slow speeds.
Can you please let me know the optimal way to connect this Intel XL710 40GbE network adapter so that we can achieve 40GbE speeds?
The speeds I have been getting so far are pretty slow, and I know this Intel XL710 40GbE network adapter is much faster.
Thanks for your help in advance.
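As a hedged measurement sketch, assuming iperf3 is installed on both servers (the address below is a placeholder): a single TCP stream often cannot fill a 40GbE link on its own, so testing with several parallel streams helps separate adapter or switch problems from single-flow limits.
# on the first server
iperf3 -s
# on the second server: 8 parallel streams for 30 seconds
iperf3 -c 192.0.2.10 -P 8 -t 30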
intel NICs for Audio Video Bridging (AVB/TSN)
A while ago I asked about Intel products capable of Audio Video Bridging (AVB/TSN):
https://communities.intel.com/thread/41956?wapkw=audio+video+bridging
In the meantime, have other or additional products capable of Audio Video Bridging (AVB/TSN) become available from Intel? Apart from the I210/I211, are there other Intel network adapters available for AVB/TSN?
Please advise.
Rgds
AW
igb Detected Tx Unit Hang
Hi all experts,
We are using the Intel I350-AM2 (two GbE ports) chip, interfaced to a TI Cortex-A8 SoC over a PCIe x2 bus.
The OS version is Linux 2.6.37.
The I350 driver version is 5.0.6.
After some hours, we find the log below, with backtrace info.
igb 0000:01:00.0: Detected Tx Unit Hang
Tx Queue <0>
TDH <c9>
TDT <c9>
next_to_use <c9>
next_to_clean <de>
buffer_info[next_to_clean]
time_stamp <91811>
next_to_watch <ffc17df0>
jiffies <91b40>
desc.status <1568200>
------------[ cut here ]------------
WARNING: at net/sched/sch_generic.c:258 dev_watchdog+0x148/0x230()
NETDEV WATCHDOG: eth0 (igb): transmit queue 0 timed out
Modules linked in: aur5g8ke_face_lcd avst_digit_audio ti81xxhdmi ti81xxfb vpss osa_kermod syslink
Backtrace:
[<c004cfac>] (dump_backtrace+0x0/0x110) from [<c033900c>] (dump_stack+0x18/0x1c)
r6:c042b298 r5:00000102 r4:c0457df0 r3:60000113
[<c0338ff4>] (dump_stack+0x0/0x1c) from [<c0072910>] (warn_slowpath_common+0x54/0x6c)
[<c00728bc>] (warn_slowpath_common+0x0/0x6c) from [<c00729cc>] (warn_slowpath_fmt+0x38/0x40)
r8:c02c78bc r7:00000100 r6:00000000 r5:c04cb59c r4:cdc0c000
r3:00000009
[<c0072994>] (warn_slowpath_fmt+0x0/0x40) from [<c02c7a04>] (dev_watchdog+0x148/0x230)
r3:cdc0c000 r2:c042b2b0
[<c02c78bc>] (dev_watchdog+0x0/0x230) from [<c007cc1c>] (run_timer_softirq+0x130/0x1c8)
r6:00000100 r5:c0456000 r4:c04b7c40
[<c007caec>] (run_timer_softirq+0x0/0x1c8) from [<c00777b4>] (__do_softirq+0x84/0x114)
[<c0077730>] (__do_softirq+0x0/0x114) from [<c0077ba4>] (irq_exit+0x48/0x98)
[<c0077b5c>] (irq_exit+0x0/0x98) from [<c003f07c>] (asm_do_IRQ+0x7c/0x9c)
[<c003f000>] (asm_do_IRQ+0x0/0x9c) from [<c033aff4>] (__irq_svc+0x34/0xa0)
Exception stack(0xc0457f18 to 0xc0457f60)
7f00: c0496610 00000002
7f20: cbaa8000 5efe1920 cbaa801c c0459040 00000015 c982f8c0 ccd24300 413fc082
7f40: ccd24300 c0457f6c c0457f70 c0457f60 c033b460 c033d01c 60000013 ffffffff
r5:fa200000 r4:ffffffff
[<c033d010>] (atomic_notifier_call_chain+0x0/0x28) from [<c033b460>] (__switch_to+0x2c/0x4c)
[<c0339564>] (schedule+0x0/0x304) from [<c004a69c>] (cpu_idle+0x80/0x90)
[<c004a61c>] (cpu_idle+0x0/0x90) from [<c032d8dc>] (rest_init+0x60/0x78)
r6:c06d0900 r5:c002dd50 r4:c04babbc r3:00000000
[<c032d87c>] (rest_init+0x0/0x78) from [<c0008c08>] (start_kernel+0x264/0x2b8)
[<c00089a4>] (start_kernel+0x0/0x2b8) from [<80008048>] (0x80008048)
---[ end trace bb79dcc8c86613b8 ]---
We have tried disabling the offload options as shown below, but it doesn't help; the I350-AM2 still hangs. By the way, we only use the eth0 port at the moment.
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: on
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
Are there any suggestions for this case?
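As an isolation sketch only, and not a fix: the listing above still shows checksum and VLAN offloads enabled, so it may be worth disabling those as well while debugging, assuming the interface is eth0 and the kernel/ethtool are recent enough to expose these keywords (a 2.6.37 kernel may not accept all of them).
# turn off the remaining offloads for isolation; expect a performance hit
ethtool -K eth0 rx off tx off rxvlan off txvlan off rxhash off
# confirm the resulting state
ethtool -k eth0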
Issue with setting smp_affinity on ixgbe cards
Hi,
I am using a Dell PowerEdge R730 with dual Xeons, each with 22 cores, and six ixgbe-compatible cards, running Linux with ixgbe driver version 4.4.0-k on both kernel 4.7.10 and kernel 4.9.6.
I am loading the ixgbe modules at boot time, bringing up the interfaces, and setting smp_affinity for the cards using the set_irq_affinity script, so that all the RxTx IRQs are distributed across all the available cores.
The problem is that, randomly but quite often, the smp_affinity setting fails, and I have to manually re-run the script one or more times for the desired settings to be applied. There have also been several occasions when the settings were not applied at all, and it took several reboots for the script to start working again.
The problem occurs not only at random times but also on random NIC controllers, so I am excluding failed hardware, especially since I have also swapped the NICs.
I added some debug messages to track the affinity setting in the Linux kernel, and it turns out that most of the time when the setting fails, the affinity-setting function irq_do_set_affinity returns EBUSY, but sometimes it returns ENOSPC.
Further investigation showed that whenever EBUSY was returned, the problem could be overcome by re-running the script. But if the error was ENOSPC, it takes several reboots for the problem to disappear.
To provide some more details on the system, I am attaching two text files with the output of modinfo for ixgbe and of lspci on the machine.
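Two hedged things that may be worth ruling out, stated as assumptions rather than a diagnosis: a userspace daemon such as irqbalance rewriting the masks behind the script, and whether a manual write to a single IRQ sticks at all. A minimal sketch, with the interface name and IRQ number as placeholders:
# is irqbalance running and possibly competing with set_irq_affinity?
systemctl status irqbalance
# list the IRQs for one interface (replace <iface>), then pick one
grep <iface> /proc/interrupts
# read and write the mask for a single IRQ (replace <irq>; the mask is only an example)
cat /proc/irq/<irq>/smp_affinity
echo 000400 > /proc/irq/<irq>/smp_affinity
cat /proc/irq/<irq>/smp_affinity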
x710 firmware update
I am having trouble updating the firmware on an X710-DA4 card.
The card drops its connection at random with the Linux 4.9.9 driver.
The driver requires a firmware update, but I tried all three versions on the Intel website and all of them reported that the update was not available.
Any help would be much appreciated.
ethtool -i ens4f1
driver: i40e
version: 2.0.19
firmware-version: 4.10 0x800011c5 0.0.0
expansion-rom-version:
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
lspci does not show serial number:
Capabilities: [e0] Vital Product Data
Product Name: XL710 40GbE Controller
Read-only fields:
[PN] Part number:
[EC] Engineering changes:
[FG] Unknown:
[LC] Unknown:
[MN] Manufacture ID:
[PG] Unknown:
[SN] Serial number:
[V0] Vendor specific:
[RV] Reserved: checksum good, 0 byte(s) reserved
Read/write fields:
[V1] Vendor specific:
End
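Not a definitive answer, but one common reason for nvmupdate reporting "update not available" is an OEM-customized NVM that the generic Intel package will not touch. A hedged check, using the bus address 0000:02:00.1 from the ethtool output above: inspect the subsystem vendor/device IDs and compare them with the devices listed in the NVM update package's config file.
# numeric vendor/device IDs plus the Subsystem line for this port
lspci -nnv -s 02:00.1
# or just the subsystem information
lspci -vv -s 02:00.1 | grep -i subsystem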
x710 SR-IOV problems
Hi all,
I have the following baseline:
Dell R630 (2x14 core Xeon, 128GB RAM, 800GB SSD)
x710 4-port NIC, in 10Gbit mode
SUSE12SP1
Latest NIC firmware, but default PF/VF drivers (came with the OS, v1.3.4)
VF driver blacklisted on hypervisor
Set up according to the official Intel and SUSE documentation, KVM hypervisor
With the test setup (a single VM with a single VF and untagged traffic), I could achieve basically line-rate numbers: with MTU 1500, about 770 Kpps and 9.4 Gbps of bandwidth, for both UDP and TCP traffic, with no packet drops. There is plenty of processing power, the setup is nice and tidy, and everything works as it should.
The production setup is a bit different: the VM uses 3 VFs, one on each PF (the 4th PF is not used). All VFs except the first carry untagged traffic. The first VF carries two types of traffic: untagged (VLAN 119) and tagged (VLAN 1108). Tagging is done inside the VM. The setup worked fine for some time, confirming the test setup numbers. However, after a while the following errors started to appear in the hypervisor logs:
Mar 11 14:32:52 test_machine1 kernel: [10423.889924] i40e 0000:01:00.1: TX driver issue detected on VF 0
Mar 11 14:32:52 test_machine1 kernel: [10423.889925] i40e 0000:01:00.1: Too many MDD events on VF 0, disabled
The performance numbers became erratic: sometimes it worked perfectly, sometimes it did not. Most importantly, packet drops occurred.
So I reinstalled everything (hypervisor and VMs), configured it exactly as before using automated tools, but upgraded the PF and VF drivers to the latest versions (v2.0.19/v2.0.16). The errors in the logs disappeared, but the issue persists. Now I have this in the logs:
2017-03-12T11:33:43.356014+01:00 test_machine1 kernel: [ 420.439112] i40e 0000:01:00.1: Unable to add VLAN filter 0 for VF 0, error -22
2017-03-12T11:33:43.376009+01:00 test_machine1 kernel: [ 420.459168] i40e 0000:01:00.0: Unable to add VLAN filter 0 for VF 0, error -22
2017-03-12T11:33:44.352009+01:00 test_machine1 kernel: [ 421.435124] i40e 0000:01:00.2: Unable to add VLAN filter 0 for VF 0, error -22
I've increased the VM CPU count, the VF ring sizes, the VM's Linux software buffers, and the VM's netdev_budget kernel parameter (the amount of CPU time assigned to NIC processing), and turned off VF spoof checking in the hypervisor, etc., but the situation remains the same. Sometimes it works perfectly, other times it does not.
Can you please provide some insight? Since the rx_dropped counter is increasing in the VM, I suspect a driver/VF issue.
Is there a way to handle this problem without switching to untagged traffic?
Thank you in advance,
Ante
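An editorial, hedged sketch of things that might be worth inspecting or trying; none of this is a confirmed fix for the "Unable to add VLAN filter 0" messages, and the PF name enp1s0f1 is a placeholder:
# current VF state on the PF: MAC, VLAN, spoof checking (and trust, where supported)
ip link show dev enp1s0f1
# if the host kernel and iproute2 support it, marking the VF as trusted relaxes some
# i40e filtering restrictions; treat this as an experiment, not a recommendation
ip link set dev enp1s0f1 vf 0 trust on
# alternatively, pin the tagged VLAN on the host side for that VF; note that this
# replaces in-guest tagging, so it only fits if a single VLAN per VF is acceptable
ip link set dev enp1s0f1 vf 0 vlan 1108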
Can't get information about Omnipath HFI on RHEL 7.3 hosts
Recently we got some new KNL nodes and decided to try RHEL 7.3 on these hosts running the 3.10.0-514.10.2.el7.x86_64 kernel.
After installing the IntelOPA-Basic software, upgrading the firmware on the HFI, and rebooting the nodes, we still can't get anything other than the following from opainfo:
[root@sknl0701 ~]# opainfo
oib_utils ERROR: [7534] open_verbs_ctx: failed to find verbs device
opainfo: Unable to open hfi:port 0:1
Even though the software and firmware installations never complain about any errors, we can see that, even after forcing dracut to recreate the system image, the hfi1 driver will not load correctly.
[root@shas0101 ~]# lsmod | grep hfi1
hfi1 633634 1
rdmavt 57992 1 hfi1
ib_mad 47817 5 hfi1,ib_cm,ib_sa,rdmavt,ib_umad
ib_core 98787 14 hfi1,rdma_cm,ib_cm,ib_sa,iw_cm,xprtrdma,ib_mad,ib_ucm,rdmavt,ib_iser,ib_umad,ib_uverbs,ib_ipoib,ib_isert
i2c_algo_bit 13413 2 hfi1,mgag200
i2c_core 40582 6 drm,hfi1,ipmi_ssif,drm_kms_helper,mgag200,i2c_algo_bit
[root@sknl0701 ~]# modprobe -v hfi1
[root@sknl0701 ~]# lsmod | grep hfi1
hfi1 697628 0
rdmavt 63294 1 hfi1
ib_core 210381 13 hfi1,rdma_cm,ib_cm,iw_cm,rpcrdma,ib_ucm,rdmavt,ib_iser,ib_umad,ib_uverbs,rdma_ucm,ib_ipoib,ib_isert
i2c_algo_bit 13413 3 igb,hfi1,mgag200
i2c_core 40756 7 drm,igb,hfi1,ipmi_ssif,drm_kms_helper,mgag200,i2c_algo_bit
[root@sknl0701 ~]# yum info libibmad
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Installed Packages
Name : libibmad
Arch : x86_64
Version : 1.3.12
Release : 1.el7
Size : 132 k
Repo : installed
From repo : curc
Summary : OpenFabrics Alliance InfiniBand MAD library
URL : http://openfabrics.org/
License : GPLv2 or BSD
Description : libibmad provides low layer IB functions for use by the IB diagnostic
: and management programs. These include MAD, SA, SMP, and other basic
: IB functions.
[root@sknl0701 ~]# yum info libibmad-devel
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Installed Packages
Name : libibmad-devel
Arch : x86_64
Version : 1.3.12
Release : 1.el7
Size : 50 k
Repo : installed
From repo : curc
Summary : Development files for the libibmad library
URL : http://openfabrics.org/
License : GPLv2 or BSD
Description : Development files for the libibmad library.
libibmad is and has been installed on the new node as well, so I am out of ideas at the moment. Any help would be appreciated!
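A hedged sketch of checks that may help narrow this down on the failing node; nothing here goes beyond standard module and sysfs tooling:
# any initialization errors from the driver?
dmesg | grep -i hfi1
# reload the driver verbosely and look at the tail of the kernel log
modprobe -r hfi1 && modprobe -v hfi1
dmesg | tail -n 50
# does a verbs device actually show up?
ls /sys/class/infiniband/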
X540-T2 for Server 2016 Cluster Network
I have a couple of questions regarding the use of an Intel X540-T2 10G Ethernet card as the main network card in a Windows Server 2016 cluster:
- I cannot find official Intel drivers for the X540-T2 for Windows Server 2016. The page at Download Intel® Network Adapter Driver for Windows® 10 indicates that the drivers provided in that package are not compatible with Windows Server 2016. Can someone point me in the direction of Intel provided Server 2016 compatible drivers for the X540-T2 cards?
- I am using a pair of these cards as the main network cards for a Windows Server 2016 cluster running the File Server role to provide a highly available set of file shares. The cards are currently using the Microsoft in-box drivers. When I configure the File Server role in the cluster, there's an error relating to the ISATAP tunnelling address not being able to be brought online. The error that's shown in the event log is:
IPv6 tunnel address resource 'IP Address 2002:xxxx:xxxx:x:x:xxxx:aa.bb.cc.dd' failed to come online. Cluster network 'Cluster Network 1' associated with dependent IP address (IPv4) resource 'IP address aa.bb.cc.dd' does not support ISATAP tunnelling. Please ensure that the cluster network supports ISATAP tunnelling.
This error was not showing for the cluster network itself; it only appeared once I configured the File Server role. The error also did not show for the same role running on an old pair of servers on Windows Server 2012 with Broadcom NICs. I've searched online and there isn't much out there about resolving this issue, so I was wondering whether anyone has seen it before and has steps to resolve it?
Many thanks
Andy