Channel: Intel Communities : Discussion List - Wired Ethernet


Customer Requirement: 

To implement VF RSS using ixgbevf driver.  Customer wants to know how to turn on RSS support on ixgbevf.

 

Customer Observation:

Using 3.3.2 ixgbevf in the guest and 5.0.4 ixgbe on the host, I noticed that even though there are two RX queues for each VF, pretty much all the packets go to the first RX queue. The RSS-related features are also not supported. How do I turn on RSS support in ixgbevf? If it does not support RSS, what is the point of having multiple queues in ixgbevf?

 

To help the customer, may I kindly request the following:

 

  1. Clear, step-by-step instructions showing how to turn on RSS support in ixgbevf.
  2. If possible, sample code where a function actually turns on RSS support in ixgbevf; that would be highly helpful.
  3. Alternatively, a sample application that already does this.
  4. Lastly, what does the limitation below (highlighted in red in the original) mean? Even if RSS is workable in ixgbevf, the customer needs to understand the limitation so they can set their expectations and use the feature correctly.

The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is: the hash function and key are shared between the PF and all VFs, and the 128-entry RETA table is likewise shared between the PF and all VFs. As a result, there is no way to query the hash and RETA contents per VF from the guest; if needed, query them on the host (PF) for the shared RETA information.


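On what the shared 128-entry RETA implies: the RX queue for a flow is chosen by using the low 7 bits of the 32-bit RSS hash as an index into that shared table. A minimal illustrative sketch, not the driver's actual code, with hypothetical round-robin table contents:

```shell
# Hypothetical 128-entry RETA spreading flows over 2 RX queues,
# roughly what "ethtool -X <pf_iface> equal 2" on the host would program.
reta=()
for i in $(seq 0 127); do reta[$i]=$(( i % 2 )); done

# Illustrative lookup: the low 7 bits of the 32-bit RSS hash index
# the shared 128-entry table; the entry selects the RX queue.
rss_queue() {
    local hash=$1
    echo "${reta[$(( hash & 0x7F ))]}"
}

rss_queue $(( 0x12345678 ))   # low 7 bits = 120 -> queue 0

# Per the limitation above, inspect the real table on the host (PF):
#   ethtool -x <pf_iface>   # show RSS hash key and indirection table
```

Because all VFs share this one table, steering decisions made on the PF affect every VF at once; there is no per-VF RETA to tune.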
Will X710 firmware update 4.53 to 5.05 address sporadic transmit queue timeout?


We have experienced three occurrences on two servers of this error "tx_timeout" / "hung_queue", and packets stopped flowing for some number of seconds (but then recovered):

 

Apr 10 02:04:14 node39 kernel: WARNING: at net/sched/sch_generic.c:297 dev_watchdog+0x276/0x280()
Apr 10 02:04:14 node39 kernel: NETDEV WATCHDOG: p2p1 (i40e): transmit queue 8 timed out
...
Apr 10 02:04:14 node39 kernel: CPU: 0 PID: 0 Comm: swapper/0 Tainted: G OE ------------ 3.10.0-514.6.1.el7.x86_64 #1
Apr 10 02:04:14 node39 kernel: Hardware name: Dell Inc. PowerEdge R620/01W23F, BIOS 2.1.3 11/20/2013
...
Apr 10 02:04:14 node39 kernel: i40e 0000:42:00.0 p2p1: tx_timeout: VSI_seid: 390, Q 8, NTC: 0x113, HWB: 0x116, NTU: 0x116, TAIL: 0x116, INT: 0x1
Apr 10 02:04:14 node39 kernel: i40e 0000:42:00.0 p2p1: tx_timeout recovery level 1, hung_queue 8
Apr 10 02:04:14 node39 kernel: i40e 0000:42:00.0 p2p1: adding 3c:fd:fe:9f:b7:48 vid=0

 

This is within the first 3 weeks of usage of Intel X710 dual-port adapters running firmware 4.53 (with supported Intel SFP+ modules), recently installed in a cluster of two-year-old Dell R620s running CentOS 7.3:

 

node39:/# lspci -vv | grep -A 1 10GbE
pcilib: sysfs_read_vpd: read failed: Input/output error
05:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Intel Corporation Ethernet Converged Network Adapter X710-2
--
05:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Intel Corporation Ethernet Converged Network Adapter X710
--
42:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Intel Corporation Ethernet Converged Network Adapter X710-2
--
42:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Intel Corporation Ethernet Converged Network Adapter X710
node39:/usr/local/bin# ethtool -i p2p1
driver: i40e
version: 1.5.10-k
firmware-version: 4.53 0x8000206e 0.0.0
expansion-rom-version:
bus-info: 0000:42:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

 

We have used X710s without issue in a few other servers, but in those cases they are HP OEM, and running firmware 4.60:

 

node93:/# lspci -vv |grep -A 1 10GbE
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Hewlett-Packard Company HP Ethernet 10Gb 2-port 562FLR-SFP+ Adapter
--
04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Hewlett-Packard Company Ethernet 10Gb 562SFP+ Adapter
--
05:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Hewlett-Packard Company HP Ethernet 10Gb 2-port 562SFP+ Adapter
--
05:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)        Subsystem: Hewlett-Packard Company Ethernet 10Gb 562SFP+ Adapter
node93:/# ethtool -i ens2f0
driver: i40e
version: 1.5.10-k
firmware-version: 4.60 0x80001f47 1.3072.0
expansion-rom-version:
bus-info: 0000:05:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

 

I have downloaded nvmupdate64e and updated a spare Dell to firmware 5.05, so if that is the correct solution, I have confirmed the procedure. However, threads such as "Intel X710 vs VMware ESX: crash and reboot" give me pause: a crash and reboot would certainly be worse than a 10-20 second transmit hang.

 

My questions are:

 

  1. Has anyone else experienced these tx_timeout / hung_queue issues?
  2. Is it a known issue? If so, is it an issue with the firmware, with the i40e driver, or with something else such as TSO/GSO (which are currently on, but I could turn them off)?
  3. If it is a firmware issue, has it been corrected between versions 4.53 and 4.60, and is it recommended to flash production machines to 5.05 or to some other version? I could not find a detailed change list.
  4. Is there a way (such as generating high data rates using iperf) to make the sporadic issue occur reproducibly, so that I can demonstrate whether an attempted solution has been successful?
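On question 4: one unverified way to try to provoke the timeout is sustained many-stream, high-rate traffic on the suspect port while watching the kernel log. The peer address below is a placeholder:

```shell
# Stress sketch: run several parallel iperf3 clients against a peer
# ("iperf3 -s -p <port>" on the other end) and watch for the watchdog.
PEER=192.168.1.2          # placeholder peer address
for i in $(seq 1 4); do
    iperf3 -c "$PEER" -p $((5200 + i)) -P 16 -t 600 &
done

# In another terminal, watch for the symptom:
#   dmesg -w | grep -E 'tx_timeout|hung_queue'

# To test the TSO/GSO theory from question 2, rerun with offloads off:
#   ethtool -K p2p1 tso off gso off
wait
```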

 

Thanks in advance!

QSFP+ Configuration modification is not supported by this adapter for XL710 QDA2


Hi ,

I was trying to set up an Intel XL710 in 2x40 mode using the QCU, but I am getting the error "QSFP+ Configuration modification is not supported by this adapter."

However, if I check the current configuration, it shows the mode as N/A.

 

All the steps that I followed:

 

[root@ndr730l:/tmp/EFIx64] ./qcu64e /devices

Intel(R) QSFP+ Configuration Utility

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

 

NIC Seg:Bus Ven-Dev Mode    Adapter Name

=== ======= ========= ======= ==================================================

1) 000:005 8086-1583 N/A Intel(R) Ethernet Converged Network Adapter XL710-

 

[root@ndr730l:/tmp/EFIx64] ./qcu64e /info /nic=1

Intel(R) QSFP+ Configuration Utility

QCU version: v2.27.10.01

Copyright(C) 2016 by Intel Corporation.

Software released under Intel Proprietary License.

QSFP+ Configuration modification is not supported by this adapter.

 

MAC Address: 3CFDFE16FE80

Serial Number: XL710QDA2BLK

 

Firmware details:

ethtool -i vmnic4

driver: i40e

version: 1.4.28

firmware-version: 5.04 0x800024d8 17.5.10

bus-info: 0000:05:00.0

Windows 10 Dell Optiplex 755 Intel 82566DM IPv4 not connected


I can't get connected to the internet; I get a "not connected" message. The connection works fine on other computers. I think it started when I tried to update the driver. Intel says the adapter is not supported for Windows 10, and I can't find a generic driver. I also get error 651. Can anybody help?

Thanks

Dick Prosser

deprosser@comcast.net

Can I use Direct Cable when connecting PC to PC?


When connecting two PCs directly, a crossover cable is generally used.

 

 

However, I saw a PC-to-PC connection made with a straight-through (direct) cable, and we confirmed that a ping test works normally.

 

 

It seems the LAN card needs to support this separately (presumably automatic crossover, i.e. Auto-MDIX)...

 

 

Is there a way to check for this in the adapter's network properties?

Or is there a list of chipset series that support it?

Deployment Drivers Missing DEV_15D8


Hi.

 

When deploying to the new 5th Gen Lenovo X1 Carbon, I keep getting an error stating that the network driver isn't installed - please see below:

PCI\VEN_8086&DEV_15D8&SUBSYS_224F17AA&REV_21

 

VEN_8086 = Intel

DEV_15D8 = UNKNOWN

 

I got the drivers here (the closest thing I could find):

Download Intel Ethernet Gigabit Adapter Driver 12.15.24.1 for Windows 10 64-bit

 

Previously the drivers weren't in the WIM file, so I used DISM to inject them into the WIM and managed to boot from the network successfully.
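For reference, the DISM injection step generally looks like the following (run from an elevated prompt; the WIM path, image index, and driver folder below are placeholders):

```
:: Mount the image, inject all drivers in a folder, and commit.
:: Paths and index number are placeholders for your environment.
dism /Mount-Wim /WimFile:C:\images\boot.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers\nic /Recurse
dism /Unmount-Wim /MountDir:C:\mount /Commit
```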

 

I installed the drivers and got to the deployment screen to set up the deployment. Once the deployment is running and the Windows 10 image has been applied, but before any of the applications can install, I get the same error saying the network drivers haven't been installed, and Device Manager shows that no network drivers are installed at all, not even Wi-Fi. Once on the machine, if I install the drivers from the link above, the Ethernet works fine.

 

Please can you let me know the correct drivers to use for deploying to this network device?

802.1ad on ixgbe vfs


Hi,

I am trying to configure 802.1ad on ixgbe VFs, with the goal of moving the resulting Q-in-Q interfaces into containers; i.e., keep the VF in the default namespace and move all of the VF's VLAN interfaces into different namespaces. In theory, in this configuration, packets would come into the PF double-tagged, be directed to the correct VF (with the outer tag stripped), and then be delivered to Linux with a single tag.

 

I tried this:

 

# echo 32 > /sys/class/net/eth1/device/sriov_numvfs

# ip link set dev eth1 up

# ip link set dev eth1 vf 1 vlan 2 proto 802.1ad

RTNETLINK answers: Protocol not supported

 

Is there a way I can do this ?
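For comparison, a software-only fallback sketch (no per-VF hardware handling of the outer tag; it assumes the kernel's 802.1ad support, and the interface and namespace names are hypothetical):

```shell
VF=eth1v0    # hypothetical VF netdev name in the default namespace

# Stack the S-tag (802.1ad) and C-tag (802.1q) in software on the VF.
ip link add link "$VF" name svlan2 type vlan proto 802.1ad id 2
ip link add link svlan2 name qinq100 type vlan proto 802.1q id 100

# Hand the inner interface to a container's namespace.
ip netns add tenant1
ip link set qinq100 netns tenant1
ip netns exec tenant1 ip link set qinq100 up
```

This keeps the double-tagging entirely in the kernel rather than in the VF's hardware VLAN filter, so it sidesteps the "Protocol not supported" error at the cost of the offload.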

 

TIA,

Don.

I210 BCM5241: Link to BCM PHY works, no link from PHY to I210


We have a simple on board I210 in copper 10/100 mode connected to a BCM5241 PHY. The BCM PHY sees link at 100M from the I210, but the I210 cannot see link from the BCM PHY. So, TX from the I210 functions, but RX on the I210 has no link. We have attempted turning off and on internal RX AC bypass caps on the I210, but still do not see link.

 

The MDI traces are connected directly to one another on the PCB. I210 MDI0 is the TX pair and I210 MDI1 is the RX pair. There are no magnetics in between.

 

The signal from the I210 looks good on a scope, but looking at the receive side, it is very noisy. Has anyone encountered this issue?


Re: Deployment Drivers Missing DEV_15D8


Dear Intel Corporation, I still have no solution. HP doesn't provide correct drivers. Can't Intel create PXE-drivers? We have the devices now almost a month and have to install it one by one. This is not desirable.

X710 - VFs - TX driver issue detected, PF reset issued, when running iperf3


Hi,

 

* In summary:

When I run iperf3 between 2 machines through SR-IOV VF interfaces, I get the following error in dmesg:

 

TX driver issue detected, PF reset issued

 

* How to setup and detail results

+ I have 2 machines, each with one 4-port 10G X710 card. The two X710 cards are connected directly to each other by cable.

+ On machine 1, I run the following commands to create VF0 and set up its IP. VF0 has PCI address 0000:04:0a.0

    $ echo 1 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs

    $ ifconfig enp4s0f2 mtu 9700

    $ ifconfig enp4s10 1.1.1.1 netmask 255.255.255.0 mtu 9216

  $ echo 'msg_enable 0xffff' > /sys/kernel/debug/i40e/0000\:04\:00.2/command

 

+ On machine 2, I run the following commands to create VF0 and set up its IP. VF0 has PCI address 0000:04:0a.0

    $ echo 1 > /sys/bus/pci/devices/0000\:04\:00.2/sriov_numvfs

    $ ifconfig enp4s0f2 mtu 9700

    $ ifconfig enp4s10 1.1.1.2 netmask 255.255.255.0 mtu 9216

 

+ I was able to ping 1.1.1.2 from machine 1 and ping 1.1.1.1 from machine 2

+ On machine 2, start iperf3 server:

   $ iperf3 -s -p 8000

 

+ On machine 1, start iperf3 client:

   $ iperf3 -c 1.1.1.2 -p 8000 -l 64 -P 4

   $ dmesg

 

[ 5895.270158] i40e 0000:04:00.2: Malicious Driver Detection event 0x00 on TX queue 68 PF number 0x02 VF number 0x40

[ 5895.270166] i40e 0000:04:00.2: TX driver issue detected, PF reset issued

[ 5895.270170] i40e 0000:04:00.2: TX driver issue detected on VF 0

[ 5895.270173] i40e 0000:04:00.2: Too many MDD events on VF 0, disabled

[ 5895.270176] i40e 0000:04:00.2: Use PF Control I/F to re-enable the VF

[ 5895.284494] i40evf 0000:04:0a.0: PF reset warning received

[ 5895.284500] i40evf 0000:04:0a.0: Scheduling reset task

[ 5895.334703] i40e 0000:04:00.2: Query for DCB configuration failed, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EPERM

[ 5895.334711] i40e 0000:04:00.2: DCB init failed -53, disabled

[ 5895.612086] i40e 0000:04:00.2: Malicious Driver Detection event 0x00 on TX queue 67 PF number 0x02 VF number 0x40

[ 5895.612096] i40e 0000:04:00.2: Too many MDD events on VF 0, disabled

[ 5895.612098] i40e 0000:04:00.2: Use PF Control I/F to re-enable the VF

[ 5895.640978] i40e 0000:04:00.2: Invalid message from VF 0, opcode 3, len 4
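On the "Use PF Control I/F to re-enable the VF" message: one common way to clear a VF that was disabled after MDD events is to destroy and recreate the VFs on the PF, sketched below (PCI address taken from the log above):

```shell
PF=0000:04:00.2
# Tear down all VFs on the PF, then recreate them; this resets the
# MDD "disabled" state along with all other VF state (IPs, MTU, etc.).
echo 0 > /sys/bus/pci/devices/$PF/sriov_numvfs
echo 1 > /sys/bus/pci/devices/$PF/sriov_numvfs
```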

 

* X710 card ports info

04:00.0 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.1 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

04:00.3 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

 

* This is driver info of PF X710:

driver: i40e

version: 1.4.25-k

firmware-version: 5.04 0x80002549 0.0.0

expansion-rom-version:

bus-info: 0000:04:00.2

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

* This is the PF X710 card info:

04:00.2 Ethernet controller: Intel Corporation Ethernet Controller X710 for 10GbE SFP+ (rev 01)

  Subsystem: Intel Corporation Ethernet Converged Network Adapter X710

  Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0, Cache Line Size: 256 bytes

  Interrupt: pin A routed to IRQ 35

  Region 0: Memory at dc800000 (64-bit, prefetchable) [size=8M]

  Region 3: Memory at dc7f8000 (64-bit, prefetchable) [size=32K]

  Expansion ROM at df380000 [disabled] [size=512K]

  Capabilities: [40] Power Management version 3

  Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)

  Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-

  Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+

  Address: 0000000000000000  Data: 0000

  Masking: 00000000  Pending: 00000000

  Capabilities: [70] MSI-X: Enable+ Count=129 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00001000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+

  RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop- FLReset-

  MaxPayload 128 bytes, MaxReadReq 512 bytes

  DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-

  LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L0s <2us, L1 <16us

  ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

  Capabilities: [e0] Vital Product Data

  Product Name: XL710 40GbE Controller

  Read-only fields:

  [PN] Part number:

  [EC] Engineering changes:

  [FG] Unknown:

  [LC] Unknown:

  [MN] Manufacture ID:

  [PG] Unknown:

  [SN] Serial number:

  [V0] Vendor specific:

  [RV] Reserved: checksum good, 0 byte(s) reserved

  Read/write fields:

  [V1] Vendor specific:

  End

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-

  UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

  CEMsk: RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [140 v1] Device Serial Number 90-9c-a4-ff-ff-fe-fd-3c

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 3

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)

  IOVCap: Migration-, Interrupt Message Number: 000

  IOVCtl: Enable+ Migration- Interrupt- MSE+ ARIHierarchy-

  IOVSta: Migration-

  Initial VFs: 32, Total VFs: 32, Number of VFs: 1, Function Dependency Link: 02

  VF offset: 78, stride: 1, Device ID: 154c

  Supported Page Size: 00000553, System Page Size: 00000001

  Region 0: Memory at 00000000dc400000 (64-bit, prefetchable)

  Region 3: Memory at 00000000dc380000 (64-bit, prefetchable)

  VF Migration: offset: 00000000, BIR: 0

  Capabilities: [1a0 v1] Transaction Processing Hints

  Device specific mode supported

  No steering table available

  Capabilities: [1b0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: i40e

  Kernel modules: i40e

 

* This is the VF X710 card info:

04:0a.0 Ethernet controller: Intel Corporation XL710/X710 Virtual Function (rev 01)

  Subsystem: Intel Corporation XL710/X710 Virtual Function

  Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

  Latency: 0

  Region 0: [virtual] Memory at dc400000 (64-bit, prefetchable) [size=64K]

  Region 3: [virtual] Memory at dc380000 (64-bit, prefetchable) [size=16K]

  Capabilities: [70] MSI-X: Enable+ Count=5 Masked-

  Vector table: BAR=3 offset=00000000

  PBA: BAR=3 offset=00002000

  Capabilities: [a0] Express (v2) Endpoint, MSI 00

  DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us

  ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+

  DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-

  RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-

  MaxPayload 128 bytes, MaxReadReq 128 bytes

  DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-

  LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM L1, Exit Latency L0s <2us, L1 <16us

  ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+

  LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-

  ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

  LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-

  DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported

  DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

  LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

  EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

  Capabilities: [100 v2] Advanced Error Reporting

  UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  UESvrt: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

  CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-

  CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-

  AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

  Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)

  ARICap: MFVC- ACS-, Next Function: 0

  ARICtl: MFVC- ACS-, Function Group: 0

  Capabilities: [1a0 v1] Transaction Processing Hints

  Device specific mode supported

  No steering table available

  Capabilities: [1d0 v1] Access Control Services

  ACSCap: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  ACSCtl: SrcValid- TransBlk- ReqRedir- CmpltRedir- UpstreamFwd- EgressCtrl- DirectTrans-

  Kernel driver in use: i40evf

  Kernel modules: i40evf

 

* This is driver info of VF X710:

driver: i40evf

version: 1.4.15-k

firmware-version: N/A

expansion-rom-version:

bus-info: 0000:04:0a.0

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: yes

 

* OS info:

DISTRIB_ID=Ubuntu

DISTRIB_RELEASE=16.04

DISTRIB_CODENAME=xenial

DISTRIB_DESCRIPTION="Ubuntu 16.04.1 LTS"

 

Please let me know if you need more info.

 

Thanks!

X540-T1 - Device stopped (code 43)


Hi there,

 

I have two identical setups, each with one X540-T1 network card in a Logic Supply MK100B-50 (ASRock IMB-181-D motherboard), and in both computers, after a certain amount of usage, the X540-T1 card stops working. When I go into Windows Device Manager, it says "Windows has stopped this device because it has reported problems. (code 43)"

 

I can disable and re-enable the device to make it work, but only temporarily; the problem comes back.

I've also installed the latest Intel driver. It reports version 4.0.215.0 and was installed from the file IntelNetworkDrivers-PROWinx64.exe.

 

Should this work?

Any ideas of how to fix this?

 

Thank you.

 

IntelX540-T1 error - 2017-06-02 16_41_41-.png

X540-T2 Link is Down


I have an X540-T2 adapter in an Ubuntu 14.04 server connected to a Netgear XS708E switch, but at least once every 24 hours I get a link-down message on the console.

 

ixgbe 0000:04:00.0: eth0: NIC Link is Down

ixgbe 0000:04:00.0: eth0: NIC Link is Up 10 Gbps, Flow Control: RX/TX

ixgbe 0000:04:00.0: eth1: NIC Link is Down

ixgbe 0000:04:00.0: eth1: NIC Link is Up 10 Gbps, Flow Control: RX/TX

 

I updated to the latest driver and firmware, but no luck.

 

How to fix this problem?
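Some hedged first steps for narrowing down whether the flaps come from the NIC, the cable, or the switch (interface name assumed to be eth0, as in the log above):

```shell
IFACE=eth0
ethtool "$IFACE" | grep -E 'Speed|Link detected'   # current link state
ethtool -S "$IFACE" | grep -iE 'err|crc|fault'     # error counters
dmesg | grep -i ixgbe | tail -n 20                 # recent driver messages
# Swapping the cable or the XS708E port helps separate NIC problems
# from switch problems.
```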

I219-LM Change Tx/Rx buffer size in Windows 7


Hello,

 

I am using the newest driver for the Intel I219-LM network adapter in Windows 7. I need to change the size of the receive and
transmit buffers to: 

 

Receive buffers : 512

Transmit buffers: 128

 

I can't seem to find the option to change these parameters!

For the Intel 82579LM the option is easily available under the “advanced tab” for “network connection properties”.

 

Any suggestions?

 

 

Thanks 

 

Regards

 

Jonas

XL710 NVM v5.0.5 - TX driver issue detected, PF reset issued & I40E_ERR_ADMIN_QUEUE_ERROR


I've been having issues with "TX driver issue detected, PF reset issued" and "fail to add cloud filter" errors for quite some time across 12 VMware ESXi v6.0 hosts. About once a week the result is a purple screen of death (PSOD).

 

I recently upgraded the XL710 to NVM firmware v5.0.5 and the VMware ESXi XL710 driver to the latest v2.0.6 on 4 of the 12 hosts, and the issues persist.

 

# ethtool -i vmnic2

driver: i40e

version: 2.0.6

firmware-version: 5.05 0x800028a6 1.1568.0

bus-info: 0000:03:00.0

 

Q. In trying to identify the culprit of the failed cloud filters, how do I identify the VM by filter_id?

Q. What is causing the "TX driver issue detected, PF reset issued"?

Q. How can I further troubleshoot to resolve the issue?

 

Here is just a snippet of /var/log/vmkernel.log. The logs are filled with the same repeating error messages:

Note the frequency of the error messages (~50 per minute!)

 

2017-05-26T16:01:04.659Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.659Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:04.660Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.660Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:05.347Z cpu11:33354)<6>i40e 0000:05:00.2: TX driver issue detected, PF reset issued

2017-05-26T16:01:05.538Z cpu38:33367)<6>i40e 0000:05:00.2: i40e_open: Registering netqueue ops

2017-05-26T16:01:05.547Z cpu38:33367)IntrCookie: 1915: cookie 0x38 moduleID 4111 <i40e-vmnic4-TxRx-0> exclusive, flags 0x25

2017-05-26T16:01:05.556Z cpu38:33367)IntrCookie: 1915: cookie 0x39 moduleID 4111 <i40e-vmnic4-TxRx-1> exclusive, flags 0x25

2017-05-26T16:01:05.566Z cpu38:33367)IntrCookie: 1915: cookie 0x3a moduleID 4111 <i40e-vmnic4-TxRx-2> exclusive, flags 0x25

2017-05-26T16:01:05.575Z cpu38:33367)IntrCookie: 1915: cookie 0x3b moduleID 4111 <i40e-vmnic4-TxRx-3> exclusive, flags 0x25

2017-05-26T16:01:05.585Z cpu38:33367)IntrCookie: 1915: cookie 0x3c moduleID 4111 <i40e-vmnic4-TxRx-4> exclusive, flags 0x25

2017-05-26T16:01:05.594Z cpu38:33367)IntrCookie: 1915: cookie 0x3d moduleID 4111 <i40e-vmnic4-TxRx-5> exclusive, flags 0x25

2017-05-26T16:01:05.604Z cpu38:33367)IntrCookie: 1915: cookie 0x3e moduleID 4111 <i40e-vmnic4-TxRx-6> exclusive, flags 0x25

2017-05-26T16:01:05.613Z cpu38:33367)IntrCookie: 1915: cookie 0x3f moduleID 4111 <i40e-vmnic4-TxRx-7> exclusive, flags 0x25

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 1 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 2 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 3 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 4 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 5 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 6 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 7 not allocated

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Netqueue features supported: QueuePair   Latency Dynamic Pre-Emptible

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Supporting next generation VLANMACADDR filter

2017-05-26T16:01:09.659Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.659Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:09.660Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.660Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

[The same alternating pairs of "fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST" / "Failed to add cloud filter, err_code = -53, last status = 13" messages (filter_id 33056 on queue 1, 33312 on queue 2, 33568 on queue 3) repeat every five seconds through 2017-05-26T16:03:59.663Z.]

I219-V, no connection.


Hi, I'm French, so apologies in advance for any mistakes I'm going to make.

 

So I built my PC (i7-7700K, GTX 1080, MSI Z70, Windows 10). Everything was fine for the first three days, but last Sunday I lost the connection, just like that.

I rebooted my router several times, and my PC as well, but nothing. I installed the latest drivers from the Intel website, and also the previous version; still nothing. I get this message almost every time: the Intel(R) Ethernet Connection (2) I219-V adapter is experiencing driver or hardware problems.

I decided to reinstall Windows 10 and my connection came back, but only for a few minutes. By now I have reinstalled Windows 10 three times (and tried Ubuntu too). Strangely, the last time the connection lasted several hours, but only at 10 Mbps, and when I tried to play a video game (the main purpose of this PC) the connection dropped 30 seconds or a minute after I joined a party and barely came back. Now I have no connection at all.

By the way, my network card has a lot of trouble recognizing my cables. I tried three different cables, but the card reported each one as not connected or damaged. I also tried connecting my new PC directly to my previous PC with an Ethernet cable and had the same problem: cable not connected or damaged.

 

I have tried a lot of solutions, but nothing is really working so far. I think my network card may need to be replaced, but that takes at least three weeks, and I would like to find another solution in the meantime.

 

Thank you for reading this post.


bridge mode issue with i40e vf


Hi,

   I have a network topology as the following:

   

PF1 and PF2 are connected directly, as are PF3 and PF4.

VM1, VM2, and VM3 each have a VF passed through from these PFs.

The two VFs in VM2 are both added to a Linux bridge, "br0".

VM1 ip address:192.168.1.2

VM3 ip address:192.168.1.3

 

VM1 VM2 VM3: CentOS 7.0

VF Driver:i40evf

Version: 2.0.22

 

PF Driver:i40e

Version:2.0.23

 

When VM3 pings VM1, a flood of ARP requests appears on br0, like a broadcast storm.

 

Is there any way this behaviour can be disabled? It causes severe problems that can escalate into packet storms for us.

 

[root@localhost ~]# tcpdump -i br0 -e arp

tcpdump: WARNING: br0: no IPv4 address assigned

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on br0, link-type EN10MB (Ethernet), capture size 65535 bytes

03:49:52.158874 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159042 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159097 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159139 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

03:49:52.159187 fc:e1:fb:0a:ff:19 (oui Unknown) > Broadcast, ethertype ARP (0x0806), length 60: Request who-has 192.168.1.2 tell 192.168.1.3, length 46

[...followed by more than twenty identical ARP requests within the next few milliseconds.]
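One thing worth trying (an assumption based on common SR-IOV bridging problems, not a confirmed fix): the i40e PF enables anti-spoof checking on each VF by default, and a bridged VF must forward frames whose source MAC is not its own. Spoof checking can be relaxed and the VF marked trusted from the host; the interface names and VF indices below are placeholders for whichever PFs back VM2's two bridge ports:

```shell
# Run on the host (PF side). PF names and VF indices are examples --
# substitute the PFs and VF numbers actually assigned to VM2.
PF_A=enp3s0f0
PF_B=enp3s0f1

# Allow the VFs to transmit forwarded frames with foreign source MACs
ip link set dev "$PF_A" vf 0 spoofchk off
ip link set dev "$PF_B" vf 0 spoofchk off

# Mark the VFs as trusted so the guest can enable promiscuous and
# all-multicast modes, which a Linux bridge port relies on
ip link set dev "$PF_A" vf 0 trust on
ip link set dev "$PF_B" vf 0 trust on
```

Note that `vf ... trust` requires a reasonably recent iproute2 and PF driver; if the option is rejected, only `spoofchk off` is available.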

Module eeprom with i40e and XL710-QDA1


Hi Intel,

 

I would like to read out the module EEPROM of a QSFP transceiver connected to an XL710-QDA1 NIC.

The naive approach, ethtool -m eth0, fails with: "Cannot get module EEPROM information: Operation not supported".

Is this feature supported by the NIC and the Intel driver i40e?
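For reference, the relevant ethtool invocations look like this (the device name is an example; whether they succeed depends entirely on the driver exposing the module-EEPROM read operation for this NIC):

```shell
# Decoded dump of the plug-in module's (SFP/QSFP) EEPROM
ethtool -m eth0

# Raw hex dump of the first 128 bytes of the module EEPROM
ethtool -m eth0 hex on offset 0 length 128

# Driver/firmware identification; note that "supports-eeprom-access"
# here refers to the NIC's own EEPROM, not the transceiver module's
ethtool -i eth0
```

The "Operation not supported" error indicates the driver build in use does not implement the module-EEPROM ethtool operation for this device; a newer i40e driver/firmware combination may behave differently, but that is an assumption to verify against the release notes.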

 

I am using the following driver and firmware:

driver version: 2.0.23

firmware-version: 5.05 0x800028a2 1.1568.0

 

Thanks and kind regards

Thomas Dey

X540-T2 VF cannot receive packet with OVS


I have configured a Debian host with two PFs and 16 VFs on each (32 VFs total). Everything works well when I use a PF or VF as a standalone NIC, but not when one is attached to an OVS bridge.

 

The testing scenario as below:

 

eth0 (PF): 10.250.11.103/24

eth11 (VF 9): 10.250.11.113/24

 

   Bridge "vmbr1"

        Port "eth10"

            Interface "eth10" (VF 8)

        Port "vmbr1"

            Interface "vmbr1"

                type: internal

        Port "in1"

            Interface "in1"

                type: internal

    ovs_version: "2.6.0"

 

in1: 10.250.11.34/24

 

I tried the following ICMP tests:

 

ping 10.250.11.254 (reply good)

ping 10.250.11.254 -I eth11 (reply good)

ping 10.250.11.254 -I in1 (NO response)

 

ip link shows my VFs as:

-------------------------------

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether 0c:c4:7a:df:36:c6 brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 6e:74:9e:65:58:eb, spoof checking on, link-state auto

    vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 7 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 8 MAC da:c7:06:0b:5b:1f, spoof checking on, link-state auto

    vf 9 MAC ea:97:12:4e:b4:6b, spoof checking on, link-state auto

    vf 10 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 11 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 12 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 13 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 14 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

    vf 15 MAC 00:00:00:00:00:00, spoof checking on, link-state auto

 

The ARP table shows:

-------------------------------

Address                  HWtype  HWaddress           Flags Mask            Iface

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth11

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth0

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     in1

 

But the ARP entry is not stable; it disappears very quickly:

------------------------------------------------------------

Address                  HWtype  HWaddress           Flags Mask            Iface

10.250.11.254                   (incomplete)                              in1

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth0

10.250.11.254            ether   6c:3b:6b:e9:7c:e1   C                     eth11

 

The strange thing is:

Pinging via in1 never completes the ARP entry for the in1 interface (it always shows incomplete), while pinging via the other two interfaces sometimes fills in the MAC address for in1.

I ran tcpdump on in1 and saw it send the ARP request broadcast, but it never received an ARP reply from host .254.

 

ovs-appctl fdb/show vmbr1:

-----------------------------------------

port  VLAN  MAC                Age

    2     0  72:fc:b4:3a:e6:0f   78

    1     0  6c:3b:6b:e9:7c:e1   38

    1     0  6c:3b:6b:f5:96:e6    3

    1     0  88:43:e1:dd:9c:80    1

This table persists and does not change no matter which interface I ping from.

 

I have also tried other VF ports, but none of them seem to work with the OVS bridge.

The bridge does work if I assign the port to the eth0 or eth1 PF.

 

uname -a:

Linux pve3 4.4.62-1-pve #1 SMP PVE 4.4.62-88 (Thu, 18 May 2017 09:18:43 +0200) x86_64 GNU/Linux

 

/etc/network/interfaces:

----------------------------------

auto eth0

iface eth0 inet static

        address  10.250.11.103

        netmask  255.255.255.0

        gateway  10.250.11.254

 

auto eth11

iface eth11 inet static

        address  10.250.11.113

        netmask  255.255.255.0

 

allow-vmbr1 eth10

iface eth10 inet manual

        ovs_type OVSPort

        ovs_bridge vmbr1

 

auto vmbr1

iface vmbr1 inet manual

        ovs_type OVSBridge

        ovs_ports eth10 in1

 

allow-vmbr1 in1

iface in1 inet static

        address  10.250.11.34

        netmask  255.255.255.0

        ovs_type OVSIntPort

        ovs_bridge vmbr1

 

Here is ethtool info for eth10:

-------------------------------

# ethtool -i eth10

driver: ixgbevf

version: 2.12.1-k

firmware-version:

bus-info: 0000:05:12.0

supports-statistics: yes

supports-test: yes

supports-eeprom-access: no

supports-register-dump: yes

supports-priv-flags: no

 

#ethtool -k eth10

Features for eth10:

rx-checksumming: on

tx-checksumming: on

        tx-checksum-ipv4: off [fixed]

        tx-checksum-ip-generic: on

        tx-checksum-ipv6: off [fixed]

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: on

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

        tx-tcp-segmentation: on

        tx-tcp-ecn-segmentation: off [fixed]

        tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off [fixed]

rx-vlan-offload: on [fixed]

tx-vlan-offload: on [fixed]

ntuple-filters: off [fixed]

receive-hashing: off [fixed]

highdma: on [fixed]

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: off [fixed]

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: on [fixed]

hw-tc-offload: off [fixed]

-------------------------------

 

Does anyone have any advice?

RSS Support in IXGBEVF


I am using ixgbevf 3.3.2 in the guest and ixgbe 5.0.4 on the host. I noticed that even though there are two RX queues for each VF, pretty much all of the packets go to the first RX queue. The RSS-related features are also not supported. How do I turn on RSS support in ixgbevf? If it does not support RSS, what is the point of having multiple queues in ixgbevf?

 

root@ubuntu14:~# ethtool -n eth1

2 RX rings available

rxclass: Cannot get RX class rule count: Operation not supported

RX classification rule retrieval failed

root@ubuntu14:~# ethtool -x eth1

Cannot get RX flow hash indirection table: Operation not permitted
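This `Operation not permitted` is expected on the 82599: the RSS hash key and the 128-entry redirection table are shared between the PF and all VFs, so a VF cannot query or change them — they can only be inspected from the host on the PF. A sketch of what to run instead (the PF name `eth0` on the host is an assumption; adjust to your setup):

```shell
# On the host, the shared hash key and indirection table are visible:
ethtool -x eth0

# On the guest, per-queue counters show how traffic is actually being
# spread across the VF's RX queues:
ethtool -S eth1 | grep rx_queue
```

These are diagnostic commands against the live NIC, so the exact counter names may differ between driver versions.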

 

root@ubuntu14:~# ethtool -g eth1

Ring parameters for eth1:

Pre-set maximums:

RX: 4096

RX Mini: 0

RX Jumbo: 0

TX: 4096

Current hardware settings:

RX: 512

RX Mini: 0

RX Jumbo: 0

TX: 1024

 

 

root@ubuntu14:~# ethtool -k eth1

Features for eth1:

rx-checksumming: on

tx-checksumming: on

  tx-checksum-ipv4: on

  tx-checksum-ip-generic: off [fixed]

  tx-checksum-ipv6: on

  tx-checksum-fcoe-crc: off [fixed]

  tx-checksum-sctp: off [fixed]

scatter-gather: on

  tx-scatter-gather: on

  tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: on

  tx-tcp-segmentation: on

  tx-tcp-ecn-segmentation: off [fixed]

  tx-tcp6-segmentation: on

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: on

generic-receive-offload: on

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: off [fixed]

receive-hashing: off [fixed]

highdma: on [fixed]

rx-vlan-filter: on

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: off [fixed]

tx-ipip-segmentation: off [fixed]

tx-sit-segmentation: off [fixed]

tx-udp_tnl-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: on [fixed]
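For background on why everything can land on one queue: with RSS, the NIC computes a 32-bit Toeplitz hash over the flow's addresses and ports, then uses the low 7 bits to index a 128-entry redirection table (RETA) whose entries name RX queues; if the shared RETA does not spread entries across the VF's queues, one queue takes all traffic. A minimal sketch of the final lookup step — the hash value here is an assumed example, and the even-spread mapping is an assumption about how the table was programmed:

```shell
# RETA lookup sketch: the low 7 bits of the 32-bit RSS hash pick one of
# 128 redirection-table entries; each entry holds a queue number.
hash=$(( 0x51ccc178 ))   # example RSS hash (assumed value)
idx=$(( hash & 0x7F ))   # RETA index, 0..127
echo "$idx"              # prints 120
queue=$(( idx % 2 ))     # with 2 VF queues spread evenly across the RETA
echo "$queue"            # prints 0
```

With only a single flow (one hash, hence one RETA entry), all packets hitting one queue is normal; RSS only spreads traffic across queues when there are multiple distinct flows.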

Where can I get the Intel IES API user guide?


Or is this guide private, and not for every developer to use?
