Channel: Intel Communities : Discussion List - Wired Ethernet

Flow Director configuration not working


Hi:

I am using the i40e_4.4.0 driver for an XL710 network card. Currently, I am trying to loop back the connections.

 

For this purpose, I had to set the two ports to promiscuous mode. Then, using my application, I created custom UDP packets.

 

For the Rx queue assignment, I have set the Flow Director rules as:

 

ethtool -N ens1f0 flow-type udp4 dst-port 319 action 3 loc

ethtool -N ens1f1 flow-type udp4 dst-port 319 action 3 loc

 

Essentially, I want all packets with this dst-port to be forwarded to queue 3. I can also see that the rules have been inserted.
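
For reference, a hedged sketch of the same rule with an explicit location index, plus a way to read the installed rules back (interface and port taken from the commands above; the slot index 1 is purely illustrative, since loc expects a rule slot number):

ethtool -N ens1f0 flow-type udp4 dst-port 319 action 3 loc 1   # same rule, with an explicit rule slot
ethtool -n ens1f0                                              # list installed ntuple/Flow Director rules
ethtool -n ens1f0 rule 1                                       # inspect the rule in slot 1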

 

However, as seen in the attached picture, Flow Director does not match the incoming packets, so they are not forwarded to my desired queue.

 

proc-interrupts.png

 

Is this caused by the promiscuous mode that I set on the NIC ports?

 

I am not sure what is causing this issue. I have also verified that the incoming packets are destined for port 319.

 

I can provide further details if needed!

 

I would appreciate any help.

 

Thanks !


Re: Intel 82579V drops connection every hour on windows 10


I encountered the same problem.

Additional information:

Before 23.07.17 the system worked properly with Windows 10 64-bit Home.

In the adapter's event log I can find an update from 23.07.17 22:23.

Since this update, after system start Windows presents an Intel 82579LM instead of an Intel 82579V in Device Manager.

A click on the "Scan for hardware changes" button solves the problem for some time (I didn't test whether it recurs after longer use, e.g. one hour).

But after every system reboot or cold boot the same problem occurs again.

The latest driver is installed.

Cable and router have been checked.

 

Any suggestions?

XL710 Malicious Driver Detection Event Occurred


Hello, I've got some abnormal events on an XL710, as shown below.

 


kernel: i40e 0000:03:00.0: Malicious Driver Detection event 0x02 on TX queue 12 PF number 0x00 VF number 0x00

kernel: i40e 0000:03:00.0: TX driver issue detected, PF reset issued

 

As the messages above show, my XL710 NIC was reset, and then a link down/up occurred.

 

I want to know what "Malicious Driver Detection" means on the XL710,

and what condition triggers that situation?

 

Thank you

X710-4 NVM Tool Reports "Update not found"


Hi, I have several X710-DA4 adapters that I purchased at different times, and for some of them I was able to grab the latest firmware (5.05) and upgrade them. nvmupdate64e and ethtool show this on the good ones:

 

driver: i40e

version: 1.6.42

firmware-version: 5.05 0x8000289d 1.1568.0

bus-info: 0000:85:00.2

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [.........*]

Num Description                               Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) Ethernet Converged Network       5.05  1572 00:004 Up to date

    Adapter X710-4

02) Intel(R) I350 Gigabit Network Connection  1.99  1521 00:129 Update not

                                                                available

03) Intel(R) Ethernet Converged Network       5.05  1572 00:133 Up to date

    Adapter X710-4

 

On the other box, it will not let me upgrade:

 

driver: i40e

version: 2.0.23

firmware-version: 4.10 0x800011c5 0.0.0

bus-info: 0000:01:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

WARNING: To avoid damage to your device, do not stop the update or reboot or power off the system during this update.

Inventory in progress. Please wait [|.........]

 

Num Description                               Ver. DevId S:B    Status

=== ======================================== ===== ===== ====== ===============

01) Intel(R) Ethernet Converged Network       4.10  1572 00:001 Update not

    Adapter X710-4                                              available

02) Intel(R) I350 Gigabit Network Connection  1.99  1521 00:129 Update not

                                                                available

03) Intel(R) Ethernet Converged Network       4.10  1572 00:130 Update not

    Adapter X710-4                                              available

 

Does anyone know what's wrong?
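
One hedged check that sometimes explains an "Update not available" status (assuming a Linux host with pciutils): compare the PCI subsystem IDs of the good and bad adapters, since, as far as I understand, nvmupdate64e only offers images for the device/subsystem combinations listed in its nvmupdate.cfg, and OEM-branded X710 boards often fall outside that list.

lspci -nn | grep -i ethernet                  # list Ethernet controllers with [vendor:device] IDs
lspci -vnn -d 8086:1572 | grep -i subsystem   # show the subsystem (board) IDs of the X710 ports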

Intel X710-DA4 / VMware ESXi 6.5u1 - Malicious Driver Detection Event Occurred


Hello,

 

we're having problems with an Intel X710-DA4 retail card on VMware ESXi 6.5u1. After some time (usually minutes to hours) of sustained traffic on the NIC, we're seeing the following in vmkernel.log:

 

2017-08-11T12:26:02.554Z cpu18:66233)i40en: i40en_HandleMddEvent:6495: Malicious Driver Detection event 0x02 on TX queue 0 PF number 0x03 VF number 0x00

2017-08-11T12:26:02.554Z cpu18:66233)i40en: i40en_HandleMddEvent:6521: TX driver issue detected, PF reset issued

 

 

The network port in question is then apparently shut down, although the link stays up, and it does not pass any more network traffic. Only a reboot of the server will reset the network port and allow traffic to flow through it again.

The traffic pattern that leads to that issue usually is TCP traffic of >300MBit/s passing through a firewall virtual machine, entering on one virtual interface and exiting through another.

 

We are using ESXi 6.5u1 with the built-in i40en driver, as well as the latest NVM firmware version 5.05:

 

0000:82:00.0 8086:1572 8086:0004 vmkernel vmnic2

0000:82:00.1 8086:1572 8086:0000 vmkernel vmnic3

0000:82:00.2 8086:1572 8086:0000 vmkernel vmnic4

0000:82:00.3 8086:1572 8086:0000 vmkernel vmnic5

 

esxcli network nic get -n vmnic3

   Advertised Auto Negotiation: false

   Advertised Link Modes: 10000BaseSR/Full

   Auto Negotiation: false

   Cable Type: FIBRE

   Current Message Level: -1

   Driver Info:

         Bus Info: 0000:82:00:1

         Driver: i40en

         Firmware Version: 5.05 0x80002898 1.1568.0

         Version: 1.3.1

   Link Detected: true

   Link Status: Up

   Name: vmnic3

   (...)

 

Further details to help narrow down the problem (the esxcli commands we use to collect the driver details are sketched after this list):

  • We are not using SR-IOV.
  • The exact driver version is i40en 1.3.1-5vmw.650.1.26.5969303. We have observed the same issue with a previous driver version 1.3.1-1OEM.600.0.0.2768847.
  • The issue happens on multiple hosts, all with the same Intel X710-DA4 adapter.
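
The esxcli commands we use to collect these driver details on each host (a hedged sketch; standard esxcli, with the module name as used above):

esxcli software vib list | grep -i i40en         # exact driver VIB/build installed
esxcli system module parameters list -m i40en    # module parameters currently set
esxcli network nic list                          # map vmnic names to the X710 ports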

 

VMware Support has not been able to resolve the issue for us, saying they have been observing issues with all current X710 drivers and cannot point us in any specific direction - other than asking us to turn to Intel for support.

Honestly, at this point we're at our wits' end and do not know how to proceed any further - other than switching to a different manufacturer's network hardware altogether.

 

Thank you for any helpful advice.

What SFP transceivers are supported by i350-F2/i350-F4?


I have looked, but have been unable to locate the answer to this most basic of questions regarding these popular Intel gigabit fiber server NICs. The X710 10G fiber NIC supports several different Intel-branded multi-speed transceivers, but no information is available for its cousin.

XL710 NVM v5.0.5 - TX driver issue detected, PF reset issued & I40E_ERR_ADMIN_QUEUE_ERROR


I've been having issues with "TX driver issue detected, PF reset issued" and "fail to add cloud filter" for quite some time across 12 VMware ESXi v6.0 hosts. About once a week the result is a purple screen of death (PSOD).

 

I recently upgraded the XL710 to NVM Firmware v5.0.5 and the VMware ESXi XL710 driver to the latest v2.0.6 on 4 of the 12 and the issues persist.

 

# ethtool -i vmnic2

driver: i40e

version: 2.0.6

firmware-version: 5.05 0x800028a6 1.1568.0

bus-info: 0000:03:00.0

 

Q. In trying to identify the culprit, how do I identify the VM by filter_id?

Q. What is causing the "TX driver issue detected, PF reset issued"?

Q. How can I further troubleshoot to resolve the issue?
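
One hedged way to quantify how often each filter_id repeats (plain grep on the ESXi shell; log path as quoted below):

grep -c 'fail to add cloud filter' /var/log/vmkernel.log                        # total occurrences
grep -o 'filter_id = [0-9]*' /var/log/vmkernel.log | sort | uniq -c | sort -rn  # occurrences per filter_id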

 

Here is just a snippet of /var/log/vmkernel.log. The logs are filled with the same repeating error messages:

Note the frequency of the error messages (~50 per minute!)

 

2017-05-26T16:01:04.659Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.659Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:04.660Z cpu26:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:04.660Z cpu26:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:05.347Z cpu11:33354)<6>i40e 0000:05:00.2: TX driver issue detected, PF reset issued

2017-05-26T16:01:05.538Z cpu38:33367)<6>i40e 0000:05:00.2: i40e_open: Registering netqueue ops

2017-05-26T16:01:05.547Z cpu38:33367)IntrCookie: 1915: cookie 0x38 moduleID 4111 <i40e-vmnic4-TxRx-0> exclusive, flags 0x25

2017-05-26T16:01:05.556Z cpu38:33367)IntrCookie: 1915: cookie 0x39 moduleID 4111 <i40e-vmnic4-TxRx-1> exclusive, flags 0x25

2017-05-26T16:01:05.566Z cpu38:33367)IntrCookie: 1915: cookie 0x3a moduleID 4111 <i40e-vmnic4-TxRx-2> exclusive, flags 0x25

2017-05-26T16:01:05.575Z cpu38:33367)IntrCookie: 1915: cookie 0x3b moduleID 4111 <i40e-vmnic4-TxRx-3> exclusive, flags 0x25

2017-05-26T16:01:05.585Z cpu38:33367)IntrCookie: 1915: cookie 0x3c moduleID 4111 <i40e-vmnic4-TxRx-4> exclusive, flags 0x25

2017-05-26T16:01:05.594Z cpu38:33367)IntrCookie: 1915: cookie 0x3d moduleID 4111 <i40e-vmnic4-TxRx-5> exclusive, flags 0x25

2017-05-26T16:01:05.604Z cpu38:33367)IntrCookie: 1915: cookie 0x3e moduleID 4111 <i40e-vmnic4-TxRx-6> exclusive, flags 0x25

2017-05-26T16:01:05.613Z cpu38:33367)IntrCookie: 1915: cookie 0x3f moduleID 4111 <i40e-vmnic4-TxRx-7> exclusive, flags 0x25

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 1 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 2 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 3 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 4 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 5 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 6 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 7 not allocated

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Netqueue features supported: QueuePair   Latency Dynamic Pre-Emptible

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Supporting next generation VLANMACADDR filter

2017-05-26T16:01:09.659Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.659Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:09.660Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:09.660Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:14.659Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:14.659Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:14.660Z cpu21:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:14.660Z cpu21:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:19.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:19.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:19.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:19.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:24.659Z cpu24:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:24.659Z cpu24:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:24.661Z cpu24:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:24.661Z cpu24:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:29.659Z cpu22:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:29.659Z cpu22:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:29.660Z cpu22:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:29.660Z cpu22:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:34.659Z cpu23:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:34.659Z cpu23:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:34.660Z cpu23:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:34.660Z cpu23:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:39.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:39.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:39.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:39.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:44.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:44.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:44.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:44.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:49.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:49.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:49.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:49.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:01:54.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:54.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:54.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:54.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:01:59.659Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:59.659Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:01:59.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:01:59.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:04.659Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:04.659Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:04.660Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:04.660Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:09.659Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:09.659Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:09.660Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:09.660Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:14.661Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:14.661Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:14.662Z cpu25:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:14.662Z cpu25:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:19.659Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:19.659Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:19.660Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:19.660Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:24.660Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:24.660Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:24.661Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:24.661Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:29.661Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:29.661Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:29.662Z cpu30:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:29.662Z cpu30:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:34.659Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:34.659Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:34.660Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:34.660Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:39.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:39.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:39.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:39.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:44.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:44.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:44.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:44.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:49.661Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:49.661Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:49.662Z cpu37:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:49.662Z cpu37:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:02:54.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:54.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:54.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:54.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:02:59.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:59.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:02:59.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:02:59.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:04.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:04.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:04.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:04.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:09.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:09.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:09.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:09.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:14.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:14.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:14.662Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:14.662Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:19.661Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:19.661Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:19.663Z cpu38:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:19.663Z cpu38:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:24.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:24.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:24.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:24.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:29.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:29.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:29.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:29.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:34.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:34.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:34.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:34.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:39.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:39.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:39.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:39.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:44.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:44.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:44.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:44.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:49.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:49.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:49.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:49.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

2017-05-26T16:03:54.660Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:54.660Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:54.661Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:54.661Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33312, queue = 2

2017-05-26T16:03:59.662Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:59.662Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33056, queue = 1

2017-05-26T16:03:59.663Z cpu27:32886)<3>i40e 0000:05:00.1: fail to add cloud filter, err I40E_ERR_ADMIN_QUEUE_ERROR aq_err I40E_AQ_RC_EEXIST

2017-05-26T16:03:59.663Z cpu27:32886)<6>i40e 0000:05:00.1: Failed to add cloud filter, err_code = -53, last status = 13, filter_id = 33568, queue = 3

Intel X710-DA4 / VMware ESXi 6.5u1 - Malicious Driver Detection Event Occurred - can't even get a VM to boot


I can't even get a VM to boot when I use the i40en driver v1.3.1 under ESX v6.0u2. As soon as I power on a VM the system crashes with Malicious Driver Detection and all traffic stops.

 

I've had to fall back to using the i40e v2.0.6 (https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-I40E-206&productId=491).

 

Just as you said, with v1.3.1 any decent amount of network traffic can trigger this issue, which stops ALL network traffic and requires a reboot.

2017-08-11T23:59:52.735Z cpu38:33417)i40en: i40en_HandleMddEvent:6484: Malicious Driver Detection event 0x01 on TX queue 1 PF number 0x02 VF number 0x1e

2017-08-11T23:59:52.735Z cpu38:33417)i40en: i40en_HandleMddEvent:6510: TX driver issue detected, PF reset issued

2017-08-12T00:00:00.235Z cpu38:33417)i40en: i40en_HandleMddEvent:6484: Malicious Driver Detection event 0x02 on TX queue 0 PF number 0x02 VF number 0x00

2017-08-12T00:00:00.235Z cpu38:33417)i40en: i40en_HandleMddEvent:6510: TX driver issue detected, PF reset issued

 

 

With v2.0.6, traffic hiccups but keeps flowing as soon as the driver resets (>1 sec), which usually doesn't cause an issue. This occurs about 100 times a day across my 8-node VMware cluster.

I do have occasions where the "TX driver issue detected, PF reset issued" reset occurs continuously, and then it ends up causing an outage.

2017-05-26T16:01:05.347Z cpu11:33354)<6>i40e 0000:05:00.2: TX driver issue detected, PF reset issued

2017-05-26T16:01:05.538Z cpu38:33367)<6>i40e 0000:05:00.2: i40e_open: Registering netqueue ops

2017-05-26T16:01:05.547Z cpu38:33367)IntrCookie: 1915: cookie 0x38 moduleID 4111 <i40e-vmnic4-TxRx-0> exclusive, flags 0x25

2017-05-26T16:01:05.556Z cpu38:33367)IntrCookie: 1915: cookie 0x39 moduleID 4111 <i40e-vmnic4-TxRx-1> exclusive, flags 0x25

2017-05-26T16:01:05.566Z cpu38:33367)IntrCookie: 1915: cookie 0x3a moduleID 4111 <i40e-vmnic4-TxRx-2> exclusive, flags 0x25

2017-05-26T16:01:05.575Z cpu38:33367)IntrCookie: 1915: cookie 0x3b moduleID 4111 <i40e-vmnic4-TxRx-3> exclusive, flags 0x25

2017-05-26T16:01:05.585Z cpu38:33367)IntrCookie: 1915: cookie 0x3c moduleID 4111 <i40e-vmnic4-TxRx-4> exclusive, flags 0x25

2017-05-26T16:01:05.594Z cpu38:33367)IntrCookie: 1915: cookie 0x3d moduleID 4111 <i40e-vmnic4-TxRx-5> exclusive, flags 0x25

2017-05-26T16:01:05.604Z cpu38:33367)IntrCookie: 1915: cookie 0x3e moduleID 4111 <i40e-vmnic4-TxRx-6> exclusive, flags 0x25

2017-05-26T16:01:05.613Z cpu38:33367)IntrCookie: 1915: cookie 0x3f moduleID 4111 <i40e-vmnic4-TxRx-7> exclusive, flags 0x25

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 1 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 2 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 3 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 4 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 5 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 6 not allocated

2017-05-26T16:01:05.659Z cpu26:32886)<6>i40e 0000:05:00.2: Tx netqueue 7 not allocated

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Netqueue features supported: QueuePair  Latency Dynamic Pre-Emptible

2017-05-26T16:01:05.660Z cpu26:32886)<6>i40e 0000:05:00.2: Supporting next generation VLANMACADDR filter

 

 

Intel Support has not been helpful in resolving these issues. They suggested disabling TSO/LRO but that didn't make a noticeable difference.
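
For reference, the TSO/LRO workaround that was suggested corresponds roughly to the following host-wide settings (a hedged sketch of standard ESXi advanced options, not an Intel-confirmed fix; a host reboot is typically needed for them to fully apply):

esxcli system settings advanced set -o /Net/UseHwTSO -i 0            # disable hardware TSO for IPv4
esxcli system settings advanced set -o /Net/UseHwTSO6 -i 0           # disable hardware TSO for IPv6
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0  # disable LRO in the vmkernel TCP/IP stack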

 

Maybe one day Intel will take the VMware i40e/i40en driver issues seriously and attempt to fix them. I've been dealing with this for 2+ years with no end in sight.


Re: SR-IOV with IXGBE - VLAN packets getting spoofed - kernel 4.4.77, ixgbe 5.2.1


I have the same problem with kernel 4.4.77, ixgbe driver version 5.2.1 and ixgbevf 4.2.1. The OS is ALT Linux.

 

Spoof checking was disabled, but there is no ping between two VMs on the same VLAN.

dmesg on the host:

[162311.173561] ixgbe 0000:05:00.1 eth0: 2 Spoofed packets detected

[162313.177679] ixgbe 0000:05:00.1 eth0: 1 Spoofed packets detected

[162315.181748] ixgbe 0000:05:00.1 eth0: 2 Spoofed packets detected

[162333.211099] ixgbe 0000:05:00.1 eth0: 1 Spoofed packets detected

[162337.217085] ixgbe 0000:05:00.1 eth0: 1 Spoofed packets detected

[162339.220074] ixgbe 0000:05:00.1 eth0: 2 Spoofed packets detected

 

# ip li show eth0

7: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000

    link/ether a0:36:9f:25:80:5e brd ff:ff:ff:ff:ff:ff

    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 1 MAC da:55:a4:db:0f:d5, spoof checking off, link-state auto, trust off, query_rss off

    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 3 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 4 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 5 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 8 MAC 9a:1f:86:df:b1:d8, spoof checking off, link-state auto, trust off, query_rss off

    vf 9 MAC aa:b7:85:e1:1b:06, spoof checking off, link-state auto, trust off, query_rss off

    vf 10 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off

    vf 11 MAC 9a:1f:86:df:b1:d8, spoof checking off, link-state auto, trust off, query_rss off

    vf 12 MAC aa:b7:85:e1:1b:06, spoof checking off, link-state auto, trust off, query_rss off

    vf 13 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust off, query_rss off
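
A hedged sketch of the PF-side configuration that is usually suggested for this case (iproute2 on the host; eth0 and VF 1 are taken from the listing above, VLAN 100 is illustrative):

ip link set eth0 vf 1 vlan 100       # have the PF insert/strip VLAN 100 for VF 1, so the guest sends untagged frames
ip link set eth0 vf 1 spoofchk off   # anti-spoof check, already off in the listing above
ip link set eth0 vf 1 trust on       # trust mode, reported as "trust off" in the listing above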

Re: X710/X557-AT Malicious Driver Detection Event Occurred - can it be disabled in the driver?


Hi,

 

I am also encountering this issue (4-port card).

 

X710/X557-AT 10GBASE-T :  FW: 5.05

 

 

Any ideas? Can the feature be disabled in the driver?

X710 Flow director issues on Linux


Hello all,

 

I am not able to set up Flow Director to filter flow type ip4. It does not seem to have this issue when the flow type is specified as tcp.

This is on Linux (4.9.27), with a freshly downloaded driver (from the kernel). Below is the output of the driver version, the firmware, and the ntuple filter I want to apply.

No error is shown anywhere.

 

Thank you!

 

ethtool -i i40e1

driver: i40e

version: 2.0.23

firmware-version: 5.05 0x80002927 1.1313.0

expansion-rom-version:

bus-info: 0000:05:00.1

supports-statistics: yes

supports-test: yes

supports-eeprom-access: yes

supports-register-dump: yes

supports-priv-flags: yes

 

ethtool -k i40e1

Features for i40e1:

rx-checksumming: off

tx-checksumming: off

        tx-checksum-ipv4: off

        tx-checksum-ip-generic: off [fixed]

        tx-checksum-ipv6: off

        tx-checksum-fcoe-crc: off [fixed]

        tx-checksum-sctp: off

scatter-gather: on

        tx-scatter-gather: on

        tx-scatter-gather-fraglist: off [fixed]

tcp-segmentation-offload: off

        tx-tcp-segmentation: off

        tx-tcp-ecn-segmentation: off

        tx-tcp-mangleid-segmentation: off

        tx-tcp6-segmentation: off

udp-fragmentation-offload: off [fixed]

generic-segmentation-offload: off

generic-receive-offload: off

large-receive-offload: off [fixed]

rx-vlan-offload: on

tx-vlan-offload: on

ntuple-filters: on

receive-hashing: on

highdma: on

rx-vlan-filter: on [fixed]

vlan-challenged: off [fixed]

tx-lockless: off [fixed]

netns-local: off [fixed]

tx-gso-robust: off [fixed]

tx-fcoe-segmentation: off [fixed]

tx-gre-segmentation: on

tx-gre-csum-segmentation: off [fixed]

tx-ipxip4-segmentation: on

tx-ipxip6-segmentation: off [fixed]

tx-udp_tnl-segmentation: on

tx-udp_tnl-csum-segmentation: off [fixed]

tx-gso-partial: off [fixed]

tx-sctp-segmentation: off [fixed]

fcoe-mtu: off [fixed]

tx-nocache-copy: off

loopback: off [fixed]

rx-fcs: off [fixed]

rx-all: off [fixed]

tx-vlan-stag-hw-insert: off [fixed]

rx-vlan-stag-hw-parse: off [fixed]

rx-vlan-stag-filter: off [fixed]

l2-fwd-offload: off [fixed]

busy-poll: off [fixed]

hw-tc-offload: off [fixed]

 

i40e version:        2.0.23

 

#ethtool -U i40e1 flow-type ip4 action -1 loc 1
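
For comparison, a hedged sketch of a tcp4 rule of the kind that is accepted, plus the command to read the installed rules back (interface name from above; the port and slot index are illustrative):

ethtool -U i40e1 flow-type tcp4 dst-port 5001 action 2 loc 2   # illustrative tcp4 rule (the flow type reported to work)
ethtool -u i40e1                                               # list the installed ntuple rules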

XL710 Stops Receiving Packets After a Particular PPPoE Packet


Hi everyone,

 

We are using XL710 hardware on Linux 4.1.20 kernel.

 

We have an issue where the following behavior is noticed.

 

  • When we send simple loopback traffic to the XL710, it works fine.
  • When a specific PPPoE packet is sent from an external port to the XL710, we notice on Linux that the XL710 driver has no response. No interrupt is raised for this received packet.
  • After the above condition, the XL710 stops receiving any packets.

 

Here are the packet contents for a good packet and the failing packet.

 

I can send any number of good packets, and the XL710 is able to receive them.

After I send a single failing packet, the XL710 stops receiving packets. In fact, it does not even receive the good packets after this.
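
One hedged way to see where the receive path stops after the failing packet (standard Linux tools; eth0 is a placeholder for the XL710 interface):

ethtool -S eth0 | grep -Ei 'rx|drop|err'   # per-queue and error counters kept by the i40e driver
ip -s link show eth0                       # kernel-level RX packet and drop counters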

 

 

Good Packet:

 

Frame 2: 128 bytes on wire (1024 bits), 124 bytes captured (992 bits) on interface 0

    Interface id: 0 (\\.\pipe\view_capture_172-27-5-51_6_89_07182017_154149)

    Encapsulation type: Ethernet (1)

    Arrival Time: Jul 18, 2017 15:41:13.680293000 India Standard Time

    [Time shift for this packet: 0.000000000 seconds]

    Epoch Time: 1500372673.680293000 seconds

    [Time delta from previous captured frame: 0.496532000 seconds]

    [Time delta from previous displayed frame: 0.496532000 seconds]

    [Time since reference or first frame: 0.496532000 seconds]

    Frame Number: 2

    Frame Length: 128 bytes (1024 bits)

    Capture Length: 124 bytes (992 bits)

    [Frame is marked: False]

    [Frame is ignored: False]

    [Protocols in frame: eth:ethertype:mpls:pwethheuristic:pwethcw:eth:ethertype:vlan:ethertype:vlan:ethertype:pppoes:ppp:ipcp]

Ethernet II, Src: Performa_00:00:02 (00:10:94:00:00:02), Dst: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

    Destination: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        Address: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:02 (00:10:94:00:00:02)

        Address: Performa_00:00:02 (00:10:94:00:00:02)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: MPLS label switched packet (0x8847)

MultiProtocol Label Switching Header, Label: 1 (Router Alert), Exp: 0, S: 1, TTL: 64

    0000 0000 0000 0000 0001 .... .... .... = MPLS Label: Router Alert (1)

    .... .... .... .... .... 000. .... .... = MPLS Experimental Bits: 0

    .... .... .... .... .... ...1 .... .... = MPLS Bottom Of Label Stack: 1

    .... .... .... .... .... .... 0100 0000 = MPLS TTL: 64

PW Ethernet Control Word

    Sequence Number: 0

Ethernet II, Src: Performa_00:00:03 (00:10:94:00:00:03), Dst: Superlan_00:00:01 (00:00:01:00:00:01)

    Destination: Superlan_00:00:01 (00:00:01:00:00:01)

        Address: Superlan_00:00:01 (00:00:01:00:00:01)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:03 (00:10:94:00:00:03)

        Address: Performa_00:00:03 (00:10:94:00:00:03)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: PPPoE Session (0x8864)

PPP-over-Ethernet Session

    0001 .... = Version: 1

    .... 0001 = Type: 1

    Code: Session Data (0x00)

    Session ID: 0x0001

    Payload Length: 74

Point-to-Point Protocol

    Protocol: Internet Protocol Control Protocol (0x8021)

PPP IP Control Protocol

    Code: Configuration Request (1)

    Identifier: 2 (0x02)

    Length: 10

    Options: (6 bytes), IP address

        IP address: 0.0.0.0

            Type: IP address (3)

            Length: 6

            IP Address: 0.0.0.0

 

Failing Packet:

 

Frame 1: 128 bytes on wire (1024 bits), 124 bytes captured (992 bits) on interface 0

    Interface id: 0 (\\.\pipe\view_capture_172-27-5-51_6_89_07182017_154149)

    Encapsulation type: Ethernet (1)

    Arrival Time: Jul 18, 2017 15:41:13.183761000 India Standard Time

    [Time shift for this packet: 0.000000000 seconds]

    Epoch Time: 1500372673.183761000 seconds

    [Time delta from previous captured frame: 0.000000000 seconds]

    [Time delta from previous displayed frame: 0.000000000 seconds]

    [Time since reference or first frame: 0.000000000 seconds]

    Frame Number: 1

    Frame Length: 128 bytes (1024 bits)

    Capture Length: 124 bytes (992 bits)

    [Frame is marked: False]

    [Frame is ignored: False]

    [Protocols in frame: eth:ethertype:mpls:pwethheuristic:pwethcw:eth:ethertype:vlan:ethertype:vlan:ethertype:pppoes:ppp:ipcp]

Ethernet II, Src: Performa_00:00:02 (00:10:94:00:00:02), Dst: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

    Destination: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        Address: DeltaNet_f9:28:42 (00:30:ab:f9:28:42)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:02 (00:10:94:00:00:02)

        Address: Performa_00:00:02 (00:10:94:00:00:02)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: MPLS label switched packet (0x8847)

MultiProtocol Label Switching Header, Label: 1 (Router Alert), Exp: 0, S: 1, TTL: 64

    0000 0000 0000 0000 0001 .... .... .... = MPLS Label: Router Alert (1)

    .... .... .... .... .... 000. .... .... = MPLS Experimental Bits: 0

    .... .... .... .... .... ...1 .... .... = MPLS Bottom Of Label Stack: 1

    .... .... .... .... .... .... 0100 0000 = MPLS TTL: 64

PW Ethernet Control Word

    Sequence Number: 0

Ethernet II, Src: Performa_00:00:03 (00:10:94:00:00:03), Dst: Superlan_00:00:01 (00:00:01:00:00:01)

    Destination: Superlan_00:00:01 (00:00:01:00:00:01)

        Address: Superlan_00:00:01 (00:00:01:00:00:01)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Source: Performa_00:00:03 (00:10:94:00:00:03)

        Address: Performa_00:00:03 (00:10:94:00:00:03)

        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)

        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: 802.1Q Virtual LAN (0x8100)

802.1Q Virtual LAN, PRI: 0, CFI: 0, ID: 100

    000. .... .... .... = Priority: Best Effort (default) (0)

    ...0 .... .... .... = CFI: Canonical (0)

    .... 0000 0110 0100 = ID: 100

    Type: PPPoE Session (0x8864)

PPP-over-Ethernet Session

    0001 .... = Version: 1

    .... 0001 = Type: 1

    Code: Session Data (0x00)

    Session ID: 0x0001

    Payload Length: 74

Point-to-Point Protocol

    Protocol: Internet Protocol Control Protocol (0x8021)

PPP IP Control Protocol

    Code: Configuration Request (1)

    Identifier: 2 (0x02)

    Length: 10

    Options: (6 bytes), IP address

        IP address: 20.6.0.23

            Type: IP address (3)

            Length: 6

            IP Address: 20.6.0.23

 

Regards,

Sadashivan

Re: SR-IOV with IXGBE - VLAN packets getting spoofed - using 82599ES in Mirantis Fuel 9.2


Hi Sharon and Pratik,

 

I have the exact same need - I have an 82599ES-based SR-IOV setup using a Mirantis Fuel 9.2 OpenStack deployment where I am trying to pass VLAN-tagged packets out of a KVM VM instance with the VLAN kept intact.  Just like Pratik, I can work with a situation where either the tag applied by the virtual machine is retained with no tag added to it on egress or (preferably) a situation where the inner tag applied by the VM is preserved and an outer tag is added by the network adapter.

 

I have a full lab environment set up to test this and I will be compiling both drivers. However, I'm curious if there is a specific fix that addresses this issue and if you have any additional information on what behavior I should expect. Will the ixgbe 5.1 driver and the ixgbevf 4.1 driver allow both of the situations I described, i.e. either one tag or double tags leaving the VM?  Thank you for any clarification you can provide,

 

A.C.


Intel(R) Ethernet Connection (2) I219-V - what's with the (2)?


I have an Asus Z170 Pro Gaming motherboard with the I219-V LAN port, but even on a fresh Windows 7 or Windows 10 installation the port always shows up with the (2) in its name, even though it's the only LAN port available and it's the first time drivers have been installed for it (for that OS installation, after formatting the HDD). The driver version I'm currently using is 12.15.25.6 from earlier this year.

 

I'm pretty sure that at some stage I've seen it as simply Intel(R) Ethernet Connection I219-V, and I'd like to get back to that - but how? FWIW the hardware is working fine; I'm just fussy about my system configuration and would like this to be as originally intended.

I219-V.jpg

Enabling 802.1q tagging for I218-LM on Win10 Enterprise (aka Anniversary Edition)


Hi Folks,

 

I am working with VMware Workstation Pro on an x64 Windows 10 Enterprise machine (10.0.14393, aka Redstone 1 or the Anniversary Update), attempting to get output from different VMs onto different 802.1q-tagged VLANs on a trunk port. Unfortunately, I'm having little success.

 

Based on my reading, if I install the most current Intel driver package, 22.4.0.1, I should either be unable to create VLANs off the NIC (if I'm missing needed Windows 10 updates) or they should work. I seem to be in a middle ground of some kind: I can create the VLANs just fine, but I get untagged output from the physical port for everything assigned to any VLAN. I doubt it's a sniffer problem, since I can see 802.1q tags from a switch when capturing via the same external USB NIC.
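
In case it helps, a hedged sketch of creating the VLANs from PowerShell instead of the GUI (this assumes the Intel ANS/PROSet PowerShell module that ships with the 22.x package is installed; module, cmdlet and adapter names here are my assumptions and may differ on your system):

Import-Module IntelNetCmdlets                                                     # PROSet PowerShell module (assumption)
Get-IntelNetAdapter                                                               # list Intel adapters by name
Add-IntelNetVLAN -ParentName "Intel(R) Ethernet Connection I218-LM" -VLANID 10    # create VLAN 10 on the NIC (name is a placeholder)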

 

The Intel driver installer refers me to a list of required Windows 10 updates in a user guide that I'm not finding anywhere. Any guidance would be most appreciated.


Having WOL problems with an Intel 82567-LM3 adaptor on a Dell OptiPlex 760 motherboard using Windows 10 Pro 64


I've been trying to troubleshoot this issue now for months on and off and have gotten nowhere.

It seems that WOL is broken when using this hardware under Windows 10.

I've been using the Windows 10 driver that installs by default (version 12.155.22.6 dated 4/5/2016) and have not been able to find a different one (latest ProSet software doesn't contain a driver for this adaptor).

I've set up everything that various internet threads I've read have said to do; this includes:

BIOS:

Enable the adaptor (duh)

Enable Remote Wake Up

Disable Low Power Mode

Windows:

Disable Fast Startup

Make sure, under the Power Management tab, that all the boxes are checked.

There are no properties for "Waking" in Advanced Properties for the adaptor so I can't set them, nor is there a PME Event Property to check/change.
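
For what it's worth, a hedged way to check and set the same wake settings from an elevated prompt (built-in Windows tools; the device and adapter names are placeholders that must match Device Manager):

powercfg -devicequery wake_armed                                              # devices currently allowed to wake the system
powercfg -deviceenablewake "Intel(R) 82567LM-3 Gigabit Network Connection"    # arm the NIC for wake (exact name from Device Manager)
Get-NetAdapterPowerManagement -Name "Ethernet"                                # PowerShell view of the WoL/magic-packet settings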

---

I know that WOL is possible for this machine because I popped in a PCI NIC and got WOL to work on that NIC, but this integrated adaptor just won't comply.

Lots of threads out there say it can be done, but I can't seem to do it on my hardware.

Here's a capture from BOOTUTIL showing that WOL is supposed to work...

---

C:\Intel22.4.0.1\APPS\BootUtil\Winx64>BOOTUTILW64E.EXE

Intel(R) Ethernet Flash Firmware Utility
BootUtil version 1.6.40.1
Copyright (C) 2003-2017 Intel Corporation

Port Network Address Location Series  WOL Flash Firmware    Version
==== =============== ======== ======= === ================= =======
1    0024E819DC99    0:25.0   Gigabit YES FLASH Not Present

---

One suggestion I found was to update the boot agent (mine is 1.3.81) to the latest (1.5.something) and then set a flag in the boot agent setup that enables WOL.

However, I do not think I can update the boot agent, as BOOTUTIL says there is no flash firmware present to do so - correct me if I'm wrong.

Ideas on how to update the boot agent?

Aside from that any other ideas?

Problem installing the igb-5.3.3.10 driver


Hello all:

I am receiving the following error when I installed igb-5.3.5.10:

[ 2617.676535] igb 0000:04:00.0: The NVM Checksum Is Not Valid

[ 2617.702549] igb 0000:04:00.1: The NVM Checksum Is Not Valid

[ 2617.728535] igb 0000:04:00.2: The NVM Checksum Is Not Valid

[ 2617.754518] igb 0000:04:00.3: The NVM Checksum Is Not Valid

 

Linux version 3.10.0-327.el7.x86_64

CentOS Linux release 7.2.1511 (Core)
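
Before attempting any repair, a hedged first step is to capture the current NVM contents and driver details of each port for reference (standard ethtool; eth0 is a placeholder for each of the four igb ports):

ethtool -e eth0 > eth0-nvm-dump.txt   # dump the EEPROM/NVM contents of the port
ethtool -i eth0                       # record driver and firmware versions alongside the dump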

 

 

 

 

 

Please provide troubleshooting steps.

 

thanks.

 

x550-t1/2 Windows Server 2012 R2 driver


I was looking at picking up a few x550-T2 Ethernet adapters. Driver support for Windows Server 2012 R2 is confirmed on these pages:

Intel® Ethernet Controller X550-AT Product Specifications

Intel® Ethernet Converged Network Adapter X550 Product Brief

Intel® Ethernet Adapters Supported in Windows Server 2012 R2*

 

However, when trying to download the driver from here, I can't find it:

Downloads for Intel® Ethernet Converged Network Adapter X550-T2

 

Where is the Windows Server 2012 R2 driver?

 

I'd also like to confirm that the X550-T2 uses the X550-AT2 controller, as listed below, and not the X550-BT2. The BT2 physically supports 8 PCIe lanes versus only 4 lanes in the AT2, which is an issue if one doesn't have PCIe 3.0 slots. I just want to know the limitations before purchasing.

Intel® Ethernet Converged Network Adapter X550-T2 Product Specifications


UDP packets frozen with at least I219-V, I219-LM and I217-LM


Hello,

 

I am new to this community. I subscribed to ask a question here because I am really stuck.

My company makes electronic devices that communicate with a PC over 100 Mbit/s Ethernet using UDP (under Windows 7, 8.1 or 10).

It works perfectly with other brands of Ethernet adapters, but there is strange behaviour with the Intel NICs we tried (at least the I219-V, I219-LM and I217-LM).

 

Basically, our electronic devices can be considered as cameras capturing about 150 images per second.

We send a small command on one socket via UDP to tell it to capture an image, then we receive the compressed image as a set of UDP packets on another socket.

Each packet contains up to 1444 bytes of data (plus the headers from the different protocol layers, which in the end does not exceed the standard frame size, so there is no need for jumbo frames).

 

The problem is that, sometimes (this varies from as often as every 5 seconds to as rarely as only once within a 10-minute period), I wait forever (until the defined UDP timeout) for the data to arrive even though it has been sent (I can see it by sniffing the traffic from another computer connected to the same switch). I could believe that the UDP packets were lost by the Intel NIC, but they have not been lost: if I send a new image-capture command, the packets I was waiting for finally arrive, followed by the packets of the new image.

Why are those packets stuck?

Is there any advanced parameter that I could modify from the device driver's configuration window or from a Registry key?
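
In case it helps, a hedged sketch of listing and changing the driver's advanced parameters from PowerShell instead of the configuration window (built-in NetAdapter cmdlets on Windows 8/10; the adapter name and the "Interrupt Moderation" property are only examples and may be named differently for a given driver):

Get-NetAdapterAdvancedProperty -Name "Ethernet"   # list all advanced driver parameters and their current values
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"   # example change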

 

Note that I tried updating the Intel driver to recent versions (the latest is 22.4.0.1). It seems to perform a little better than some other versions I tried (like the one installed by default in Windows), but none of the versions I tried work perfectly.

 

Many thanks in advance if you can help me understand what is wrong (either on my side or in the driver). I have been struggling with this problem for months, and we currently have to equip customers whose laptops have an Intel NIC with USB adapters containing a NIC from another brand to bypass the problem.

 

Karl
